Chang, Victor; Muñoz, Víctor Méndez; Ramachandran, Muthu: Emerging applications of internet of things, big data, security, and complexity: special issue on collaboration opportunity for IoTBDS and COMPLEXIS. Computing (2020-04-13). DOI: 10.1007/s00607-020-00811-y

Today, Big Data, the IoT and Analytics are driving and differentiating top-performing organizations. The interplay of these three areas can be instrumental for the future development of research, complex systems and enterprises. The IoT is estimated to grow to billions of connected devices by 2020 [1]. This has huge implications for research, business and future human activity. Developing this vision is pivotal to supporting and fostering IoTBDS (iotbds.org). Sensors extract an unprecedented amount of data, which can be filtered and processed in the IoT by machine networks, automated analysis, tools and systems. If this process is well organized, it can become a Big Data abstraction layer over the captured data. In this way, complex information systems can obtain the benefits of collecting, processing and analyzing highly valuable data. This is one of the most important scopes of COMPLEXIS (complexis.org). Therefore, starting from the pillars of IoT, Analytics and Big Data, we can build complex information architectures using tools such as social media, predictive modeling, insight analysis and sentiment analysis. Eventually, we can build three additional layers: complex data, information and knowledge, and offer services related to these three value-added layers. Applications are widespread in fields such as social networking, financial services and biological research. This special issue (SI) call focused on contributions that explore and demonstrate related areas, approaches and recommendations for both IoTBDS and COMPLEXIS.
In particular, we sought research work and projects that offer solutions to the underlying problems, overcome challenges in different service layers and in system scalability, or provide recommendations that combine theory and practice. Pioneering methods, algorithms, system and software architectures, and other well-supported recommendations that improve organizational learning, together with the latest research and development, were welcome and served as the selection criteria. This ensures that the selected papers give insights into the data pillars of IoT, Analytics and Big Data, as well as collaboration opportunities in the field of complex information systems for collecting, processing and analyzing highly valuable data with high-level analysis. This SI aims to provide an active and valuable collaboration opportunity for scientists, practitioners and researchers in industry and academia. To meet this demand, we followed a double-blind and rigorous review process to select the five best papers. Selection was based on these criteria: suitability of the topics, quality of the presented work, research contributions, reviewer recommendations and excellent writing. The summary of each paper is as follows.

Li et al. [2] put forward a theory of reinforcement learning in financial strategy with a novel Deep Reinforcement Learning (DRL) model for perception and feature extraction, followed by decision-making in stock prediction and trading strategies. The use of RL concepts in finance is common because of their ability to deal with high data velocity and conduct transactions, for example in cryptocurrency trading bots or large players' trading systems. The novelty lies in using Deep Learning (DL) to perceive the current market environment and provide feature learning automatically.
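As a minimal, self-contained illustration of the generic deep Q-learning pattern this line of work builds on (a replay memory, a current Q-function, a frozen target Q-function, and periodic parameter synchronization), the toy sketch below uses a tabular Q-function and invented price data; all names and numbers are hypothetical and do not come from Li et al.'s implementation:

```python
import random
from collections import deque

# Toy sketch of the deep Q-learning pattern: replay memory + target network.
# A tabular Q-function stands in for the neural networks of a real DQN.
ACTIONS = ["stay", "sell", "buy"]

class ReplayMemory:
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)
    def push(self, transition):
        self.buffer.append(transition)
    def sample(self, batch_size):
        pool = list(self.buffer)
        return random.sample(pool, min(batch_size, len(pool)))

class TabularQ:
    """Stand-in for the Current/Target Q-networks: a dict over discretized states."""
    def __init__(self):
        self.q = {}
    def value(self, state, action):
        return self.q.get((state, action), 0.0)
    def best(self, state):
        return max(ACTIONS, key=lambda a: self.value(state, a))

def train(prices, episodes=20, gamma=0.9, lr=0.1, sync_every=10):
    current, target = TabularQ(), TabularQ()
    memory, step = ReplayMemory(), 0
    for _ in range(episodes):
        for t in range(len(prices) - 1):
            state = "up" if t and prices[t] > prices[t - 1] else "down"
            # epsilon-greedy action selection from the current Q-function
            action = random.choice(ACTIONS) if random.random() < 0.2 else current.best(state)
            move = prices[t + 1] - prices[t]
            # toy reward: profit of the chosen action under the next price move
            reward = move if action == "buy" else (-move if action == "sell" else 0.0)
            next_state = "up" if move > 0 else "down"
            memory.push((state, action, reward, next_state))
            for s, a, r, s2 in memory.sample(8):
                # target values come from the frozen target network
                td_target = r + gamma * max(target.value(s2, a2) for a2 in ACTIONS)
                current.q[(s, a)] = current.value(s, a) + lr * (td_target - current.value(s, a))
            step += 1
            if step % sync_every == 0:
                target.q = dict(current.q)  # periodic parameter synchronization
    return current

random.seed(0)
policy = train([10, 11, 12, 11, 13, 12, 14, 13, 15])
print(policy.best("up"))
```

The replay memory decorrelates consecutive transitions, while the periodically synced target network stabilizes the temporal-difference targets; these are the two ingredients the DQN family (DQN, DDQN, Dueling DDQN) shares.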
In particular, the authors describe a DRL architecture in two layers: the raw data is analyzed to discard extreme values and then processed into the input format of a Deep Q-Network (DQN). The DQN has a Target Network to obtain the target value and a Current Q-Network used to evaluate the current Q value. Training data is extracted randomly from a Replay Memory. As the environment changes, the network parameters are updated, so the Replay Memory also changes. When the optimal Q value is achieved, a decision is made ('stay', 'sell' or 'buy'). The model was implemented in Python 3.5 with the TensorFlow deep learning framework; the experiment is designed with ten random stocks from the 'Historical daily prices and volumes of all US stocks' dataset on Kaggle (NYSE, NASDAQ, NYSE MKT). Each stock is split into a training set and a testing set and then fed into three DRL methods: Deep Q-Network (DQN), Double Deep Q-Network (DDQN) and Dueling Double Deep Q-Network (Dueling DDQN). The paper presents loss and reward charts to analyze the results of the three methods, and an alternative approach using the AdaBoost algorithm is also compared and analyzed.

The work of David Ralph et al. [3] is a data-mining paper with applications in the supply-chain field. The main contribution is a new Transitive Semantic Relationships (TSR) model, which infers potential relationships between users and items from labeled big data. The evaluation demonstrates a high level of performance even with extremely few labels, in challenging cold-start and sparse-labelling settings. This improves on existing recommendation systems, allowing a recommendation to be made without the direct intention of the user, and now even with little existing user-behavior or item data. For demonstration purposes, the work examines the case of the supply chain on the Isle of Wight, showing result analyses for the "Subset Labelling" and "Extremely Sparse Labelling" examples.
The proposed TSR model is able to infer non-obvious relationships between data items, based on an apparent transitivity property of many types of data: items that are described similarly are likely to have similar relationships to other data items.

This SI also offers more generalist work. Tariq Alsboui et al. [4] present a distributed-intelligence approach that could be useful in different IoT scenarios. The authors propose a Mobile-Agent Distributed Intelligence Tangle-Based approach (MADIT), where the IOTA Tangle is a distributed ledger platform for data exchange in large P2P networks, and MADIT enables distributed intelligence at two levels. The authors show how to take advantage of the IOTA platform from the blockchain field for use in the IoT. This simplifies generic technical issues of scalability and data transfer, allowing a focus on the abstraction layer of the problem domain. The main novelty of the approach is a high-level model for handling transactions, which in the experimental results promises speed and scalability improvements with mobile agents. A discussion section describes existing distributed-intelligence solutions in the IoT. The fully decentralized nature of the mobile-agent model adds value and suits a range of practical near-line IoT scenarios.

The work of Nandhini [5] proposes a novel semi-fragile watermarking technique using the Integer Wavelet Transform (IWT) and Discrete Cosine Transform (DCT) for tamper detection and recovery, to enhance enterprise multimedia security. The technique uses a generated recovery watermark to form a recovery tag, which is sent along with the watermarked image to the receiver. At the receiver side, the proposed tamper detection technique verifies authenticity and identifies attacks on the watermarked image.
If the manipulations are identified as malicious, the tampered parts of the received image are recovered using the proposed tamper recovery technique. The technique has been tested extensively and produces a better PSNR (peak signal-to-noise ratio) for various watermarked images.

The work by Standley et al. [6] presents a novel technique that uses the historic war datasets of the Correlates of War (COW) Project to derive war probabilities. The COW data reveal that combat fatalities follow a log-gamma or log-normal probability distribution, depending on whether a state's strategy is offensive or defensive. Their findings suggest that minimum-risk decision thresholds be derived from defensive statistics. The method of computing the thresholds is illustrated using war statistics from NATO countries. It implies that if a nation-state were to detect an imminent attack, it might consider proactively attacking first; however, the conclusion from the data is that it is almost always better to defend without a preemptive response.

We are honored to have completed this SI before the deadline, with high-quality selected papers and a rigorous review process. We are grateful to the Editor-in-Chief and Springer for providing us the opportunity to serve the community. We will host IoTBDS 2020 and COMPLEXIS 2020 via online streaming (due to COVID-19) between 7 and 9 May 2020, with prestigious keynote speakers and other supporting journals. We will be delighted to serve the community again and to maintain a high-quality reputation in the near future.
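As a toy illustration of how a minimum-risk decision threshold can be read off a fitted log-normal distribution (one of the two distributions the Standley et al. summary mentions), the sketch below fits a log-normal to invented fatality figures and returns the level not exceeded with probability p; the numbers and the function name are hypothetical and this is not the authors' method:

```python
from math import exp, log
from statistics import NormalDist

def lognormal_threshold(log_fatality_samples, p=0.95):
    """Fatality level not exceeded with probability p under a fitted log-normal.

    If log-fatalities are normal with mean mu and std sigma, the log-normal
    quantile at confidence p is exp(mu + sigma * z_p).
    """
    n = len(log_fatality_samples)
    mu = sum(log_fatality_samples) / n
    var = sum((x - mu) ** 2 for x in log_fatality_samples) / (n - 1)
    z = NormalDist().inv_cdf(p)  # standard normal quantile z_p
    return exp(mu + var ** 0.5 * z)

# Invented illustrative fatality counts (not CoW data), log-transformed:
samples = [log(x) for x in [120, 300, 80, 450, 210, 150]]
print(round(lognormal_threshold(samples)))
```

Raising p moves the threshold up the right tail, which is why fitting to the heavier (defensive) statistics yields the more conservative, minimum-risk decision rule.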
References

1. Green IoT: an investigation on energy saving practices for 2020 and beyond
2. Application of deep reinforcement learning in stock trading strategies and stock forecasting
3. Recommendations from cold starts in big data
4. Enabling distributed intelligence for the internet of things with IOTA and mobile agents
5. A novel semi-fragile watermarking technique for tamper detection and recovery using IWT and DCT
6. Fusing attack detection and severity probabilities: a method for computing minimum-risk war decisions

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.