key: cord-0056183-xchfua8u
authors: Carayannis, Elias G.; Christodoulou, Klitos; Christodoulou, Panayiotis; Chatzichristofis, Savvas A.; Zinonos, Zinon
title: Known Unknowns in an Era of Technological and Viral Disruptions—Implications for Theory, Policy, and Practice
date: 2021-02-10
journal: J Knowl Econ
DOI: 10.1007/s13132-020-00719-0
sha: 2cf40f790c9598a9cd53825d6880c8d4b5dda5b2
doc_id: 56183
cord_uid: xchfua8u

The word "technology" is composed of "Techne" and "Logos," which refer to the artistic/creative and the logical/scientific aspects of its dualism. Inherent in this Promethean concept lie the Schumpeterian notion of creative destruction and also the promise and potential of humanity's better tomorrows. We live in an era of artificial intelligence-driven as well as viral disruptions that challenge the mind as well as the body. At the same time, the environmental impact of our pursuit of prosperity at any cost triggers floods of displaced people and viral pandemics, undermining the standard of living and, more importantly, the foundations of trust in institutions and in a better tomorrow, feeding populist movements and autocratic trends in democracies as well as emboldening dictators. This work discusses the concepts of Risk Management 5.0, Industry 4.0, Industry 5.0, Society 5.0, Digital Transformation, Blockchain, and the role of AI via Internet of Things architectures that could enable "smarter as well as more humane solutions to our challenges." In this paper, we explore ways and means through which the environment and democracy could be better served by leveraging the creative potential of the human mind and the opportunities created by innovation and entrepreneurship enabled by

In pre-historic times, culture did not have any tools to deal with Risk Management; people's fate depended on their Gods' hands (Costa 2018; Covello and Mumpower 1985). Even now, we do not really know whether their outcomes were effective, but it was the best they could do to deal with the uncertainties affecting their aims. This type of pre-historic Risk Management endured until the mid-seventeenth century, when Risk Management 1.0 was born. During this period, Blaise Pascal and Pierre de Fermat developed Probability Theory (Ore 1960) and altered the way we think and act with regard to uncertainty. Between those prehistoric days and the 1940s, various approaches and processes were developed that gave people the opportunity to make predictions and forecasts and even to take decisions based on them, something that had not been possible before. During that epoch, risks that could lead to business losses came to be recognized as dangers and threats to business. The 1940s saw the actual emergence of Risk Management 1.0 (Stulz 1996). After the World Wars, we witnessed the influential era of Risk Management 2.0; during this period, the first scientific studies were published, theories were written, and concepts were defined (Dionne 2013). With the evolution of sciences such as mathematics and engineering, as well as other emerging technologies, the world changed, and various risks once considered relevant became irrelevant. From the World Wars until the 1990s, practices such as common sense and trial and error were replaced by quantitative analysis, while regulatory mechanisms were developed by institutions and governmental authorities (Young 1981).
In this period, several researchers were awarded Nobel Prizes for their studies in Risk Management (Michel-Kerjan 2008; Alexander 2009). This epoch is considered Risk Management 3.0 (Dionne 2013). At the beginning of the new millennium, and perhaps also owing to the end of the Cold War, modern globalization began alongside a huge technological explosion (De Vries 2010). These two developments led to the growth of Project Risk Management and to the introduction of Enterprise Risk Management (Lam 2014), which aimed to help firms reach their goals. In this period, financial institutions were forced to reinvent their lines of business, new approaches and procedures were developed, and stakeholders and customers demanded more tools. At the same time, cyber-attacks and ecological disasters occurred, tech giants appeared suddenly (Hendershott 2004), and individuals recognized that both negative and positive risks are part of daily life and must be managed; otherwise, strategies will not go as planned. This period saw the introduction of Risk Management 4.0, a more mature version of Risk Management that lasts to this day. Risk management as we know it today consists of the following steps (Kukreja 2020; Kloosterman 2014):

1. Identify the circumstances: it is important to identify the circumstances in which a risk appears before it can be properly assessed and mitigated.
2. Risk identification: the process of recognizing the particular risks associated with the threats already identified.
3. Risk assessment or risk evaluation: this step comprises understanding the risks identified and determining how hazardous each of them is.
4. Risk control: once a risk is assessed, it has to be controlled; various control measures are implemented and documented.
5. Monitor and review: the final step encompasses understanding the impact of the control mechanisms on the threat and on the risk it poses.

Presently, something is happening once again, and a new approach to Risk Management is emerging (Costa 2018; Dionne 2013). The way people look at the future will be reshaped, so what follows? Firms and organizations will comprehend the real value of Risk Management; they will start using it to add extra value to their products for the benefit of their customers. Business processes will be reshaped, and people will no longer merely administer risks. Risk Management processes will be driven by risks, and every single organizational choice will aim to minimize threats and maximize opportunities. Organizations will become agile and viable in order to reach their goals. New methods of exchanging daily information will emerge and become more widespread. Data will be processed quickly, and AI tools will be utilized to forecast future events and aid decision-makers (Costa 2018; Dionne 2013; Aziz and Dowling 2019). Nowadays, regulations also affect decisions (Araz et al. 2020). Everyone is more loyal to laws and contracts, since people are unwilling to tolerate behaviors they do not agree with or to expose themselves to risks. In the future, sustainability will be more present and obligatory. Currently, as an outcome of the Covid-19 pandemic (Gasmi et al. 2020), everything worldwide is changing. All these changes and effects are revolutionizing Risk Management, and there is no way to escape from this.
People face two choices: passively wait and become outdated, or make it happen and get there first. Roll the dice (Costa 2018). Welcome to Risk Management 5.0.

The concept of Risk Management 5.0 outlined in the previous section can be implemented within Industry 4.0 so as to develop new standards that could lead to Society 5.0 and Industry 5.0. As described in Manrique (2019), in Industry 4.0 the generation of knowledge and intelligence is done by humans with the help of technology; in Society 5.0 the generation of knowledge will come from machines through Artificial Intelligence at the service of people.

The new digital industrial technology known as "Industry 4.0" made its first appearance in 2011 in a German strategic initiative, as part of the country's high-tech program, and was later defined by Kagermann et al. (2011) and Henning (2013) as "a new type of industrialization." Presently there is no consensus within the literature on how to correctly define Industry 4.0 (Piccarozzi et al. 2018; Hofmann and Rüsch 2017), even though its execution and operation are at the center of academic, political, and governmental interest. Germany, through its "High-Tech Strategy 2020," was among the first countries to promote Industry 4.0, granting access to millions of euros of funding for the development of highly innovative and emerging technologies in the production field. Subsequently, other administrations began to promote plans and actions at the national level to foster Industry 4.0 adoption by firms (Liao et al. 2017). Examples include, but are not limited to, the Advanced Manufacturing Partnership (AMP) promoted by the US government in 2011, the "Nouvelle France Industrielle" program introduced by France in 2013, the long-term framework outlined by the UK government specifically for its manufacturing sector, namely "The Future of Manufacturing," and the "Piano Industria 4.0" designed exclusively for Italian companies investing in digital transformation.

Mechanization, electricity, and IT were the results of the first three industrial revolutions, while Industry 4.0 introduces the Internet of Things and Services into the manufacturing ecosystem. Industry 4.0 promises high economic impact for both businesses and consumers, increased operational effectiveness and efficiency, as well as new business models, processes, services, and products (Hermann et al. 2016). One of the core challenges in describing Industry 4.0 originates from the various labels (Industrial Internet, Internet of Things, smart factories, Human-Machine Cooperation, smart manufacturing) currently being used to indicate similar, and in some cases identical, phenomena dealing with the application of digital and interconnected technologies to the manufacturing domain. Burritt and Christ (2016) describe Industry 4.0 as an umbrella term for a group of connected technological advances used to increase firms' digitalization. Hermann et al. (2016) identified four elements of Industry 4.0: cyber-physical systems (CPS), the Internet of Things (IoT), the Internet of Services (IoS), and the smart factory. CPS are systems that bring the physical and the virtual worlds together (Hofmann and Rüsch 2017) by integrating computation, networking, and physical processes. In a manufacturing setting, CPS involve smart machines, storage systems, and production facilities capable of autonomously exchanging data, initiating actions, and controlling each other independently (Kagermann et al. 2011).
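The following toy sketch illustrates the CPS idea just described; it is only a rough illustration, written in Python, with invented device names and a deliberately naive decision rule rather than any real industrial protocol. A machine monitors its own sensor readings and autonomously initiates an action, here flagging maintenance when vibration drifts upward.

```python
# Toy cyber-physical component: collects its own sensor data ("physical" side)
# and autonomously decides on an action ("cyber" side). Names and thresholds
# are invented for illustration only.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Machine:
    name: str
    vibration_history: list = field(default_factory=list)

    def ingest(self, vibration_mm_s: float) -> None:
        # Collect a real-time reading from the physical process.
        self.vibration_history.append(vibration_mm_s)

    def decide(self) -> str:
        # A naive rule stands in for the scheduling/optimization logic
        # a real CPS would run: rising vibration triggers maintenance.
        if len(self.vibration_history) >= 3 and mean(self.vibration_history[-3:]) > 7.0:
            return f"{self.name}: schedule maintenance, hold new production orders"
        return f"{self.name}: continue production"

press = Machine("hydraulic-press-04")
for reading in [4.2, 5.1, 7.8, 8.3, 8.9]:   # simulated vibration readings (mm/s)
    press.ingest(reading)
    print(press.decide())
```

In a real smart factory, the decision logic would be far richer, and machines would exchange such data with each other and with planning systems over the IoT and IoS layers discussed below.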
Machines and other devices used in production lines can collect real-time data and use them to make decisions such as prioritizing production orders, optimizing tasks, and scheduling maintenance (Lee et al. 2014). Their application in manufacturing processes provides a whole new level of control, transparency, efficiency, and flexibility. The Internet of Things (IoT), or Internet of Everything, first presented by Ashton (2009) as the phenomenon of bringing new technologies (e.g., the Internet) to everyday objects, also participates in the processes of Industry 4.0. Nowadays the term IoT has a wider sense and denotes a network of things, so-called "internet-connected constituents," coupled to each other through wireless sensors, actuators, and mobile phones (Nitti et al. 2017), which can provide information about their environment, context, location, etc. (Ng and Wakenshaw 2017). Based on this definition, physical objects become "intelligent objects" that are able to communicate thanks to the Internet. Similarly to the IoT, the Internet of Services (IoS) permits service vendors to offer their services via the Internet and, consequently, add value to their offerings. New Web technologies, for example service-oriented architecture (SOA), software as a service (SaaS), or business process outsourcing (BPO), enabled the rise of new business models in which one party can be granted temporary access to the resources of another party in order to execute a prescribed task. Human workforce and skills, technical systems, information, and consumables are some of the resources that can take part in the process (Hofmann and Rüsch 2017); presently, firms are shifting from offering products to offering integrated product-services, a phenomenon that the literature calls "servitization" (Rymaszewska et al. 2017).

Combining the IoT and the IoS in a CPS enables the birth of "smart factories" (Jiang 2018). A smart factory can be defined as a factory where CPS communicate over the IoT and IoS, assisting people and machines in the execution of specific tasks (Hermann et al. 2016). In smart factories, human beings, machines, and other resources communicate with each other as naturally as in a social network (Jiang 2018). By equipping manufacturing with sensors, actuators, and autonomous systems, Industry 4.0 helps factories become more intelligent, flexible, and dynamic (Kamble et al. 2017). Xu et al. (2018) identified various technologies that go beyond the four components above but can also be considered part of Industry 4.0, such as cloud computing (Velte et al. 2009), additive manufacturing, wearables, big data, augmented reality applications, wireless networks, and smart cities. Smart cities are cities that connect the physical, IT, social, and business infrastructures to leverage the intelligence of the city's community (Hollands 2008) and to support added-value services for citizens. Interdisciplinarity is another element of complexity in Industry 4.0, since it touches fields such as engineering, computer science, information technology, manufacturing, human resources, environmental science, and consumer behavior. Piccarozzi et al. (2018) note in their literature review that Industry 4.0 is a cross-cutting theme of many disciplines that influence each other.
Even today, it is hard to find a research paper dedicated solely to the managerial and business aspects of Industry 4.0, because in every domain the business aspect blends with aspects pertaining to engineering, ICT, or sustainability. Societies and businesses are becoming increasingly conscious of the capabilities derived from applying new emerging technologies; this allows them to gain long-term competitiveness, adapt more dynamically to changing consumer and ecological requirements, optimize decision-making, increase productivity and effectiveness, and, finally, create valuable new opportunities by introducing new services. However, several reports have identified factors that can either foster or hinder the adoption of Industry 4.0 by companies. Müller et al. (2018) identified three categories of opportunities that serve as antecedents: strategic opportunities (new business models, new value offers for greater competitiveness), operational opportunities (efficiency, decreased costs, better quality, improved speed and flexibility, load balancing and stock reduction), and environment and people opportunities (reduction of monotonous work, age-appropriate workstations, reduction of ecological impact). On the other hand, Müller et al. (2018) found three core barriers: future viability and competitiveness (existing business models endangered, loss of flexibility, standardization, transparency); organizational and production fit (high operational effort with regard to costs and standardization); and employee qualification and acceptance (employees' anxiety, fears and concerns, lack of know-how).

One of the main challenges in Industry 4.0 is how to deal with the role of human resources in the digital revolution (Horváth and Szabó 2019). On the one hand, new technology could substantially mitigate labor shortages, reduce human workload, and allow organizations to allocate human resources to higher value-added activities. On the other hand, the digital revolution requires dynamic competencies and the acquisition of knowledge and expertise from outside the organization, which needs to be taken into account when considering human resources within Industry 4.0 (Sivathanu and Pillai 2018; Hecklau et al. 2016). To conclude, among the many driving forces of sustainable practices, Industry 4.0 technologies are becoming more and more important, since they can facilitate the growth of green manufacturing procedures, green supply chain management, and green products (Green et al. 2012).

Industry 4.0 is considered to be the digital transformation of manufacturing and related industries; thus, in this section we present the general concept of Digital Transformation. In this era, Digital Transformation (DT) has received growing attention from both academics and practitioners; however, despite various scholars having addressed the topic, a unified definition of DT is still missing (Morakanyane et al. 2017). One of the core reasons for this is that understanding DT requires an interdisciplinary approach (Hausberg et al. 2018). Hausberg et al. (2018) emphasize that although many works focus on the technological aspects of digital transformation, the "human" component is fundamental as well. On the one hand, there are studies that consider technology the main driver of this "radical change" (Morakanyane et al. 2020);
on the other hand, others describe digital technologies as an enabling factor for a new organizational shift (Nambisan et al. 2019; Morakanyane et al. 2020) that impacts society and people, as well as knowledge management (Urbinati et al. 2020). In their in-depth research on DT, Hausberg et al. (2018) found that big data is the research stream with the most contributions, while AI and machine learning are the technologies with the most substantial presence. Moreover, among the various streams identified in Hausberg et al. (2018), one was "society," which deals with the role of digital technologies in the following matters: society and communication, policy and international affairs, and philosophy and ethics (Colli et al. 2020; Vial 2019). Thus, the society stream, particularly relevant to this study, is characterized by a multidisciplinary approach that considers DT from a societal point of view, with a focus on the opportunities, and also the risks, connected to big data adoption. In the following sub-section, we outline how Industry 4.0 can be extended into Society 5.0, which could in turn lead to Industry 5.0.

Under the push of technological development, Industry 4.0 extends its effects to society as a whole, considered as a larger ecosystem. On the one hand, digitization consists of a series of measures aimed at optimizing production processes within industries (supply chain management, manufacturing, production in smart factories, etc.); on the other hand, digital transformation entails a restructuring of socio-cultural patterns centered on the most diverse technological innovations (Nambisan et al. 2019). At the root of this expansion lies the idea of Society 5.0 (or the "Super Smart Society"). This perspective originated in Japan and was outlined as the main concept in the "Fifth Science and Technology Basic Plan" introduced by the Japanese Council for Science, Technology and Innovation. Society 5.0 was adopted as an overall development strategy for Japan and was restated in "The Investment for the Future Strategy 2017: Reform for Achieving Society 5.0." Fundamentally, Society 5.0 provides a common societal infrastructure for prosperity based on advanced service platforms. Society 5.0 and Industry 4.0 overlap to a certain degree, but while Industry 4.0 emphasizes production, Society 5.0 aims to place human beings at the center of innovation, exploiting the impact of technology and the results of Industry 4.0 through technological integration to improve quality of life, social responsibility, and sustainability (Onday 2019). This ground-breaking perspective is not restricted to Japan; it shares common points with the objectives of the United Nations Development Programme's Sustainable Development Goals (UNDP SDGs). In addition, unlike Industry 4.0, Society 5.0 is not confined to the manufacturing industry; it aims to solve social problems with the aid of the integration of physical and virtual spaces. In general, Society 5.0 is a society in which advanced IT technologies (IoT, Artificial Intelligence, Blockchain, etc.) are actively used in people's lives, industry, health care, and other domains of activity, not for progress as such but for the benefit and convenience of each person (Fukuyama 2018).
In the near-future era of Society 5.0, cybernetics will encounter "Evergetics," the emerging neoclassical science of bilateral management approaches in society. Evergetics derives from the Greek word Eυεργέτης, meaning "benefactor"; already in its etymological origin we can discern an emphasis on "good actions" in management processes and decision-making. The author of this neologism described it as "... the science of management processes organization in a developing society, each member of which is interested in augmenting his cultural heritage he is producing, which entails a raise of cultural potential of the society as a whole and, as a consequence, an increase in the proportion of moral and ethical managerial decisions and corresponding to them benevolent actions in public life" (Vittikh 2014, 2015). It is clear that, to ensure that the implementation of Society 5.0 is not just a political-ideological theory, it is necessary to integrate numerous dimensions, such as innovation policies (the government's point of view), entrepreneurial attitude (the society perspective), and entrepreneurial skills (the civil society and institutional perspectives).

A significant concept that links the social, cultural, and economic aspects is that of Industry 5.0, which can be considered the answer to the call for a renewed human-centric industrial archetype, starting from the (cultural, managerial, organizational, philosophical, and structural) restructuring of an industry's production processes. The importance of this new perspective stems from the fact that Industry 4.0 is still at an early stage of development and that its main achievements can be expected no earlier than 2020-2025. Moreover, the responsible/irresponsible and ethical/unethical normative decisions and policies that shape Industry 4.0's global governance do not take into consideration the real impact of such issues. That is why the dialogues on Industry 4.0 and society have tended to focus either on a dystopian, fearful future shaped by the IoT in which AI-equipped robots ("CoBots") replace humans, or on a future that will be uniformly benevolent and prosperous for all. Both foresights subscribe to technological determinism (evolution in organizational behavior and structures, acceptance of robots in the workplace, work ethics, discrimination against robots or people, privacy and trust in a human-robot collaborative work environment, etc.) (Demir et al. 2019), and even if the rise of Industry 4.0 and its societal impacts are predetermined, they do not yet acknowledge the need to broaden the understanding of Industry 4.0 and of its possible futures in society (Özdemir and Hekim 2018). Skobelev and Borovik (2017) observed that, when the main attention in an industrial revolution is devoted to the technological aspects of its implementation rather than to the human being with his mental and creative abilities, the discussion agenda becomes dominated by the negative labor-market changes caused by Industry 4.0. This is why the following question arises: "How can people and society benefit from Industry 4.0?" It is of great importance that scientists and engineers strengthen their efforts in innovation management with a new mentality guided by design thinking and aligned with the idea of "absorptive capacity" (Badding et al. 2014; Matlay et al. 2009). Design thinking might be the key to the wicked problem of innovation.
Stacey and Lauche (2005) explain the wicked problem as a complex and open-ended challenge and propose design thinking as a solution. Numerous authors have highlighted the importance of design thinking for modifying the innovation management structure to construct an ecosystem designed specifically for the IoT and Industry 5.0 era, with a focus on human/user centeredness (Fauquex et al. 2015; Taratukhin and America 2018; Buchmann et al. 2018). Similarly, Özdemir and Hekim (2018) presented the role of design thinking in Industry 5.0, which is more human-centric than Industry 4.0. Design thinking supports the connection of innovation and technological policies with the corporate strategy of a business; thus, it creates a suitable ecosystem for the IoT and Industry 5.0. In addition, the Organisation for Economic Co-operation and Development first introduced the concept of the "implement-ability" of innovation. Innovation should create value for its users; if an innovation does not generate any value or bring any change to users' lives, then it cannot be considered a true innovation. The concept of the implement-ability of innovation puts the customer/user at the center of the entire innovation management process. Since the employee, in a highly technologized and machine-assisted context, would be "the user," finding the right structure that allows humans and machines to interact by merging various human characteristics into the original design at different levels is challenging. A way to achieve this is by identifying the prevailing roles of each party. On the one hand, human beings are better at interaction, intuition, and complex decision-making. On the other hand, machines surpass humans at pattern identification and recognition, data processing, and data search. Taking the above into consideration, it should be noted that future machines and smart devices will improve human life and work (Ellitan and Anatan 2019; Stojkoska and Trivodaliev 2017).

Artificial intelligence is considered to be one of the core pillars of Industry 4.0 and, eventually, Industry 5.0. The question that arises is how Artificial Intelligence can aid innovation. In general, an idea or an invention cannot be labeled innovative merely by being creative or original. Innovation is mainly a corporate, economic, or business event, manifested in technological and organizational terms. Innovation implies something more radical (Beran et al. 2019) and elevates the result into a product intended to be consumed. Under this notion, radical innovation (Simon et al. 2003) leads to a complete reversal of common views. It defines the beginning of a new field with commercial extensions, including high-risk efforts which, apart from implementation problems, may also face consumers' disbelief. A classic example of such innovation is the electric light bulb. Electricity and all related phenomena had been deeply examined by several scientists in the past, through a large number of scientific publications and patents. However, it was Thomas Edison who first presented the electric light bulb as a product, while at the same time making it commercial. The light bulb was a product of radical innovation, which came onto the market even before it was optimized, marking the beginning of the electric light era. Nothing would be the same again, not even man's fear of the dark.
The light bulb enabled the installation of domestic and city lighting and made night transportation much easier and safer, while also enabling the establishment of night shifts on production lines. In this case, innovation reduced darkness-related uncertainties through light. In general, innovation and risk management seemingly do not go hand in hand, although this work argues that they should. A radical innovation identifies not only an opportunity but also an uncertainty or a risk, and provides a product to eliminate it. After the electric bulb gained consumers' trust, a new era for researchers began. Scientists and inventors set to work to further improve and enhance the product. For many years, products of incremental innovation (Tontini et al. 2014) appeared in the market. At first there were light bulbs with an extended lifetime, then bulbs with different luminosities. All those products mainly represented improvements on the initial product. Part of this incremental innovation is also the fact that, as time passed, modern production lines were developed to reduce production and delivery costs. Whatever the case, though, the importance of incremental innovation products does not overshadow the product proposed as a radical innovation. The light bulb with different luminosity is still called a light bulb, in honor of the initial product. What has changed over the years is increased consumer trust in the product, while uncertainties are minimized, since the risk of the bulb failing approaches a minimum. Incremental innovation, in this case, improved trust in the product, eliminating doubts that would otherwise lead to uncertainties regarding the well-established electric light bulb.

After many years, the light bulb became an indispensable element of human life, until a new architecture emerged that took the electric bulb to a new level. LED lights are an innovation in terms of architecture, while the result remains the same: a device that artificially produces light, in different luminosities and colors and with low consumption. However, a completely new, improved architecture takes over, becoming the new standard of light bulb manufacturing. The improvement is significant, as the new architecture dramatically reduces costs and increases the lifetime of the device. Yet, as in the case of incremental innovation, this type of evolution is not enough to overshadow the initial innovation, the electric light bulb. However, consumers' trust in new-generation electric bulbs can now be taken for granted. Architectural innovation (He and Sarpong 2020) played a notable role in the wide adoption of the product.

Suddenly, the light bulb becomes intelligent. It can be controlled from a smartphone, change its color according to the music playing in the room, and automatically adapt its luminance to the external lighting conditions to create the ideal atmosphere inside the room. It can also detect suspicious movements or actions and inform the police or our doctor. The new light bulb lives with us and tries to improve our quality of life. Of course, it is not a product of radical innovation, but of disruptive innovation (Christensen et al. 2015). Its goal is not to mimic the traditional light bulb, but to open up a new market. In the past, the light bulb was responsible for minimizing darkness-related uncertainties.
Disruptive innovation targets the uncertainties that arose from the typical light bulb's commercial establishment. A low-end disruption is a technology that enters the market with lower performance than the incumbent but exceeds the requirements of specific segments of that market. Thus, the technology disrupts that market, although it is not a radical technology (White and Bruton 2010). Open-source software and libraries are classic examples of potentially disruptive technologies. Probably the most well-known disruptive technology is Artificial Intelligence (AI).

Artificial intelligence is defined as the scientific field that studies systems capable of assisting or even replacing a human in a specific task. Such systems comprise tools that perform physical activities as well as decision-making components. Under this scope, artificial intelligence can be either a machine or software whose functionality implies a certain level of intelligence. Artificial intelligence is an important technology that supports daily social life and economic activities (Lu et al. 2018; LeCun et al. 2015). Today, AI has become part of our everyday life, mainly because our own nature demanded it. Our need to improve our working conditions, raise our productivity, and ensure a better quality of life urged us to develop technologies that belong to the AI field. Artificial intelligence assists our every step. Smart devices help us reach our destination; help our children sleep; propose movies, songs, and shows; read articles; and answer questions and interviews (Fig. 1). Modern medicine minimizes medical mistakes and improves the precision of surgeries. Artificial prosthetic limbs provide opportunities to improve an individual's life, while low-cost diagnostic tests can prevent heart attacks and strokes.

Research in AI has focused chiefly on neural networks and cognitive mechanisms. From a technological standpoint, those mechanisms have been known for many decades. However, modern machine learning methods have evolved and improved dramatically. Global human knowledge is transferred to computers to improve their intelligence. For example, it is now enough to have a collection of photographs in which humans have manually identified cats. Those photographs are "fed" to the machine and, through a neural network-based mechanism, the machine identifies the correlations and features and becomes capable of recognizing cats in new, unknown photographs. Users provide the machine with a set of photographs depicting humans, and the machine learns to identify the presence of humans in a new photo stream. Neural network technology is not new; however, the growing number of photographs uploaded by social network users every day, the enormous expansion of the internet, and of course the phenomenal increase in computing power allowed it to evolve. Today, neural networks are the core components of deep learning technology (Lu et al. 2018; LeCun et al. 2015). Deep learning tries to simulate the human way of thinking and is inspired by the functioning of the human brain. It allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning has brought about breakthroughs in several scientific fields and technologies, allowing us to develop applications that would have been regarded as science fiction in the past, and it is making significant advances in many different scientific fields.
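The "cat photograph" description above corresponds to supervised learning with a neural network. The following minimal sketch assumes the PyTorch library and uses random tensors as placeholders for the manually labelled photographs; it outlines the training loop for illustration and is not the authors' method.

```python
# Minimal supervised-learning sketch: a small network is "fed" labelled
# examples and adjusts its weights to recognise the target class.
import torch
import torch.nn as nn

class TinyCatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),                 # 3x64x64 image -> flat vector
            nn.Linear(3 * 64 * 64, 128),  # one hidden "processing layer"
            nn.ReLU(),
            nn.Linear(128, 2),            # two classes: cat / not-cat
        )

    def forward(self, x):
        return self.layers(x)

model = TinyCatClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for manually labelled photographs.
images = torch.randn(8, 3, 64, 64)       # 8 fake "photographs"
labels = torch.randint(0, 2, (8,))       # 1 = cat, 0 = not-cat

for epoch in range(5):                    # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# After training on real labelled photos, new images can be scored:
with torch.no_grad():
    prediction = model(torch.randn(1, 3, 64, 64)).argmax(dim=1)
```

Networks used in practice stack many convolutional layers instead of the single hidden layer shown here; those multiple layers are precisely what allow representations at multiple levels of abstraction.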
Using deep learning, scientists managed to capture the first image of a black hole. Deep learning has had a significant impact on the improvement of autonomous cars, making them more reliable. There are a significant number of deep learning applications, many of which are already part of our everyday life. Each time we take a selfie, control the devices in our home or office with our voice, or receive a recommendation from our favorite TV platform, a deep learning network is working underneath: a network trained on our choices, which continues to be trained on our decisions. AI technology can reduce risk and uncertainty in many industrial and development contexts. It can predict unwanted and unexpected situations and identify the parameters that are likely to affect the risk of a mission or task. For example, by observing an investment, AI can evaluate how external factors may determine its success or failure. AI tracks stocks, exchange rates, and a country's economy. It can predict and warn of forthcoming threats, monitor our health through smartwatches and wearable devices, and help prevent strokes and heart attacks. Soon, deep learning will take control of our cars, minimizing the risk of accidents. It already takes care of heavy duties in warehouses, eliminating the risk of severe injuries. And this is only the beginning. Although a great deal of research has been conducted over the years on Artificial Intelligence and deep learning, there are still many open issues (Beris 2019) that need to be addressed:

1. Neural network opacity: explaining to professionals why a network arrived at a specific output or conclusion.
2. Ensuring data quality: deep learning needs large volumes of high-quality data to become adept at performing tasks.
3. Data security: huge amounts of sensitive real-world business data are needed to serve the needs of stakeholders.

Blockchain technology is considered the second main pillar of Industry 4.0, but how can it truly be extended into Industry 5.0? A further question that arises is how decentralized technologies could be utilized in Industry 5.0. Web 1.0 was introduced back in the 1990s (Berners-Lee et al. 1994) with the commercialization of the Internet (Weis 2010) as a tool for storing, transferring, linking, and sharing content/data between users around the world, connected over the Internet layer (Berners-Lee et al. 1994). The Web has contributed significantly to the e-disruption of many different sectors, such as trade, education, and business. Many consider the Web a large-scale distributed system consisting of billions of websites and services, created by the uncoordinated actions of millions of users and linked together (Lawrence and Giles 1998). Easy, fast, immediate access to large volumes of content/data, complemented by the development of applications and services, describes some of the collective advantages offered by the Web (Aghaei et al. 2012). The rapid development of Web applications led to a constantly increasing demand for large amounts of data (Witten and Frank 2002; Laender et al. 2002). Data exist in various forms and structures, from raw, unstructured data to structured databases accessible through Web portals, applications, and services (Witten and Frank 2002; Laender et al. 2002). Data have always been the key source enabling computation that attempts to understand knowledge or to infer new knowledge from existing knowledge.
However, Web 1.0 mostly contributed static Web content and isolated data sources. A great challenge remains in the integration of many dynamic and heterogeneous sources of data. Continuous effort in that direction has led to the creation of many specialized tools and methods to support the integration and organization of data, e.g., to provide quality search results (Bosak 1997; Christodoulou et al. 2014; Ansari et al. 2000; Witten and Frank 2002) and to navigate to linked sources of information relevant to the user. The quest for dynamic Web content led to Web 2.0, whose main aim was the development of high-quality, dynamic user content and services with the use of Application Programming Interfaces (APIs). Daily practices in many areas of our lives, such as education, health, the financial sector, or even a person's personal life, have been greatly affected by the rapid development of the Internet, as well as by the services and applications it offers (Aghaei et al. 2012). The continuous participation of users in the Web through the applications and services provided, such as social networks or online stores, which are among the main sources of content generation on the Web, contributed to the flourishing of Web 2.0 (Murugesan 2007). In Web 2.0, the evolution of the Internet and the active participation of users are supported by both hardware and software. The era of Web 2.0, characterized by social networking (Thackeray et al. 2008) and real-time access to information, has two aspects that developed simultaneously and affected each other's evolution. The first aspect concerns the existence of interoperable applications and services offered to users, aiming to facilitate the posting of material, evaluations, judgments, and comments and, in general, to create a climate in which users can build "communities" (such as blogs or social networks) (Fu et al. 2008). The second aspect is the evolution of the required technologies: to make these applications work, technologies are developed to support their development and operational requirements (such as user interface languages, RESTful architectures, meta-data languages, and Web application development frameworks).

In addition came the spread of the Semantic Web idea (Antoniou and Van Harmelen 2004), as proposed by Tim Berners-Lee back in 1999: "I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web - the content, links, and transactions between people and computers. A 'Semantic Web,' which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines." In reality, the vision of the Semantic Web, considered by many to be the Web 3.0 vision, was to make all data available as machine-readable data, building a Web of Data: a Web in which computational agents, as well as humans, can process and interpret data. A first attempt towards that vision was the enrichment of Web content with meta-data (such as tags and keywords). The Semantic Web's main vision continued to evolve, followed by a more pragmatic approach with the proposal of Linked Open Data and technologies that build on the Resource Description Framework (RDF), and with the enrichment of data through meta-data languages (i.e., ontologies), in an attempt to make human-readable meaning available to machines as well.
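To make the Linked Data idea concrete, the small sketch below (assuming the rdflib Python library; the URIs and names are invented for illustration) enriches a piece of Web content with RDF meta-data that a computational agent can query by meaning rather than by text matching.

```python
# Build a tiny RDF graph: content described with machine-interpretable triples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")          # hypothetical vocabulary
g = Graph()

article = URIRef("http://example.org/article/42")
author = URIRef("http://example.org/person/ada")

g.add((article, RDF.type, EX.Article))
g.add((article, EX.title, Literal("Industry 5.0 and the Web of Data")))
g.add((article, EX.author, author))
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Ada Example")))

# An agent can now ask "who authored this article?" via SPARQL, i.e. it
# queries the meaning attached to the data, not the surface text.
query = (
    "SELECT ?name WHERE { ?a <http://example.org/author> ?p . "
    "?p <http://xmlns.com/foaf/0.1/name> ?name }"
)
for row in g.query(query):
    print(row.name)        # -> Ada Example
```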
This enrichment will eventually enable computational agents to analyze existing open data from the Web and to provide more personalized answers and services to end users (Antoniou and Van Harmelen 2004). This enhancement proposal for the Web has led to a more integrated Web of Data, where data sources are linked both at the instance and at the semantic level (Christodoulou et al. 2015). Following from this, we are moving into the era of the Intelligent Web (Aghaei et al. 2012; Preece and Decker 2002), where AI is leveraged to build more proactive and more functional Web applications. In addition, the explosion in the number of interconnected devices with the IoT is also driving this evolution. The blend of "objects" (in the form of physical sensors) with humans, along with data and the virtual interconnections created on the Web (Gubbi et al. 2013), enables smart algorithms to provide personalized services across various areas of our daily activities.

The main question that arises nowadays is whether users are the owners of the data they generate (Kaisler et al. 2013; Voshmgir 2019a, b). The answer is definitely no. Data, in the form of leads, are used by user-profiling algorithms, recommendation systems, and AI algorithms in developing personalized services. It is evident that there is a lack of trust in the Web layer (Chung and Paynter 2002). After so many years of mass internet adoption, data architectures are still based on centralized client-server storage and management (Berners-Lee et al. 1994; Hall and Hall 1994). Every time we go online and interact with various webpages and applications, copies of our personal data are sent to the internet service provider, and every time this happens we lose a little more control (Voshmgir 2019a, b; Chung and Paynter 2002). Our data are stored everywhere: on computers, hard drives, USB sticks, cloud servers, etc. The issue that arises is the following: can we trust the organizations, companies, and people that store and manage our data to protect them against any exploitation, whether inside or outside the organization or by accident? This question not only raises concerns about users' privacy but also produces anxieties about organizations' backend client-server infrastructure and data control (Voshmgir 2019a, b; Chung and Paynter 2002). Client-server infrastructures have many limitations (Hall and Hall 1994). Firstly, only administrators of the central system can view the actions taken; secondly, records can be deleted or altered, and if logs are edited the changes cannot be detected, as even the logs themselves can be changed. In addition to these limitations, trust in a client-server architecture always relies on the organization's reputation, and there are various problems with data backups and synchronization across multiple data centers.

Taking the above perspective into account, Blockchain (Nakamoto 2008; Barcelo 2014) is the mainspring that can lead to the next-generation Internet, what we refer to as the decentralized web or Web3 (Mondéjar et al. 2013; Benet 2014; Somin et al. 2018). Blockchain (Nakamoto 2008; Barcelo 2014) changes the way data are collected, stored, and managed, as it provides a more secure layer that people can trust. Users can send data securely, utilizing peer-to-peer (p2p) transactions (Gupta et al. 2003; Yang and Garcia-Molina 2002) without any third-party involvement.
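A minimal sketch of the tamper-evidence property referred to above is shown below; it is illustrative only and does not reproduce the protocol of any particular blockchain. Blocks are linked by SHA-256 hashes, so altering a past record invalidates every later link and the change becomes visible to any peer holding a copy of the chain.

```python
# Hash-linked chain of records: changing an old block breaks the chain.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Deterministically hash the block's contents with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data: str, previous_hash: str) -> dict:
    return {"timestamp": time.time(), "data": data, "prev": previous_hash}

def is_chain_valid(chain: list, hashes: list) -> bool:
    # Each block must point to the recorded hash of the block before it.
    return all(
        chain[i]["prev"] == hashes[i - 1] and hashes[i] == block_hash(chain[i])
        for i in range(1, len(chain))
    )

genesis = new_block("genesis", "0" * 64)
chain, hashes = [genesis], [block_hash(genesis)]

for record in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    block = new_block(record, hashes[-1])
    chain.append(block)
    hashes.append(block_hash(block))

print(is_chain_valid(chain, hashes))      # True
chain[1]["data"] = "Alice pays Bob 500"   # tamper with an old record
print(is_chain_valid(chain, hashes))      # False: the alteration is detectable
```

In a real blockchain network, every peer holds such a chain, and a consensus mechanism (discussed below) decides which new blocks are appended, which is what removes the need for a trusted third party.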
Peer-to-peer (p2p) computing or networking (Schollmeier 2001) is a distributed application architecture (Mullender 1993; Tanenbaum and Van Steen 2007) that partitions tasks or workloads between peers. Peers are computing devices connected to each other via the Internet. In a p2p distributed system, peers are equally privileged and participate in the application in the same way. In a decentralized web infrastructure, data (Mondéjar et al. 2013; Benet 2014; Somin et al. 2018) are saved in multiple copies on each of the nodes of the p2p network (Golle et al. 2001; Schollmeier 2001). P2p networks are not new. Some popular examples of p2p applications that do not (or did not) operate on a Blockchain framework are BitTorrent (Qiu and Srikant 2004), Kazaa (Good and Krekelberg 2003), and Tor (Macrina 2015). So what makes Blockchain special? Blockchain increases the trust and security of p2p users, as identical copies of the data are stored on every device connected to the network; thus, users cannot alter the data represented within the network, and at the same time Blockchain tackles some of the main limitations of p2p networks, such as missing or infected nodes and malware (Schollmeier 2001). In a Blockchain network (Nakamoto 2008; Barcelo 2014), the rules and the behavior of nodes are formalized within the protocol and secured by a consensus mechanism among all participants.

Blockchain (Golosova and Romanovs 2018) can be seen as the fundamental building block of Web3 (Mondéjar et al. 2013; Ali et al. 2017), as it reshapes the data structures that exist at the backend of the Web. Blockchain introduces a governance layer that runs on top of the existing Web. It permits people who have never met or spoken to trust each other to create, sign, and settle agreements over the Internet. Blockchain can be described as a World Wide Ledger, as it can be used for recording and verifying transactions so that other people cannot alter them later, and anyone can see them. According to Merriam-Webster, a Blockchain is "a digital database containing information (such as records of financial transactions) that can be simultaneously used and shared within a large decentralized, publicly accessible network." The path towards a decentralized web started with a single whitepaper back in 2008 that presented Bitcoin (Nakamoto 2008). Recently, there has been an explosion in the implementation of various blockchain-based ecosystems in an attempt to evolve Blockchain technology (Golosova and Romanovs 2018) and expand its capabilities in terms of interoperability and the execution of machine code. Recent research in the community is concerned with improving or adapting consensus algorithms from distributed systems to blockchain networks (Bano et al. 2017; Raval 2016; Antonopoulos and Wood 2018) and with expanding the use of the technology in other application domains (Zinonos et al. 2019; Christodoulou et al. 2018; Christodoulou and Christodoulou 2020a, b; Christodoulou et al. 2020), as well as with the creation of Blockchain-based tokens that run on a Blockchain network (e.g., ERC-20 (Somin et al. 2018)), the development of Non-Fungible Tokens (NFTs) (Entriken et al. 2018), and the rise of decentralized finance (Chen and Bellavitis 2020). Although a great deal of research has been conducted over the years on distributed ledger technologies, and especially Blockchain, there are still many open issues that need to be addressed:
1. Scalability: nodes that validate transactions are required to download the entire Blockchain onto their machines, which could become a problem in the long term.
2. Energy consumption: the majority of Blockchain frameworks require high energy consumption for validating and verifying transactions.
3. Transaction costs: the cost of deploying smart contracts, as well as the cost of executing transactions on Blockchain systems, can be very high in some cases, especially when usage of the network is high.
4. Transaction speed: most public Blockchain networks can only process a limited number of transactions per second.

The Internet of Things could be the third main pillar of Industry 5.0. During the last few years, IoT has attracted tremendous interest as new smart devices and sensors have been developed to support the growth of Industry 4.0. Nowadays, as we live in a continuously transforming world, IoT needs to be adjusted to the new concept of Industry 5.0. IoT is one of the few disruptive technologies that will greatly affect various fields of the world's economies. The Internet of Things (IoT) was first introduced at the Massachusetts Institute of Technology (MIT) Auto-ID Labs in 1999 by Kevin Ashton, with the idea of creating a global standard system for RFID and other sensors. Since then, many definitions have been proposed to explain what IoT is and how it operates. The common features of most definitions are the terms "things," "connectivity," and "data." In simple words, IoT is a network of connected low-power devices that collect data. With this in mind, we can study IoT through these three components.

The first component is the devices, or so-called things. The things are everyday physical entities such as smart thermostats, smart lamps, smart locks, smart watches, temperature and humidity sensors, and many others that have become connected to the digital world. These devices can collect data and interact with the environment. For example, they can measure the temperature of a room and trigger an alarm in case the temperature rises to extreme levels due to a fire. In addition, they consist of hardware and software and have a specific purpose, generally meant to perform one single task. The second component is connectivity, or communication. As the Internet of Things grows very rapidly, many heterogeneous devices connect to the Internet. To enable communication between heterogeneous IoT devices, proper protocols are required. Obviously, the selection of the proper communication protocol depends on the type of application and its requirements, for instance low data rate, long range, low energy consumption, and low cost. To fulfill these requirements, a selection among technologies that belong to LPWAN (Low Power Wide Area Network) must be made. LPWAN is one of the basic classes of wireless data transport protocols for the implementation of IoT. The LPWAN technologies expected to concentrate more than 90% of the market by 2023 are LoRa, SigFox, Narrow-Band IoT, and LTE-M. The third component is data and, consequently, the applications of IoT. Without an application, the data collected are useless. The IoT is becoming useful in numerous applications and services. The ability of an IoT system to collect huge volumes of data from different types of networked devices fosters the IoT applications market and has created a variety of applications for different domains.
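The three components (things, connectivity, data) can be pictured with the toy sketch below; the device name and threshold are invented, and the LPWAN uplink is only simulated, so this is an illustration of the structure rather than a real deployment.

```python
# Toy IoT pipeline: a "thing" produces readings, a simulated uplink delivers
# them, and the application layer turns data into an action (an alarm).
import random
import time
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    temperature_c: float
    timestamp: float

def read_temperature(device_id: str) -> SensorReading:
    # Stand-in for a real LPWAN-connected sensor (e.g., a LoRa or NB-IoT uplink).
    return SensorReading(device_id, random.uniform(18.0, 80.0), time.time())

FIRE_THRESHOLD_C = 60.0   # invented threshold for illustration

def handle_reading(reading: SensorReading) -> None:
    # The "data" component: raw measurements become an application decision.
    if reading.temperature_c > FIRE_THRESHOLD_C:
        print(f"ALARM: {reading.device_id} reports "
              f"{reading.temperature_c:.1f} C, possible fire")
    else:
        print(f"{reading.device_id}: {reading.temperature_c:.1f} C (normal)")

for _ in range(3):
    handle_reading(read_temperature("room-101-thermostat"))
```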
Examples of application domains are smart agriculture, utilities, auto manufacturing, asset tracking, smart cities, smart logistics, and smart buildings. But what is next? According to a 2016 CISCO report, around 500 billion devices are expected to be connected to the Internet by 2030. It is also clear that IoT is already an integral part of our lives, and its impact on citizens, businesses, and governments will be significant. Now is the time to move one step forward. Until now we have used the word "smart" to characterize the components of the IoT; for example, we speak of "smart cities" or "smart devices." It is time to move from smart IoT devices to intelligent IoT devices. But what does this mean? It means that IoT devices are not only connected and programmed to perform pre-defined actions but also have the intelligence to decide in real time on behalf of users, based on their preferences. To achieve this, the integration of IoT devices with enabling technologies is required. The two enabling technologies supporting the transition from smart to intelligent are 5G (Palattella et al. 2016) and Artificial Intelligence. Although much IoT research has been conducted in this area, there are still many open issues that need to be addressed. The major challenges are the following:

1. Data security and privacy: due to the large volume of data collected, there is huge concern about the privacy of personal data (Lin et al. 2017; Christodoulou et al. 2020; Dorri et al. 2017). This is strongly related to the many hacker attacks on IoT systems. Therefore, more technical solutions are needed to support data security. It is also a challenge for companies offering IoT services to comply with the GDPR regulation (Badii et al. 2020).
2. Energy consumption: communication protocols must be able to minimize signaling and overhead (Ababneh and Al-Karaki 2020). This will optimize the power consumption of the devices and prolong their operational lifetime.
3. Massive connectivity of billions of devices: the infrastructure must support the connectivity of billions of devices.
4. Interoperability: owing to the extremely large number of devices, standardization is required for seamless communication between different IoT devices, which may run different hardware or software.
5. Over-the-air firmware updates: this is an important capability, given the huge number of devices. With over-the-air updates, all devices can be updated without physical access, which minimizes maintenance and configuration costs and at the same time increases security, since major updates are usually performed to raise the security level of the devices (Anastasiou et al. 2020).

This work has explained at length the concepts of Risk Management 5.0 and Industry 4.0 and has presented how applying Risk Management 5.0 standards to Industry 4.0 could lead the world to Society 5.0 and Industry 5.0. In Society 5.0, the generation of knowledge will come from machines through Artificial Intelligence at the service of people, so the move towards Industry 5.0 implies a more human-centric design and operationalization approach (Carayannis and Campbell 2019). Figure 2 briefly presents the interconnections between the concepts described in this work as well as the three main pillars of Industry 5.0.
Although a great deal of research has been conducted over the years on Artificial Intelligence, Blockchain, and the Internet of Things, there are still open issues and challenges that need to be addressed before we fully move into Industry 5.0. Table 1 briefly presents the current open issues of the three main pillars of Industry 5.0. The "smarter" IoT, around which a planetary-wide, and even space-based, web of technological, institutional, and social modalities extends and evolves, will serve as the building block of the next-generation entrepreneurship and innovation ecosystems. These will prove "smarter" in terms of both effective efficiency (Carayannis and Campbell 2019; Campbell and Carayannis 2016; Carayannis et al. 2017, 2018a; Koldbye and Carayannis 2020; Carayannis and Campbell 2011) and the capacity for empathy and a more humane and "greener" disposition. In addition, the democratic nature of modern knowledge economies and societies will be better served and further safeguarded by Blockchain and AI-enabled modalities and self-managing robotic artifacts, as well as by decentralized tools and applications that could help identify and filter out fake news (Christodoulou and Christodoulou 2020a, b), populist tendencies, and other threats against society and the environment. In the same context, AI-enabled agnostic viral diagnostic tools could help further safeguard the world from the spread of highly contagious, lethal viruses that will only continue to appear and invade our world, given economic growth trends and environmental challenges. We should be developing and deploying Blockchain and AI-enabled devices at the micro and nano levels to serve as front-line sentries in this disruptive new world, and we should be doing so by designing and embedding in these systems attributes of the proper position, posture, and performance (Carayannis et al. 2018a, b), as well as the qualities of being patient, persistent, persevering, proactive, predictive, preventive, and pre-emptive (Carayannis and Campbell 2019).

References

On the lifetime analytics of IoT networks
Evolution of the world wide web: from WEB 1.0 TO WEB 4.0
From Markowitz to modern risk management
Blockstack: A new decentralized internet. Whitepaper
IoT device firmware update over LoRa: the Blockchain solution
A semantic web primer
Mastering ethereum: building smart contracts and dapps
Data analytics for operational risk management
Internet of Things strategic research roadmap
Machine learning and AI for risk management
Models of thinking: assessing the components of the design thinking process
Smart city IoT platform respecting GDPR privacy and security aspects
Consensus in the age of blockchains
User privacy in the public bitcoin blockchain
Ipfs-content addressed, versioned, p2p file system
What is innovation in the area of medicines? The example of insulin and diabetes
Deep learning/AI challenges in 2019 and how to work around them
The world-wide web
XML, Java, and the future of the Web
Streamlining semantics from requirements to implementation through agile mind mapping methods
Industry 4.0 and environmental accounting: a new revolution?
The academic firm: a new design and redesign proposition for entrepreneurship in innovation-driven knowledge economy
Open innovation diplomacy and a 21st century fractal research, education and innovation (FREIE) ecosystem: building on the quadruple and quintuple helix innovation concepts and the "mode 3" knowledge production system
Conclusion: Smart Quintuple Helix Innovation Systems
The Quintuple Helix innovation model: global warming as a challenge and driver for innovation
Trans-disciplinarity and growth: nature and characteristics of trans-disciplinary training programs on the human-environment interphase
Composite innovation metrics: MCDA and the Quadruple Innovation Helix framework
The ecosystem as helix: an exploratory theory-building study of regional co-opetitive entrepreneurial ecosystems as Quadruple/Quintuple Helix Innovation Models
Blockchain disruption and decentralized finance: the rise of decentralized business models
What is disruptive innovation?
Developing more reliable news sources by utilizing the Blockchain technology to combat Fake News
A decentralized voting mechanism: engaging ERC-20 token holders in decision-making
Applying hard and fuzzy k-modes clustering for dynamic web recommendations
Combining syntactic and semantic evidence for improving matching over linked data sources
A decentralized application for logistics: using blockchain in real-world applications
Health information exchange with blockchain amid COVID-19-like pandemics
Privacy issues on the Internet
Digital transformation strategies for achieving operational excellence: a cross-country evaluation
Risk Management 5.0
Risk analysis and risk management: an historical perspective
The limits of globalization in the early modern world. The Economic History Review
Industry 5.0 and human-robot co-working
Risk management: history, definition, and critique
Blockchain for IoT security and privacy: the case study of a smart home
ERC-721 non-fungible token standard. Ethereum Foundation
Creating people-aware IoT applications by combining design thinking and user-centered design methods
Empirical analysis of online social networks in the age of Web 2.0
Society 5.0: aiming for a new human-centered society
Individual risk management strategy and potential therapeutic options for the COVID-19 pandemic
Incentives for sharing in peer-to-peer networks
The advantages and disadvantages of the blockchain technology
Usability and privacy: a study of Kazaa P2P file-sharing
Green supply chain management practices: impact on performance
Internet of Things (IoT): a vision, architectural elements, and future directions
A reputation system for peer-to-peer networks
Technical foundations of client/server systems
Digital transformation in business research: a systematic literature review and analysis. Proceedings of DRUID18
The role of knowledge creation modes in architectural innovation
Holistic approach for human resource management in Industry 4.0
Net value: wealth creation (and destruction) during the internet boom
Recommendations for implementing the strategic initiative
Design principles for industrie 4.0 scenarios
Industry 4.0 and the current status as well as future prospects on logistics
Will the real smart city please stand up? Intelligent, progressive or entrepreneurial? City
Driving forces and barriers of Industry 4.0: do multinational and small and medium-sized companies have equal opportunities?
An improved cyber-physical systems architecture for Industry 4.0 smart factories
Industrie 4.0: Mit dem Internet der Dinge auf dem Weg zur 4. industriellen Revolution. VDI nachrichten
Big data: issues and challenges moving forward
Security attacks and secure routing protocols in RPL-based Internet of Things: survey
What are the 5 risk management steps in a sound risk management process
Democracy and the environment are endangered species
What is Risk Management? What are the 5 Risk Management Steps in a Sound Risk Management Process? Management Study HQ
A brief survey of web data extraction tools
Enterprise risk management: from incentives to controls
Searching the world wide web
Deep learning
Service innovation and smart analytics for industry 4.0 and big data environment
Past, present and future of Industry 4.0: a systematic literature review and research agenda proposal
A survey on internet of things: architecture, enabling technologies, security and privacy, and applications
Brain intelligence: go beyond artificial intelligence
The Tor browser and intellectual freedom in the digital age
Industry 4.0 and Society 5.0 by Christian Manrique
Evolving knowledge integration and absorptive capacity perspectives upon university-industry interaction within a university
Toward a new risk architecture: welcome to risk management 2.0
CloudSNAP: a transparent infrastructure for decentralized web deployment using distributed interception
Conceptualizing digital transformation in business organizations: a systematic review of literature
Determining digital transformation success factors
Distributed systems
What drives the implementation of Industry 4.0? The role of opportunities and challenges in the context of sustainability
Understanding Web 2.0. IT Professional
The digital transformation of innovation and entrepreneurship: progress, challenges and key themes
The Internet-of-Things: review and research directions
IoT architecture for a sustainable tourism application in a smart city environment. Mobile Information Systems
Japan's society 5.0: going beyond Industry 4.0
Pascal and the invention of probability theory
Birth of industry 5.0: making sense of big data with artificial intelligence
Internet of things in the 5G era: enablers, architecture, and business models
Transition from the triple helix to N-tuple helices? An interview with Elias G. Carayannis and David FJ Campbell
Industry 4.0 in management studies: a systematic literature review
Intelligent web services
Modeling and performance analysis of BitTorrent-like peer-to-peer networks
Decentralized applications: harnessing Bitcoin's blockchain technology
IoT powered servitization of manufacturing: an exploratory case study
A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications
Managers at work: how do you best organize for radical innovation? Research-Technology Management
On the way from Industry 4.0 to Industry 5.0: from digital manufacturing to digital society
Network analysis of ERC20 tokens trading on ethereum blockchain
Thinking and representing in design
A review of Internet of Things for smart home: challenges and solutions
Rethinking risk management
Distributed systems: principles and paradigms
The future of project-based learning for engineering and management students: towards an advanced design thinking approach
Enhancing promotional strategies within social marketing programs: use of Web 2.0 social media
Which incremental innovations should we offer? Comparing importance-performance analysis with improvement-gaps analysis
The role of digital technologies in open innovation processes: an exploratory multiple case study analysis
Understanding digital transformation: a review and a research agenda
Evergetics problems. Problemy Upravleniya
Heterogeneous Actor and Everyday Life as Key Concepts of Evergetics. Group Decision and Negotiation
Token economy: how blockchains and smart contracts revolutionize the economy
Tokenized Networks: Web3, the Stateful Web
Commercialization of the Internet
The management of technology and innovation: a strategic approach
Data mining: practical machine learning tools and techniques with Java implementations
Industry 4.0: state of the art and future trends
Improving search in peer-to-peer networks
Quantitative analysis of qualitative data
ParkChain: an IoT parking service based on blockchain

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.