Computer sciences and synthesis: retrospective and perspective

Vladislav Dorofeev, Petro Trokhimchuk

2022-01-26

The problem of synthesis in the computer sciences, including cybernetics, artificial intelligence and system analysis, is analyzed. The main methods of realizing this problem are discussed, and ways of searching for a universal method of creating a universal synthetic science are represented. As an example of such a universal method, polymetric analysis is given. The perspective of further development of this research, including the application of the polymetric method to the resolution of the main problems of the computer sciences, is analyzed too.

Polymetric analysis is based on the idea of triple minimum (concrete scientific, methodical and mathematical). The main principles of PA are the criteria of reciprocity and simplicity. The first criterion is the principle of assembling the elements of the corresponding construct into a single system; the second is the principle of the optimality (simplicity of complexity) of this assembling. One of the main components of this method, the hybrid theory of systems (the theory of systems with variable hierarchy), shows that only ten minimal types of systems of formalization of knowledge exist [13, 14]. Therefore the methods of PA, as a universal theory of optimal formalized synthesis, may be used for the resolution of the main cybernetic problems. The structure of PA may be represented as a deeper formalization of neural networks [1]. Therefore PA, as a universal system of formalization of knowledge, may be used for the resolution of the basic problems of natural and artificial intelligence too [3, 13, 14]. The bonds between polymetric analysis and the computer sciences are shown, the use of the polymetric method for the resolution of the problems of cybernetics and artificial intelligence is discussed, and the perspective of using synthetic methods for the development of the computer sciences is analyzed.

Cybernetics (from the Greek κυβερνητική "governance," κυβερνώ "to steer, navigate or govern," κυβέρνη "an administrative unit; an object of governance containing people") is the science of the general regularities of control and information transmission processes in different systems, whether machines, animals or society [6, 16, 22]. Cybernetics studies the concepts of control and communication in living organisms, machines and organizations, including self-organization. It focuses on how a (digital, mechanical or biological) system processes information, responds to it, and changes or is changed for better functioning (including control and communication). Cybernetics is an interdisciplinary science [4-7, 16, 22, 28-35]. It originated "at the junction" of mathematics, logic, semiotics, physiology, biology and sociology. Among its inherent features, we mention the analysis and revelation of general principles and approaches in scientific cognition. Control theory, communication theory, operations research and others represent the most weighty theories within cybernetics [22].

In ancient Greece, the term "cybernetics" denoted the art of a municipal governor (e.g., in Plato's Laws) [22]. A. Ampère (1834) related cybernetics to the political sciences: he defined cybernetics ("the science of civil government") as a science of current policy and practical governance in a state or society [6, 22]. B. Trentowski (1843) viewed cybernetics as "the art of how to govern a nation" [14, 22].
In his "Tektology" (1925), A. Bogdanov examined common organizational principles for all types of systems. In fact, he anticipated many results of N. Wiener and L. von Bertalanffy [6], as both were unfamiliar with Bogdanov's works [6]. The modern (and classical!) interpretation of the term "cybernetics" as "the scientific study of control and communication in the animal and the machine" was pioneered by Norbert Wiener in 1948; see the monograph [16]. Two years later, Wiener also added society as the third object of cybernetics [22]. Among other classics, we mention W. Ross Ashby [22] (1956) and Stafford Beer [22] (1959), who placed their emphasis on the biological and "economic" aspects of cybernetics, respectively. The latter case covers the partial "intersection" of these results (see Fig. 1; figuratively speaking, the central rod of the "umbrella"), i.e., the usage of results common to all component sciences. Furthermore, we will adhere to this approach over and over again for discrimination between the corresponding umbrella brand and the common results of all component sciences in the context of different categories such as interdisciplinarity, systems analysis, organization theory, etc. [22].

Cybernetics today includes the following disciplines (in the descending order of their "grades" of membership, see Fig. 1, with the year of birth if available) [22]:
- control theory (1868, the papers published by J. Maxwell and I. Vyshnegradsky);
- mathematical theory of communication and information (1948, C. Shannon's works);
- general systems theory, systems engineering and systems analysis;
- optimization (including linear and nonlinear programming, dynamic programming, optimal control, fuzzy optimization, discrete optimization, genetic algorithms, and so on);
- operations research (graph theory, game theory and statistical decisions, etc.);
- artificial intelligence;
- data analysis and decision-making;
- robotics and others (purely mathematical and applied sciences and scientific directions, in an arbitrary order), including systems engineering, recognition, artificial neural networks and neural computers, ergatic systems, fuzzy systems (rough sets, grey systems, etc.), mathematical logic, identification theory, algorithm theory, scheduling theory and queuing theory, mathematical linguistics, programming theory, synergetics and all similar sciences [22].

According to [4], cybernetics is a synthesis of many sciences (mathematics, physics, biology, psychology and others).

Fig. 2. A diagram that roughly illustrates the areas of intersection of the main disciplines that feed cybernetics [4].

Really this synthesis is wider. For specific systems this synthesis is quite general and therefore, as a rule, it is detailed. Therefore in our times new particular synthetic sciences based on cybernetics are created. These particular synthetic sciences were called biological cybernetics, economical cybernetics, physical cybernetics, etc. [4, 33]. There is one further argument which we will consider: a computer, or any other machine, can be made adaptive, so that it changes and functions in changing circumstances [4, 5]. The problem of not being able to do anything really new also relates in some measure to Lady Lovelace's objection, which was dependent upon the idea that the computer, or any other "machine" manufactured by human beings, could do no more than the programmer programmed it to do [4]. The schema of Fig. 1 shows the composition and structure of cybernetics in the historical course of its development.
We see that artificial intelligence is the son and daughter of cybernetics, but its development in recent years allows us to select and represent the synthesis in this engineering science as a separate paragraph.

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?" [4]. Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence [4]. At its core, artificial intelligence (AI) is the branch of computer science that aims to answer Turing's question in the affirmative: it is the endeavor to replicate or simulate human intelligence in machines. The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no singular definition of the field is universally accepted. The major limitation of defining AI as simply "building machines that are intelligent" is that it does not actually explain what artificial intelligence is. What makes a machine intelligent? AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry [4, 23].

According to N. Moiseev, "Anyway, the term 'artificial intelligence' has become established in the scientific literature, and this has to be reckoned with. However, it is important to clearly stipulate the pragmatic, applied meaning of this term. So we agree to associate its use only with modern technologies of processing and use of information" [17].

In their groundbreaking textbook "Artificial Intelligence: A Modern Approach", Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions" [23]. Norvig and Russell go on to explore four different approaches that have historically defined the field of AI [23]:
1. Thinking humanly.
2. Thinking rationally.
3. Acting humanly.
4. Acting rationally.
The first two ideas concern thought processes and reasoning, while the other two deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" [23]. Patrick Winston, the Ford Professor of Artificial Intelligence and Computer Science at MIT, defined AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together" [23]. While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

Four types of artificial intelligence are usually distinguished.
1. Reactive machines. A reactive machine follows the most basic of AI principles and, as its name implies, is capable of using its intelligence only to perceive and react to the world in front of it. A reactive machine cannot store a memory and as a result cannot rely on past experiences to inform decision-making in real time. Perceiving the world directly means that reactive machines are designed to complete only a limited number of specialized duties.
Intentionally narrowing a reactive machine's worldview is not any sort of cost-cutting measure, however; instead it means that this type of AI will be more trustworthy and reliable: it will react the same way to the same stimuli every time. A famous example of a reactive machine is Deep Blue, which was designed by IBM in the 1990s as a chess-playing supercomputer and defeated international grandmaster Garry Kasparov in a game. Deep Blue was only capable of identifying the pieces on a chess board, knowing how each moves based on the rules of chess, acknowledging each piece's present position, and determining what the most logical move would be at that moment. The computer was not pursuing future potential moves by its opponent or trying to put its own pieces in a better position. Every turn was viewed as its own reality, separate from any other movement that was made beforehand. Another example of a game-playing reactive machine is Google's AlphaGo. AlphaGo is also incapable of evaluating future moves but relies on its own neural network to evaluate developments of the present game, giving it an edge over Deep Blue in a more complex game. AlphaGo also bested world-class competitors, defeating champion Go player Lee Sedol in 2016. Though limited in scope and not easily altered, reactive machine artificial intelligence can attain a level of complexity, and offers reliability when created to fulfill repeatable tasks.

2. Limited memory machines. Limited memory artificial intelligence has the ability to store previous data and predictions when gathering information and weighing potential decisions, essentially looking into the past for clues on what may come next. Limited memory artificial intelligence is more complex and presents greater possibilities than reactive machines. Limited memory AI is created when a team continuously trains a model in how to analyze and utilize new data, or when an AI environment is built so that models can be automatically trained and renewed. When utilizing limited memory AI in machine learning, six steps must be followed: training data must be created, the machine learning model must be created, the model must be able to make predictions, the model must be able to receive human or environmental feedback, that feedback must be stored as data, and these steps must be reiterated as a cycle (a minimal sketch of this cycle is given after the list below). There are three major machine learning models that utilize limited memory artificial intelligence:
- Reinforcement learning, which learns to make better predictions through repeated trial and error.
- Long Short-Term Memory (LSTM), which utilizes past data to help predict the next item in a sequence. LSTMs view more recent information as most important when making predictions and discount data from further in the past, though still utilizing it to form conclusions.
- Evolutionary Generative Adversarial Networks (E-GAN), which evolve over time, growing to explore slightly modified paths based on previous experiences with every new decision. This model is constantly in pursuit of a better path and utilizes simulations and statistics, or chance, to predict outcomes throughout its evolutionary mutation cycle.
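To make the six-step limited-memory cycle above concrete, here is a minimal Python sketch that runs the whole loop end to end. It is only an illustration: the "model" is a trivial moving average over a bounded window (which is precisely what makes the memory "limited"), and all names and constants in it are our own choices rather than anything from the cited literature.

```python
# A minimal, illustrative sketch of the six-step limited-memory cycle.
# The "model" is deliberately trivial (a moving average over a sliding
# window); WINDOW, train() etc. are our own illustrative names.

from collections import deque

WINDOW = 5  # how much "past" the limited-memory model is allowed to keep

def train(history):
    """Step 2: build the model from the stored training data."""
    return (sum(history) / len(history)) if history else 0.0

def predict(model):
    """Step 3: the model makes a prediction (here: the learned mean)."""
    return model

def environment_feedback(t):
    """Step 4: stand-in for human/environmental feedback (a toy signal)."""
    return 10.0 + (t % 3)  # deterministic toy "measurement"

history = deque(maxlen=WINDOW)  # Step 1: bounded training-data storage

for t in range(12):             # Step 6: the steps are reiterated as a cycle
    model = train(history)
    guess = predict(model)
    observed = environment_feedback(t)
    history.append(observed)    # Step 5: feedback is stored as new data
    print(f"t={t:2d}  predicted={guess:5.2f}  observed={observed:4.1f}")
```

Replacing the moving average with a trainable network (an LSTM, for instance) and the deterministic feedback with real measurements turns the same skeleton into the limited-memory scheme described above.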
3. Theory of Mind. Theory of Mind is just that: theoretical. We have not yet achieved the technological and scientific capabilities necessary to reach this next level of artificial intelligence. The concept is based on the psychological premise of understanding that other living things have thoughts and emotions that affect one's own behavior. In terms of AI machines, this would mean that AI could comprehend how humans, animals and other machines feel and make decisions through self-reflection and determination, and then utilize that information to make decisions of its own. Essentially, machines would have to be able to grasp and process the concept of "mind," the fluctuations of emotions in decision-making and a litany of other psychological concepts in real time, creating a two-way relationship between people and artificial intelligence.

4. Self-awareness. Once Theory of Mind can be established in artificial intelligence, sometime well into the future, the final step will be for AI to become self-aware. This kind of artificial intelligence possesses human-level consciousness and understands its own existence in the world, as well as the presence and emotional state of others. It would be able to understand what others may need based not just on what they communicate to it but on how they communicate it. Self-awareness in artificial intelligence relies both on human researchers understanding the premise of consciousness and then learning how to replicate that, so it can be built into machines.

According to N. Nilsson [19-21], artificial intelligence, broadly (and somewhat circularly) defined, is concerned with intelligent behavior in artifacts. Intelligent behavior, in turn, involves perception, reasoning, learning, communicating, and acting in complex environments. AI has as one of its long-term goals the development of machines that can do these things as well as humans can, or possibly even better. Another goal of artificial intelligence is to understand intelligent behavior, whether it occurs in machines or in humans or other animals. Thus, artificial intelligence has both engineering and scientific goals. N. Nilsson focused his main attention on the important concepts and ideas underlying the design of intelligent machines.

A similar synthetic picture of modern brain-inspired artificial intelligence was given by N. Kasabov; its main parts are the following (a toy spiking-neuron sketch is given after this list).
1. Foundations: evolving processes and their representation as data, information and knowledge; brain information processing; evolutionary computation; quantum-inspired computation; molecular information processing; information theory; computational architecture.
2. ECOS and SNN methods: artificial neural networks (ANN) and evolving connectionist systems (ECOS); spiking neural networks (SNN) methods; ANN and ECOS computational methods; evolving SNN (eSNN); brain-inspired SNN (BI-SNN) and the design of brain-inspired artificial intelligence; SNN, eSNN and BI-SNN parameter optimization with evolutionary computation (EC).
3. Applications: deep learning and deep knowledge from brain data; audio and visual information processing; bioinformatics data modeling; SNN for neuroinformatics and personalized modeling; predictive modeling in ecology; predictive modeling in transport; predictive modeling in environment.
4. Future directions: brain-computer interfaces with BI-SNN; affective computation; neuromorphic systems; new spike-time information theory for data compression; integrated quantum-neurogenetic-brain-inspired models; towards integrated human intelligence and artificial intelligence.
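Since spiking neural networks are mentioned above without further explanation, the following minimal sketch of a leaky integrate-and-fire (LIF) neuron, the elementary computational unit of the SNN, eSNN and BI-SNN families, may be helpful; the constants are arbitrary illustrative values and do not come from the cited works.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the elementary unit of
# spiking neural networks. The membrane potential leaks toward rest,
# input current integrates, and a spike is emitted when a threshold is
# crossed. All constants are illustrative, not taken from the literature.

LEAK = 0.9        # per-step decay of the membrane potential
THRESHOLD = 1.0   # firing threshold
V_RESET = 0.0     # potential after a spike

def lif_run(input_currents):
    """Return the output spike train (0/1 per step) for a current sequence."""
    v = V_RESET
    spikes = []
    for i in input_currents:
        v = LEAK * v + i          # leak + integrate
        if v >= THRESHOLD:        # fire ...
            spikes.append(1)
            v = V_RESET           # ... and reset
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # A constant weak input: the neuron integrates it and fires periodically.
    print(lif_run([0.3] * 20))
```

Information in such networks is carried by the timing of the 0/1 spike train rather than by continuous activations, which is what distinguishes SNN from classical ANN.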
Now we represent the model of hybrid super intelligence proposed in [25, 26], which is based on N. Moiseev's idea of complex modeling of the noosphere [17] in order to predict the consequences of any external influences on it, including anthropogenic ones [26]. The proposed strong hybrid intelligence architecture is shown in Fig. 3 [25, 26]. The explanation of the schema of Fig. 3 is the following [25].
1. The real world is information about the problem area, collected using the sensors available to the system. The real world includes both real-world objects and automated transactional information processing systems; for example, in the case of a pandemic, contact tracing systems, medical information systems with patient data, etc.
2. People: a team of specialists involved in solving a problem. The team may include subject matter experts, developers, information system operators, etc.
3. Artificial intelligence: an adaptable and developed system for the automated collection and processing of real-world information, with an interface for communication with a group of experts, including in natural language.
4. Modeling system: a set of systems for modeling and forecasting the real world, with capabilities of scenario analysis of the consequences of impact on the real world.
In [27], this architecture was proposed to solve the COVID-19 problem. A simple experiment was carried out to extend the epidemiological SIRD model with the parameter "level of immunity of the population" (a minimal sketch of such an extended SIRD model is given below). A similar architecture was used on a larger scale in Canada [27]. As the real world, it used a contact monitoring system and medical information systems with patient data. Artificial intelligence tools were used to refine the parameters of the modeling system based on real-world data. The modeling system made it possible to carry out scenario analysis of the development of the situation. An international team of specialists ensured the development and use of the system.
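For reference, the following sketch integrates the classical SIRD equations with explicit Euler steps. The `immunity` factor that scales the transmission term is only our hypothetical stand-in for the "level of immunity of the population" parameter, since the exact form used in [27] is not reproduced here.

```python
# Classical SIRD model (Susceptible, Infected, Recovered, Deceased),
# integrated with explicit Euler steps. The `immunity` factor scaling the
# transmission term is a hypothetical stand-in for the population-immunity
# parameter discussed above, not the cited experiment's exact formulation.

def sird(s, i, r, d, beta=0.3, gamma=0.1, mu=0.01, immunity=0.0, dt=1.0):
    """One Euler step; the total population s + i + r + d is conserved."""
    n = s + i + r + d
    new_inf = (1.0 - immunity) * beta * s * i / n  # dampened transmission
    ds = -new_inf
    di = new_inf - gamma * i - mu * i
    dr = gamma * i
    dd = mu * i
    return s + ds * dt, i + di * dt, r + dr * dt, d + dd * dt

state = (990.0, 10.0, 0.0, 0.0)   # initial population of 1000
for day in range(120):
    if day % 30 == 0:
        print(f"day {day:3d}: S={state[0]:7.1f} I={state[1]:7.1f} "
              f"R={state[2]:7.1f} D={state[3]:6.1f}")
    state = sird(*state, immunity=0.2)
```

In the architecture described above, the role of the AI component would be to refine parameters such as beta, gamma, mu and the immunity level from real-world data, while the modeling system runs scenarios like this one.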
In a sense, the proposed architecture can be considered an extension of the concept of "digital twins", developed in the world since 2002, to include the human factor. At present, the main successes in the field of solving complex problems are associated precisely with such systems [25, 26]. The architecture of the proposed system itself assumes its transparency and controllability, since the consequences of the supposed impacts on the real world are checked on the modeling system. But taking into account the complexity of the real world and the limited possibilities of methods for its modeling, there are no complete guarantees of the safety of such a system. The following principles can be used to improve security.
1. Legasov's principle: a technostructure uncontrolled by society threatens the global security of mankind. The first principle concerns the control by society of the technostructure which operates the system of strong artificial intelligence. It appeared on the basis of V. Legasov's notes on the circumstances of the Chernobyl accident [25]. In relation to the proposed system of strong hybrid intelligence, it means transparency and availability of the information systems used.
2. Efremov's principle [25]: anthropogenic changes in the environment whose rate exceeds the physical, biological and social mechanisms of adaptation to them carry the risks of destroying life. The second principle is related to the ability of living systems to adapt to the changes in the external environment which the strong intelligence system plans to carry out. If the speed of adaptation is not sufficient, then it is better to refrain from making changes. It was formulated on the basis of the episode with the visit of earthlings to the planet Zirda in the book "The Andromeda Nebula" by I. Efremov [25]: only poppies remained on the planet because its inhabitants underestimated the danger of the use of low-power ionizing radiation, which contributed to a high rate of uncontrolled mutations.
3. Moiseev's principle [25]: the complexity of the models of the world should be comparable to the complexity of the problems they are designed to solve. The third principle is designed to prevent the use of primitive models for solving complex problems. It was formulated on the basis of the works of N. Moiseev [7, 17]. Despite some advances, current modeling capabilities are limited in their ability to accurately reproduce the real world; researchers often forget about this and go far from the real world in their theoretical constructions.

As we see, a tendency toward differentiation characterizes AI too; the concepts represented above show this picture. The main synthetic concepts of artificial intelligence (Nilsson, Russell and Norvig, Kasabov, and Moiseev) have an inductive (or inductive with elements of deduction) nature and have no general value for all computer sciences. They may be used for concrete problems of the computer sciences, depending on the level of development of modern electronics and information technologies. Therefore we must search for a method which may be represented as a universal system of formalization of knowledge, including the computer sciences [1-17, 36-38]. This method should be deductive in nature and include not only the rules of logical formalization, like the Leibniz-Russell-Kleene-Nilsson approach [13, 14], but also be based on the nature of mathematics: analysis, synthesis and formalization of any field of knowledge. This method can be polymetric analysis.

Polymetric analysis (PA) was created as an optimal alternative to the logical, formal and constructive conceptions of modern mathematics and theory of information [13, 14]. This concept is based on the idea of triple minimum: mathematical, methodological and concrete scientific. One of the main tasks of polymetric analysis is the problem of simplicity-complexity that arises when creating or resolving a particular problem or science; PA must be an open system [13, 14]. In the methodological sense, PA is the synthesis, in one system, of the Archimedes thesis "Give me a fulcrum and I will move the world" and S. Beer's idea that complexity is the problem of the century in cybernetics. And since cybernetics is a synthetic science, this problem should be transferred to all of modern science. The basic elements of this theory and their bonds with other sciences are represented in Fig. 4 [13, 14].

Fig. 4. Schema of the polymetric method and its place in modern science [13, 14].

Polymetric analysis may be represented as a universal theory of synthesis in the Cartesian sense. For the resolution of this problem we must select basic notions and concepts corresponding to the three basic directions of Fig. 4. The universal simple value is the unit symbol, but this symbol must be connected with calculation; therefore it must be a number. For the composition of these symbols (numbers) into one system we must use system control and operations (mathematical operations or transformations). After this procedure we receive the proper measure, which corresponds to a system of knowledge and science. Therefore the basic axiomatics of polymetric analysis, corresponding to the schema of Fig. 4, was selected in the form given in [3]; in its formulas δ_ij denotes the Kronecker symbol, and the indexes i, j, k, p are called the steps of the corresponding transformations.
Only 15 minimal types of generalizing mathematical transformations exist [13, 14]. The basic elements of PA are the generalizing mathematical elements or their various representations, the informative knots. A generalizing mathematical element is a composition of functional numbers (generalizing quadratic forms, including complex numbers and functions) and generalizing mathematical transformations which act on these functional numbers as a whole or on their elements. Roughly speaking, these elements are elements of functional matrices. The polyfunctional matrix constructed on the elements (8) is called the informative lattice, and in this case the generalizing mathematical element is called a knot of the informative lattice [3].

The informative lattice is the basic set of the theory of informative calculations. This theory was constructed analogously to analytical mechanics. Its basic element is the informative computability C: the number of mathematical operations which are required for the resolution of the proper problem (a toy operation-counting illustration is given below, after the description of HTS). The basic principle of this theory is the principle of optimal informative calculations: any algebraic (including constructive) informative problem has an optimal resolution with minimal informative computability C, technical informative computability C_t or generalizing technical informative computability C_to. The principle of optimal informative calculations is analogous to the action and entropy (second law of thermodynamics) principles in physics. It is more general than the negentropic principle of the theory of information [28] and the Shannon theorem [29]: the principle of optimal informative calculations is a law of open systems, or systems with variable hierarchy, whereas the negentropic principle and the Shannon theorem are principles of systems with constant hierarchy. The idea of this principle may also be explained on the basis of the de Broglie formula [39].

For the classification of the computations on informative lattices, the hybrid theory of systems (HTS) was created [13, 14]. This theory allows analyzing a proper system from the point of view of its complexity. The basic principles of HTS are: 1) the criterion of reciprocity, the principle of the creation of the corresponding mathematical constructive system (informative lattice); and 2) the criterion of simplicity, the principle of the optimization of this creation. The basic axiomatics of HTS classifies systems by whether the principle of optimal informative calculations is conserved and whether the connectedness parameter equals unity; items 1-3 and 10 of this ten-type classification are given in [13, 14], and the remaining types are the following.
4. The system with nonconservation of the principle of optimal informative calculations and with connectedness parameter equal to 1 is called the semisimple system.
5. The system with nonconservation of the principle of optimal informative calculations only for the functional numbers N_ij and with connectedness parameter equal to 1 is called the parametric semisimple system.
6. The system with nonconservation of the principle of optimal informative calculations only for the general mathematical transformations and with connectedness parameter equal to 1 is called the functional semisimple system.
7. The system with nonconservation of the principle of optimal informative calculations and with connectedness parameter not equal to 1 is called the complicated system.
8. The system with nonconservation of the principle of optimal informative calculations only for the functional numbers N_ij is called the parametric complicated system.
9. The system with nonconservation of the principle of optimal informative calculations only for the general mathematical transformations and with connectedness parameter not equal to 1 is called the functional complicated system.
Only the first six types of hybrid systems may be considered mathematical; the last four types are not mathematical. Therefore HTS may describe all possible systems of knowledge. The problem of verbal and nonverbal systems of knowledge is controlled with the help of the types of mathematical transformations and the connectedness parameter [13, 14]. This theory has a finite number of types of knowledge formalization systems, and in general it is a theory of open systems with a changeable hierarchy. Therefore HTS, with its operational nature, may be used for all knowledge and culture, including cybernetics and artificial intelligence.
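As a down-to-earth illustration of the informative computability C introduced above, the following sketch multiplies two matrices while counting the arithmetic operations performed; equating C with such an operation count is our simplified reading of the definition, not a formula from [13, 14].

```python
# Illustration of informative computability C as an operation count:
# naive n x n matrix multiplication performs n**3 multiplications and
# n**2 * (n - 1) additions. Treating C as this count is our simplified
# reading of the definition above, not a formula from the cited works.

def matmul_with_count(a, b):
    n = len(a)
    ops = {"mul": 0, "add": 0}
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(n):
                acc += a[i][k] * b[k][j]
                ops["mul"] += 1
                if k > 0:
                    ops["add"] += 1
            c[i][j] = acc
    return c, ops

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
c, ops = matmul_with_count(a, b)
print(c)    # [[19.0, 22.0], [43.0, 50.0]]
print(ops)  # {'mul': 8, 'add': 4}
```

In these terms, the principle of optimal informative calculations would prefer, among all correct schemes for the same product, the one with the smallest such count (for instance, Strassen-type schemes reduce the asymptotic number of multiplications below n**3).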
We can analyze PA and the computer sciences from the point of view of the conditions which are formulated for general theories (theories of everything) [1, 3]:
1. It must be an open theory, or a theory with variable hierarchy.
2. The theory must have a minimal number of principles.
3. It must be based on the nature of mathematics (analysis, synthesis and formalization of all possible knowledge).
4. We must create a sign structure which unites verbal and nonverbal knowledge (mathematical and other) in one system.
5. We must have a system which is an expert system for existing systems of knowledge and may be used for the creation of new systems of knowledge.
6. The principle of continuity must hold for all science.
These conditions must be used for the creation of any dynamic science which can be presented as an open system. They were formulated on the basis of polymetric analysis, but other theories of everything may be created according to these six conditions. The main conditions for computer systems, in turn, must be completeness, unambiguity, simplicity and the possibility to create the corresponding system and to measure and estimate the proper set of information in this system []. In a real computer system we must also include the corresponding technologies, which impose limitations on computing capabilities, especially performance. Further, we must transform all possible information into the form convenient for its processing by a computer processor, and then present it adequately to the user. Modern processors work with various matrices; therefore the informative lattice of generalized constructive elements may be represented as a mathematical generalization of the computer processor.

One of the central problems of the modern computing sciences is the problem of information complexity [1, 2]. This problem was formulated in cybernetics by S. Beer (S. Beer's centurial problem in cybernetics). Its formulation is the following [2]: "Apparently, the complexity becomes the problem of the century, just as the ability to process natural materials has been a problem of life and death for our forefathers. Our tool must be computers, and their efficiency should be provided by science, able to handle large and complex systems of probabilistic nature. This science may be cybernetics, the science of management processes and communication. The basic thesis of cybernetics can be set forth as follows: there are natural laws to which the behavior of the large multibond systems of any character, biological, technical, special and economic, submits."

On the whole, the problem of complexity has two sides. The first is purely computational; it is represented by two of Smale's problems [14, 40, 41], which are not only problems of modern computing science but problems of modern mathematics. The complexity of networks is represented as a problem of complexity in modern physics [1].
But neural networks were introduced into cybernetics by M. Minsky (as perceptrons) [1] and by A. Ivakhnenko [1]; therefore this problem is cybernetic too. In this case we have the third, algorithmic Kolmogorov concept in the theory of information [1]. But according to F. H. George [5], cybernetics is the synthesis of many sciences: physics, mathematics, biology, psychology, linguistics and others. Therefore we must supplement the three Kolmogorov concepts in information theory with a fourth, system concept. This concept is the second side of the problem of complexity in modern science; it has a more cybernetic and computing-science nature than a physical one [1]. The problem of complexity is basic for the other computing sciences, including artificial intelligence, as "sons" and "daughters" of cybernetics [1]. Therefore the system concept of complexity is more necessary for cybernetics and computing science than for physics, because these sciences have a more synthetic nature than physics [1]. Roughly speaking, the ways of searching for the resolution of this problem may be formulated with the help of the phrase of the Ukrainian philosopher G. Skovoroda: "Thank You God, He made all the essentials simple and understandable, and all the necessary things difficult and incomprehensible" [14].

Therefore the methods of PA, as a universal theory of optimal formalized synthesis, may be used for the resolution of the main cybernetic problems. The main problem of our paper is thus the ascertainment of the question of the possible application of PA to the resolution of the problems of cybernetics, both general (S. Beer's centurial problem, the problem of complexity) and particular (matrix calculations, array sorting, pattern recognition). The structure of PA may be represented as a deeper formalization of neural networks too [1, 3]. According to F. George, "The brain is a universal computer" [4, 6]. Therefore PA, as a universal system of formalization of knowledge, may be used for the resolution of the basic problems of natural and artificial intelligence too [1, 3].

Polymetric analysis fully satisfies these conditions; the represented concepts and systems of cybernetics and artificial intelligence satisfy them only partially. Polymetric analysis is a more general system than cybernetics and artificial intelligence of all the represented types. It is an operational system which includes the procedure of measurement with the help of the generalizing mathematical transformations. The generalized constructive element (8) is a term of the polyfunctional matrix. But computer processors use matrix calculations [1]. Therefore PA may be represented as a functional expansion of the computer processor which includes the procedures of collection and processing of information using the generalized mathematical transformations and the criteria of reciprocity and simplicity.

A more concrete analysis of the hybrid super intelligence of the N. Moiseev type was made in [26]. The hybrid super intelligence of the N. Moiseev type has a narrower scope; therefore it is a more anthropic system than PA [26]. Roughly speaking, the Legasov and Efremov principles are anthropic principles, and only the Moiseev principle is connected with the procedure of system formalization. The Legasov principle is an ecological principle and has a longer history: these are the problems of chemical, nuclear and atmospheric pollution of the environment. These problems were raised by physicists themselves, such as A. Einstein, N. Bohr and A. Sakharov, and by philosophers such as B. Russell (the Russell-Einstein manifesto, the Pugwash Conferences on Science and World Affairs, and Greenpeace) [42]. The political movement of the greens arose in the same way.
Based on this, the international independent non-governmental environmental organization Greenpeace was established in 1971 in Canada [15].

The division of complexity into subject and mathematical complexity is quite conventional. In polymetric analysis it has a systematic form with an emphasis on calculation. The fact that the complexity of information processing needs to be handled through calculations was pointed out by J. Casti [32]. The Moiseev principle, the principle of correspondence between the complexity of the formalized (modeled) area and the formalization (modeling) methods, may be represented as the equivalence of the three provisions of the principle of reciprocity, especially of the first two provisions with the third. Really, in computer science the problem of complexity must be reduced to the problem of the complexity of calculation in a more general sense; for PA this must be the more optimal principle from the calculation point of view.

Other chapters of the computer sciences, including cybernetics, artificial intelligence and computer arithmetic, may be expanded, represented and explained with the help of polymetric analysis. So, the theory of informative calculations and the principle of optimal informative calculations allow resolving many computer problems of modern cybernetics in the area of the obtained algorithms, matrix algebra and the problem of forming arrays. From a fundamental point of view, the principle of optimal informative calculations makes it possible to bring physical processes and information theory closer together; this is shown in the theory of information-physical structures. From a systemic point of view, polymetric analysis is a synthetic optimal extension of those concepts and methods that played a decisive role in the formation and development of both modern science and other areas of knowledge and culture. It can be used both to determine the complexity of computations (the theory of informative calculations) and to determine the complexity of systems, as well as in the choice of both an expert system and a new promising system for the corresponding synthesis (the hybrid theory of systems) [13, 14]. The way to the resolution of the problem of complexity in the computer sciences must be connected with the problem of calculations [13, 14].

Roughly speaking, modern computing science may be represented as a renewal of the Pythagorean system [14]. Three types of numbers (mathematical, sensitive and ideal) existed in Plato's school [14]. From the modern point of view, mathematical numbers correspond to pure mathematics, sensitive numbers to applied mathematics, and ideal numbers to other areas of knowledge [14]. According to this fact, we must include all three types of these numbers in one system, which must correspond to our six conditions for theories of everything (a universal system of knowledge) [14]. But modern science is larger, and knowledge is richer, than in the time of antiquity. The idea of triple minimum, which is basic for polymetric analysis, may be connected with Plato's numbers too. In modern computer science, Gödel numbers (a number having a system nature) [4, 37] may also be represented as a renewal of Plato's concept in modern science (a minimal sketch of Gödel numbering is given below).
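For the reader's convenience, here is a minimal sketch of Gödel's classical prime-power numbering, which shows the "system nature" of such numbers: an entire sequence of symbol codes is packed into, and recovered from, a single integer. The particular codes are an arbitrary choice.

```python
# Classical Goedel numbering: a sequence of symbol codes (c1, ..., ck) is
# packed into the single integer 2**c1 * 3**c2 * 5**c3 * ..., using one
# prime per position; unique factorization makes the encoding reversible.
# Assumes every code is >= 1 (a zero exponent would end the sequence).

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]  # enough primes for short sequences

def goedel_encode(codes):
    g = 1
    for p, c in zip(PRIMES, codes):
        g *= p ** c
    return g

def goedel_decode(g):
    codes = []
    for p in PRIMES:
        c = 0
        while g % p == 0:
            g //= p
            c += 1
        if c == 0:
            break
        codes.append(c)
    return codes

codes = [3, 1, 4, 1]                   # codes of four symbols
g = goedel_encode(codes)               # 2**3 * 3 * 5**4 * 7 = 105000
print(g, goedel_decode(g))             # 105000 [3, 1, 4, 1]
```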
Each universal system of knowledge, including the computer sciences, must include these aspects for its creation. On the whole, the creation of an effective computer science must include general system principles, which correspond to its universality, and particular principles, models and theories, which correspond to the modern level of information technologies and optoelectronics and, possibly in the nearest future, biological technologies. In the last case the idea of the rapprochement of the living and inanimate worlds, including intelligence, can be realized more fully than at the present stage of the development of science and technology. To solve this problem, we must look for new ways both of formalizing knowledge and of its hardware representation and processing.

Thus, in this work we represented the main peculiarities of the modern state of the computer sciences on the examples of cybernetics and artificial intelligence and showed the perspectives of the creation of a universal theory of everything based on modern computer sciences.
1. The problems of synthesis in the computer sciences are analyzed.
2. A short analysis of cybernetics as a synthetic science is represented.
3. Artificial intelligence and the synthetic aspects of its development are observed.
4. The basic concepts of polymetric analysis as a universal system of formalization of knowledge are analyzed.
5. Questions of using the methods of polymetric analysis for the resolution of some problems of the computer sciences are discussed.
6. The six rules for the creation of universal theories, which are formulated on the basis of polymetric analysis, are represented.
7. The perspective of the development of synthetic system methods in the computer sciences is analyzed too.

References
- Beer centurial problem in cybernetics and methods of its resolution.
- We and complexity of modern world.
- Theories of Everythings: Past, Present, Future.
- Philosophical Foundations of Cybernetics.
- Foundations of Cybernetics.
- Cybernetics and Fundamental Science.
- Development Algorithms.
- Studies of Universal Calculus / Gottfried Wilhelm Leibniz. Works in four volumes.
- Introduction to Mathematical Philosophy.
- Foundations of Mathematics. Vyshcha Shkola.
- Automaton Theories. Retrospective and Perspective.
- Polymetrical Analysis. History, Concepts, Applications.
- Mathematical Foundations of the Knowledge. Polymetrical Doctrine.
- Science and the Modern World. Pelican Mentor Books.
- Cybernetics, or Control and Communication in the Animal and the Machine. Sovetskoye Radio.
- Man and the Noosphere.
- Introduction to Metamathematics. Izd-vo inostr. lit-ry.
- Logical Foundations of Artificial Intelligence.
- Artificial Intelligence. A New Synthesis.
- The Quest for Artificial Intelligence: A History of Ideas and Achievements.
- Cybernetics 2.0. Advances in Systems Science and Application.
- Spiking Neural Networks and Brain-Inspired Artificial Intelligence.
- System of Strong Hybrid Artificial Intelligence and Moiseev, Efremov and Legasov Principles.
- Hybrid Super Intelligence and Polymetrical Analysis. 2021, 9 p., eprint, Cornell University.
- Super Intelligence to Solve COVID.
- Advances in Neural Computation.
- Science and Information Theory.
- A Mathematical Theory of Communication. The Bell System Technical Journal.
- Programming Styles and Techniques. Internet University of Information Technologies.
- Introduction to Theoretical Programming. Moscow: Nauka.
- Big Systems. Connectivity, Complexity and Catastrophe.
- Problems and Methods of Physical Cybernetics. Institute of Mathematics of the National Academy of Sciences.
- Dictionary on Cybernetics. The main edition of the M. P. Bazhan USE.
- Information and Self-Organization.
- On Formally Undecidable Propositions of Principia Mathematica and Related Systems I.
- The Art of Computer Programming.
- Thermodynamics of Isolated Point (Hidden Thermodynamics of Particles).
- Computer Science. Wikipedia, the free encyclopedia.
- About Science and Civilizations. Memories, Thoughts and Reflections of a Scientist.