Curmudgeon Corner

The web and its sorceries

Giuseppe Longo, CNRS and ENS, Paris (giuseppe.Longo@ens.fr)

AI & Society (2017) 32:135–136. DOI 10.1007/s00146-016-0678-z. Published online: 15 October 2016. © Springer-Verlag London 2016

Computer networks provide us with enriched possibilities of encounter with those who are distant and those who are different. They grant us access to the entire human knowledge base. The trade of culture, ideas, and things has been at the heart of the highest moments of our history, from Greece to the Italian Renaissance, hubs of intense exchange across the Mediterranean and beyond. We could do a whole lot more, given so much more speed.

Yet computer networks can also function as a "mean field," in the sense used in physics: when there are too many neighbors, one can no longer find singularities, each of us is grayed out, and average behavior is forced onto everyone. Our modes of life are completely changed, and the collective imagination is constantly under the pressure of supposed machines that, since the 1970s, have been promised as our replacements in every respect. Just like the android in Blade Runner (1982) who develops a relationship with Harrison Ford's character: the android (or gynoid?) is, in the film's 2019, entirely indistinguishable from a woman, and is in fact played by a beautiful actress. Since those decades, banks and post offices have invested in sparing their employees the onerous work of reading checks and sorting mail, but progress has been extremely modest. Instead, every day we hear: beware! Accept any working condition and any violation of your rights, because otherwise, very soon, you will be completely replaced by machines. This is a fabrication aimed at the collective imagination. The replacement of worker by machine has already been happening for decades, thanks to numerically controlled machines on assembly lines, where the repetitive iteration of a gesture is precisely the function of the digital machine. The fabrication also haunts a collective imagination already adapted to subordination to rules, to mechanical evaluation, to governance rather than government. Our replacement will always remain possible, especially if we format human behavior on the model of the machine: the longer we live inside video games, iterating identical actions, the more replaceable we become by machines made to iterate.

Continued innovation? Sure, we are surrounded, inundated by thousands of new gadgets, but the techno-scientific substrate has been the same for the past twenty years. One of the scientific ideas behind this technological avalanche is the discovery of giant magnetoresistance in the 1980s by Albert Fert, at the University of Paris XI, and Peter Grünberg, in Jülich, Germany, for which they received the Nobel Prize in Physics in 2007. Companies with their own research departments, especially in the USA, immediately understood the practical significance of the discovery and developed it, giving birth to the digital memory that to this day has been doubling in capacity about every two years. Thanks to ever-increasing memory, more data and more programs fit into smaller devices, and a splendid environment of software "craftsmanship" has grown up around them. But the real scientific discovery, Fert and Grünberg's, is about thirty years old, enriched by original variants of twenty-year-old programming methods, such as object-oriented programming, occasionally layered on top of stochastic methods. If you read the news from the late 1990s, you will find announcements of projects very similar to the Google Car… What happened to Google Glass from two years ago? Phantasmagorical promises, with some fallback onto gadgets that make our use of the car more or less comfortable.

Analogously, flocks of brilliant programmers and chess (IBM 1997) and Go (Google 2015) players have put decades of matches played by humans into an ever-expanding memory. Very well designed algorithms randomly generate millions of Go strategies per second via the Monte Carlo method (1950), and statistical learning algorithms retain the most effective moves in the context of each game. Thus those poor chess and Go champions have faced off against storms of human adversaries and decades of strategies, memorized by machines that iterate identically (save for the random generation of strategies), and against memorization algorithms based on comparative statistics (deep learning, multi-layered neural networks), an unquestionable advance in the art of programming. Obviously, this has nothing to do with the figurative vision of the game, the human "seeing" of the dynamics of configurations, broadly qualitative, the all-human organization of the otherwise pointless combinatorics of the game.
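To make that mechanism concrete, here is a minimal sketch in Python of pure Monte Carlo move selection. It is my own toy illustration, not the code of Deep Blue or of Google's Go program: the "game" is a trivial Nim variant (take one to three stones, whoever takes the last stone wins), chosen only so that the example is self-contained and runnable, and real engines add tree search and learned evaluations on top of this idea.

# A minimal sketch, assuming a toy game: pure Monte Carlo move selection.
# Not the actual chess or Go engines discussed above; the "game" here is a
# Nim variant (take 1-3 stones, whoever takes the last stone wins).
import random

MOVES = (1, 2, 3)  # legal moves: remove 1, 2 or 3 stones


def random_playout(stones, my_turn):
    """Play uniformly random legal moves to the end; return True if 'I' win."""
    while stones > 0:
        take = random.choice([m for m in MOVES if m <= stones])
        stones -= take
        if stones == 0:
            return my_turn  # whoever just moved took the last stone and wins
        my_turn = not my_turn
    return not my_turn  # pile already empty: the previous mover (me) won


def monte_carlo_move(stones, playouts=10_000):
    """Estimate each first move's win rate by blind random playouts and
    return the move with the best empirical statistics."""
    best_move, best_rate = None, -1.0
    for move in (m for m in MOVES if m <= stones):
        wins = sum(random_playout(stones - move, my_turn=False)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move


if __name__ == "__main__":
    # From 6 stones the winning move is to take 2 (leaving a multiple of 4);
    # here the purely statistical estimate points at it without any
    # understanding of the game. Random playouts alone are not always
    # enough, which is why real engines add tree search on top.
    print(monte_carlo_move(6))

Everything such a program "knows" is in those counters; the leap from this toy to Go is one of memory, speed, and statistical refinement, not of understanding.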
But perhaps the greatest irony of this entirely constructed, presumed, and menacing humiliation of mankind, this publicity stunt for those who believe in it, is the latest fashion of an a-scientific Data Mining of Big Data; some call it "agnostic science," science without knowledge. It is supposed to predict and guide action within any dynamic of life, without the need for hypotheses, theories, understanding, or knowledge (Anderson 2008). Big Data and the techniques of statistical analysis offer a huge opportunity, if they are used to produce hypotheses, to evaluate theories, and to propose new ones. Instead, following a trend that has spread virally, these techniques are said to optimize thought, that is, to minimize the need for it, reducing it to zero: when databases are large enough, "correlation supersedes causation, and science can advance even without coherent models or unified theories." The bigger the database, by yottabytes upon yottabytes, the less it is necessary to think: machines will discover regularities that science will not, to a degree sufficient for prediction and action. "We kill people based on metadata," declared the former director of the CIA, M. Hayden, in a recent debate.

Fortunately, mathematics allows us to prove the absurdity of such claims. C. Calude, a mathematician at the University of Auckland (NZ), and I did so in an article that is simple but built on classic, non-trivial results, "The Deluge of Spurious Correlations in Big Data" (Calude and Longo 2016). In brief, for any given "regular correspondence between numbers," there exists an integer size N such that every set of N elements contains that correspondence. Thus the authors of algorithms who deny thought, who deliberately ignore the theory of algorithms, ergodic theory, and numerical combinatorics, which we invoke in the article, run up against the intrinsic limits that those theories are able to prove: randomness inevitably seeps into large sets of numbers, making prediction impossible, unless one establishes beforehand a rule that prescribes what matters and what could be useful for prediction. In other words, any sufficiently large dataset contains arbitrary, hence spurious, correlations. So much for Big Data without science.
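The flavour of this can be checked with a crude numerical experiment; the sketch below is my own illustration, not the construction of the Calude and Longo paper, which is a theorem rather than a simulation, and the Pearson coefficient, series length, and threshold are arbitrary choices of mine. Generate a collection of series of pure, independent noise and count how many pairs nevertheless look strongly "correlated."

# A crude illustrative sketch (my own, not the Calude-Longo construction):
# purely random, independent series, with no structure whatsoever, still
# yield ever more "strong" correlations as the collection grows.
import random
from itertools import combinations


def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def spurious_pairs(num_series, length=20, threshold=0.6, seed=0):
    """Count pairs of independent random series whose empirical correlation
    nevertheless exceeds the threshold."""
    rng = random.Random(seed)
    data = [[rng.random() for _ in range(length)] for _ in range(num_series)]
    return sum(1 for a, b in combinations(data, 2)
               if abs(pearson(a, b)) >= threshold)


if __name__ == "__main__":
    # The data are noise by construction, yet the number of "significant"
    # correlations keeps growing with the size of the collection.
    for k in (50, 200, 400):
        print(k, "series:", spurious_pairs(k), "spuriously correlated pairs")

The correlations found this way predict nothing, precisely because no prior rule selected what was supposed to matter.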
The power of scientific knowledge lies also in the ability to prove the inherent limits of each theory, to make explicit the perspective from which science is carried out: whoever claims to understand or accomplish everything with a single tool or concept, be it DNA or algorithms… will surely be wrong.

Curmudgeon Corner

Curmudgeon Corner is a short opinionated column on trends in technology, arts, science and society, commenting on issues of concern to the research community and wider society. Whilst the drive for super-human intelligence promotes potential benefits to wider society, it also raises deep concerns of existential risk, thereby highlighting the need for an ongoing conversation between technology and society. At the core of Curmudgeon concern is the question: What is it to be human in the age of the AI machine? – Editor.

Acknowledgments

Extracted from Bartolini (2016), an interview with the author. Translated by Gabriele Carotti-Sha.

References

Anderson C (2008) The end of theory: the data deluge makes the scientific method obsolete. Wired, 27 June 2008. https://www.wired.com/2008/06/pb-theory/

Bartolini P (2016) Complessità, scienza e democrazia (Complexity, science and democracy), interview with Giuseppe Longo. Globalist, 5 Oct 2016. http://megachip.globalist.it/Detail_News_Display?ID=125846&typeb=0&complessita-scienza-e-democrazia

Calude CS, Longo G (2016) The deluge of spurious correlations in big data. Foundations of Science. doi:10.1007/s10699-016-9489-4