## 08 October 2020

### Is digital technology hurting our intelligence?

For this post I have adopted the title, slightly modified, from a debate between Katina Michael and Alex Halavais, both from Arizona State University. I approach the question in a different way by asking two questions whose answer is already presupposed when posing the question as to whether digital technology hurts our intelligence, namely, i) What is digital technology? and ii) What is our intelligence? In this post I will have to be brief and will take the first question first.

#### i) What is digital technology?

The blueprint for today's digital technology is to be found in Alan Turing's famous 1936 paper ''On Computable Numbers, with an Application to the Entscheidungsproblem'', in which he invents the Universal Turing Machine as the appropriate, elementary digital machine able to compute any computable number. The Entscheidungsproblem part of Turing's paper concerns his proof that not all numbers are computable, which is equivalent to saying that not every statement formulable within the language of a given axiomatic mathematical system is provable from the given axioms. The Universal Turing Machine still serves today as the core of the theory of digital computation, simply because any conceivable computation on a digital device can theoretically be broken down into myriads, or even billions, of UTMs. In principle, a UTM can compute anything a supercomputer can compute.

What is a Universal Turing Machine? It is a machine that works step by step through one digital, i.e. binary, number, the algorithm, which instructs the machine how to alter, i.e. compute, another binary number, the data, into a third binary number, the output. The output decides how a given practical situation is to be controlled, such as whether access to a certain site (in the physical world or the cyberworld) is to be granted, or which direction a missile in flight should take.
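The machine just described can be sketched in a few lines of code. The following is a minimal illustrative simulator in Python (a language chosen here for brevity, not anything from Turing's paper): the transition table plays the role of the algorithm, the initial tape is the data, and the tape contents at the halt state are the output.

```python
# A minimal Turing machine simulator: the "algorithm" is a transition
# table, the "data" is the initial tape, the "output" is the final tape.

def run_tm(table, tape, state="start", head=0, max_steps=1000):
    """Step through `table` until the machine reaches the 'halt' state.

    table maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is +1 (one cell right) or -1 (one cell left).
    """
    cells = dict(enumerate(tape))        # sparse tape; blank cell = ' '
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    else:
        raise RuntimeError("no halt state reached within step bound")
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example program: invert every bit of a binary string, then halt.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", " "): (" ", +1, "halt"),
}

print(run_tm(invert, "1011"))  # prints "0100"
```

The point of the sketch is only that nothing beyond reading, writing and moving one cell at a time is going on underneath any digital computation, however elaborate.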

The algorithm — even if it is a so-called 'deep-learning' algorithm as employed in Artificial Intelligence — must first be written by a human programmer who has an understanding of some practical situation or other. The algorithm determines how the data fed in are to be computed to obtain a useful output. Everything depends upon how well or how badly the programmer conceives the practical situation, especially whether every eventuality in a 'live' practical situation has been taken into account and whether this understanding has been correctly coded into a computer program that is ultimately nothing other than a long binary number.
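The dependence on the programmer's foresight can be made concrete with a deliberately naive sketch (the function and the roles are my own invention, not from any real system): an access-control rule behaves well only for the eventualities its author anticipated.

```python
# Hypothetical access-control "algorithm": the programmer's understanding
# of a practical situation, encoded as a rule mapping data to an output.

def grant_access(badge_role):
    # The programmer anticipated exactly two kinds of badge holder ...
    rules = {"staff": True, "visitor": False}
    return rules[badge_role]

print(grant_access("staff"))     # True: an anticipated case works
# grant_access("contractor")     # KeyError: an eventuality never coded for
```

An unforeseen role does not produce a sensible decision but a failure; in a 'live' situation such gaps in the encoded understanding are precisely what cannot be patched by the device itself.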

A corollary of the Entscheidungsproblem when transferred to practical situations whose control is entrusted to algorithms is that not every practical situation computes, that is, there are practical situations with which algorithms loaded into digital devices cannot cope. In other words, not all situations well understood by a human being are codable in a digital algorithm. This is not a practical, empirical limitation due to the complicatedness of practical situations, but a limitation in principle.
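This in-principle limitation is classically illustrated by the halting problem, which can be sketched in code (the names `claimed_halts` and `paradox` are illustrative): no algorithm can decide, for every program and input, whether that program halts, because any claimed decider can be turned against itself.

```python
# Sketch of Turing's diagonal argument: assume some total decider
# claimed_halts(prog, arg) answers whether prog(arg) halts, and build
# a program that does the opposite of whatever the decider predicts.

def make_paradox(claimed_halts):
    def paradox(prog):
        if claimed_halts(prog, prog):
            while True:          # decider said "halts" -> loop forever
                pass
        return "halted"          # decider said "loops" -> halt at once
    return paradox

# Any concrete decider is refuted on one of the two branches. Take, for
# instance, a decider that answers "loops" (False) for every program:
always_no = lambda prog, arg: False
paradox = make_paradox(always_no)

# The decider claims paradox(paradox) never halts -- yet it halts:
print(paradox(paradox))          # prints "halted"
```

Whatever answer a purported decider gives about `paradox` applied to itself, the answer is wrong; hence no such decider can exist, and some well-posed questions lie beyond any algorithm in principle.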

Digital technology is to be conceived as our intelligence in the guise of our understanding of practical situations concerning the control of movement and change that has been digitally encoded into algorithms and outsourced to digital devices designed to deal automatically with practical situations when fed with data. This is very convenient for us human beings, but it comes at the price of our i) no longer being able to understand how practical situations are algorithmically dealt with, ii) being exposed to the limitations of how a programmer understands and then encodes a practical situation, and iii) being subject to the inherent limitations in principle (i.e. not merely empirically) of how situations can be digitally encoded.

#### ii) What is our intelligence?

The framing of the question already presupposes that we can speak sensibly of "our" intelligence. How is this to be reconciled with today's scientific dogma, supported by the modern subject-object ontology of an individual consciousness encapsulated within itself vis-à-vis an external world taken in by the senses? If, as today's neuroscience proclaims, human thinking is the correlate of neuronal activity in an individual's brain, how is it possible that we humans can share thoughts at all? Does neuroscience, along with the media and the rest of us, tacitly assume that the neuronal-thought activities in individual brains are in a kind of prestabilized harmony brought about by the evolution of the human species? Such an assumption would be a variation of Leibniz's metaphysical principle of a prestabilized harmony between the individual monads (which have no window on the world) and the external world itself.

Although neuroscience would vigorously deny this dilemma, even if it were aware of it, it is the same metaphysical dilemma that pertains to all subject-object ontology, unquestioningly taken for granted by all modern science, whether natural or social.

If we can genuinely speak of "our intelligence", then we always already share it. Moreover, this sharing is one of the preconditions for our being able to speak sensibly of a 'we' at all. We all share a certain elementary understanding of the world, even though we may disagree intensely with one another over almost all issues. It is not hard to see, for instance, that we share an understanding of elementary categories such as 'something' or 'other'. For you or me to see anything at all, we must have already understood the universal category of 'something' itself as distinct from an individual something. Similarly, you and I can see and easily understand the category of 'other', for without it we would be unable to distinguish anything from anything else, i.e. to differentiate one something from another something. We take all this for granted without a second thought, but it is worth thinking on. Such categorial understanding is prior to any experience of the world; it is a priori.

Beyond these most elementary categories, we share in a given age such as our own further basal concepts for understanding the world at all, such as subject and object. In our own time, it is taken as self-evident that the subject is endowed with an interior consciousness vis-à-vis an external world full of objects. Without taking a reflective step back, we take this understanding of the world as populated by conscious subjects over against thingly objects as obvious, unquestionable, incontestable.

In the particular context of digital technology, we tend to pose the problems associated with the digital cyberworld in terms of how we subjects, who supposedly 'underlie' (from sub-ject, 'thrown under') all movements and changes in the world, can maintain control over such movements once we outsource our understanding of changeable situations to algorithms. Cast as such subjects of consciousness, we have an inherent hubris to assert and maintain control and therefore are fatefully inclined to regard the myriads of outsourced algorithms as servants. At most we try to set up ethical roadblocks against certain algorithms deemed to be dangerous. By virtue of their outsourced independence, however, the algorithms take on a life of their own and entangle us in the intricacy of their own uncontrollable and complicated interwoven interplay with each other.

We human beings, however, have not always historically been cast as, and understood ourselves as, conscious subjects. In fact, in an earlier time, the age of the ancient Greeks with the inception of philosophy, the world was experienced and cast in an inverted way. What we today call objects were for the Greeks the subjects, i.e. the hypokeimena (literally: the underlying) that were addressed as such-and-such by human beings employing the λόγος (logos). Logos can mean 'language', but it also means 'reason, understanding' and, as a distinguishing aspect of the human psyche (ψυχή), it is called νοῦς (nous).

The psyche, in turn, is the openness of human being for the world as a whole in its three-dimensional temporality of past, present and future which the nous within the psyche not only understands in some way, but also with which it resonates in moods of all kinds. It is only because we share this mooded resonance with three-dimensional time that we humans can share music.

By (falsely) promising us unlimited control of movement and change in the world, digital technology is befuddling our intelligence. The dream of total control through clever algorithms is the consummation of the Pythagorean belief that the world is ultimately number, a belief taken up by Plato and then later, in a more rabid form, by Galileo, Newton and Descartes, who proceeded with the mathematization of the world with a vengeance. This historical cast of the world continues today in all branches of modern science.

It is not contemplated that there are kinds of movement in the world, those which I call interplay, that confound any attempt to control and master them, simply because they are our interplay of freedom with one another. Interplay demands mutual estimation and esteem. In this sense, with its overblown pretensions, digital technology is an insult to our intelligence.

There is much more to be said on this.

Further reading: *Movement and Time in the Cyberworld*.