A (perhaps non-accidental) failure to distinguish linguistically between reason (dianoia, ratio) and intelligence (noûs, intellectus) has given rise to the term “artificial intelligence,” which consequently engenders misunderstanding, if not indeed fear. In keeping with the original meaning of the words in question, the term “artificial intelligence” should de jure be denominated “artificial reason,” a correction which might help to resolve a great deal of confusion and perhaps even forestall disaster.
Reason versus Intelligence
Traditionally and historically, philosophy distinguishes categorically between reason and intelligence or intellect: reason is the faculty of reasoning or calculation (a liber rationis is an account book), while intelligence is the faculty of understanding that calculation or reasoning. It is thus the intelligence that knows, whereas the mind, properly so called, constitutes the “refractive medium” through which the acquisition of knowledge is attained. The mental faculty or mind functions as a mirror (a speculum in Latin), but it is the intellect that sees. Reason, on the other hand, in its empirical and logical manifestations, is concerned with what might be termed the realm of “bare facts,” whereas intellect is perceptive of meaning, of actual being. While the first world can be constructed, it is the second that can be understood — if indeed the act of understanding comes to pass: for the intellect, in its act of intellection, is perfectly free, and no authority, no will — even our own! — has any power over it: one cannot force oneself to understand what one does not, as Simone Weil observed. “We cannot think what we cannot think,” wrote Wittgenstein.
Intellect requires intelligibility even as the eye requires light, and intelligibility is the revelator of being. This means that intelligence is the “sense of being” even as the eye is the “sense of the seen.” “Exercising this faculty,” wrote Leibniz, “is named intellection, which constitutes a perception, distinct from but joined to the faculty of thought.”
In opposition to what can thus be termed intellectual intuition — which unites the knower to the known — discursive reasoning separates subject from object and breaks the object into its aspects and their consequent relations. Reason as such is ordered, or restricted, both to the object it analyses and to the logic which governs its functioning. On the one hand, these limitations make reason a fantastic tool well suited to mankind; on the other, they subject us to the limits of sense-experience and of logic as such. The intellect is not thus limited, but is open to the supernatural and to paradoxical or seemingly contradictory realities. However — and this constitutes the paradoxical aspect of knowledge — while the intelligence grasps the reality of things, their intelligibility, this knowledge is no longer impersonal, as reason can be. As Aristotle put it: “It is not the intellect that knows, but the man” (De Anima I, 408b 14–15).
Kant’s Subversion of Meaning
If, in philosophy, there is a “before and after Immanuel Kant” (1724–1804), this is because he inverted the meaning of intelligence (Verstand) and reason (Vernunft) as understood by all preceding philosophers: from Plato, Aristotle, Plotinus, and St. Augustine to St. Thomas Aquinas, Dante, Leibniz, Malebranche, and beyond, all of whom, he claimed, labored under an illusion which he alone was able to recognize and dispel!
Indeed, in keeping with his conviction that intuition can only be sensible or empirical, he elevated reason to the highest rank among the cognitive faculties, supposedly capable of rendering synthetic, systematic, universal, and unified intelligibility. Hence intelligence or intellect came to be seen as inferior to reason: a secondary faculty concerned with processing abstractions, endowing sense experience with a conceptual form, and connecting the resultant concepts so as to constitute a coherent structure — until, finally, it turned into discursive knowledge, that is to say, became “reason.”
This is not the place to demonstrate the invalidity of this Kantian conception and recount the havoc it has wrought, especially perhaps in the Anglo-Saxon domain which, historically, is more inclined to empiricism, pragmatism, and logicism than some Continental schools, which seem to have survived the Kantian subversion somewhat more successfully.
From So-Called “Artificial Intelligence” to Actual Artificial Reason
It is by now evident that so-called “artificial intelligence” — thus designated by John McCarthy in the 1950s — is in fact misnamed, inasmuch as the designation, being far too broad, is suggestive of inapplicable notions, such as the generation of consciousness, volitional autonomy, and affective behavior.
Now, granting that AI draws upon interdisciplinary domains such as the cognitive sciences, computational neurobiology, mathematical logic, artificial psychology, and the like, it nonetheless pertains to computer science: to the world of programming and calculation, with enough speed to handle massive data and sufficient sophistication to permit recursive self-improvement, at least in the form of a self-learning function. Recognizing faces or speech, winning strategic games, automating cars, simulating military operations, organizing complex data, and so forth: all this is purely a matter of programming, calculation, and automated reasoning. However, when such systems are claimed to understand human speech or to interpret complex data — as distinguished from recognizing human speech or organizing complex data — one has evidently been misled by the word “intelligence” (the “I” in “AI”), which should de jure be replaced by an “R” for “reason.”
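The point that recognition is pure calculation can be made concrete with a deliberately toy sketch in Python (a hypothetical illustration, not any actual speech system): a “recognizer” here is nothing but a distance comparison against stored templates, and at no point does any understanding of the labels enter in.

```python
def recognize(sample, templates):
    """Return the label whose stored template is numerically closest to the sample.

    This is pure calculation: the function compares magnitudes and
    returns a label; the meaning of the label plays no role whatever.
    """
    return min(templates, key=lambda label: abs(templates[label] - sample))

# Hypothetical one-dimensional "acoustic features" for two spoken words.
templates = {"yes": 1.0, "no": 0.0}

print(recognize(0.9, templates))  # prints "yes": matched, not understood
```

However crude, the sketch exhibits the essential structure of recognition tasks: a metric, a comparison, and an output label, that is, reasoning in the sense of calculation.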
If now you reverse the question and wonder how to transform a living man into an automaton, you may think that nothing could be easier: you need but render him entirely submissive to all incoming determinations. He then turns into an automaton spirituale (Spinoza), as famously illustrated by the paradox referred to as the Donkey of Buridan: a donkey, equally thirsty and hungry, placed midway between a portion of oats and a bucket of water, which, unable to reach a decision, dies. Here we have an example of what, in current parlance, could be termed an “automatic donkey.”
This thought experiment, moreover, shows that authentic freedom is not a perfectly balanced “in-between” (Leibniz), and proves by reductio ad absurdum that, for man, being conditioned is not a privation of freedom; on the contrary, freedom is exercised despite determinations. A machine, on the other hand — a robot, say, or an automaton — will (like the Donkey of Buridan) “die” under any firm double bind, and moreover can never be “free,” inasmuch as any random (re)action mimicking freedom is due to one or more programmed algorithms. Like reason itself, the machine — however sophisticated it may be — is restricted to the specific functions laid down by its inbuilt logic: it is indeed an embodiment of Artificial Reason, of AR as distinguished from AI.
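The donkey's predicament, and the point that machine “freedom” is only a programmed tie-break, can be sketched in a few lines of Python (a hypothetical illustration under assumed numeric utilities, not anyone's actual decision system):

```python
import random

def choose(utilities):
    """A strict utility-maximizer: returns None when the maximum is not unique."""
    best = max(utilities.values())
    winners = [option for option, u in utilities.items() if u == best]
    if len(winners) > 1:
        return None  # perfect tie: the pure calculator, like the donkey, is stuck
    return winners[0]

def choose_with_tiebreak(utilities):
    """The same maximizer plus a programmed coin toss: randomness, not freedom."""
    best = max(utilities.values())
    winners = [option for option, u in utilities.items() if u == best]
    return random.choice(winners)

needs = {"oats": 1.0, "water": 1.0}  # equally attractive: the firm double bind
print(choose(needs))                 # prints None: deadlock under a perfect tie
print(choose_with_tiebreak(needs))   # "oats" or "water", decided by an algorithm
```

The second function appears to “decide,” but the appearance is produced entirely by `random.choice`, itself a deterministic algorithm seeded from elsewhere: the machine escapes the double bind only because a programmer anticipated it.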
The Danger of So-Called Artificial Intelligence
Throughout history, mankind has progressively added to its powers of mechanical energy (fire, draught animals, steam, oil and gas, atomic energy); now, since August 7, 1944, with the entry into service of IBM’s Automatic Sequence Controlled Calculator (the Mark I), mankind has additional mental energy at its disposal.
It is true that technology can be detrimental to mankind if misused, and misuse can be due, variously, to the user (gunshots, oil pollution, the atomic bomb, ecological destruction), to a technology that is difficult to control (atomic energy), or to a combination of both (a gun in the hands of a child). What applies to mechanical energy applies in the same way to mental energy (mass surveillance and control of populations, mass unemployment) — no less, but no more. What is currently noteworthy is that mental energy is potentially reaching the level of the most destructive mechanical energy (the atomic bomb); hence the physicist Stephen Hawking, along with Bill Gates and Elon Musk, has warned that “artificial intelligence could end mankind.”
In summary, the risk posed by the machine is the risk inherent in reason itself and, crucially, in the limits of its governing logic (as illustrated by Asimov’s Three Laws and the many paradoxes of logic), which means that the main risk, if not the only one, is in fine man himself and his limited reason.
Bruno Bérard (b. 1958) received his doctorate in ‘Religions and Thought-Systems’ from the École Pratique des Hautes Études, Paris. His area of emphasis is the study of metaphysics in relation to the teachings of the religions, from the perspective of the philosophy of knowledge.
His published works include: Introduction à une métaphysique des mystères chrétiens (2005, with Imprimatur from the Catholic Church); La Révolution métaphysique (2006, a synthesis of Jean Borella’s work); Initiation à la métaphysique (2009); Métaphysique du paradoxe (forthcoming); and, in English, A Metaphysics of the Christian Mystery: Introduction to Jean Borella (Angelico Press, 2018). He has also served as editor of collective works: Qu’est-ce que la métaphysique? (2010); Métaphysique des contes de fées (2011); Métaphysique et psychanalyse (2013); and Physique et métaphysique/Physics and Metaphysics with Jean Borella and Wolfgang Smith (forthcoming in France and the United States).
As an editor at L’Harmattan, he was instrumental in publishing works by Robert Bolton, Jean Borella, Gilbert Durand, Frithjof Schuon, Wolfgang Smith and several other specialists. Dr. Bérard is also an executive manager for international aeronautic groups and gives pro bono assistance in business management to start-ups and small companies.