

Artificial Intelligence

The term “artificial intelligence” belongs to the same class of concepts as “people’s democracy.” The adjective changes everything. “People’s democracy” named an essentially totalitarian system, and was therefore at the opposite pole from what the encyclopedic definition of democracy describes. Similarly, the term “artificial intelligence” is used for essentially automatic, programmed systems, and is therefore closer to concepts such as “unreflective” or “instinctive” – the opposite of what we expect from human intelligence. Yet human intelligence was the root of the concept of AI, and expectations of achieving a human-like AI are still being formulated.

Let’s start with the fundamentals. What is a computer and how does it work? As an example, we will use a toy for 4-year-olds. It is a cuboid with a (partly) transparent casing. It has “drawers” on the sides and a hole for balls on the top. Depending on which drawers are pulled out and which are pushed in, a ball dropped in at the top travels through the toy along different paths, leaving through one of several holes at the bottom. For a 4-year-old it’s great fun – watching the ball’s course change with the setting of the drawers (switches). For us, it is an ideal model of how a processor works. That is, in fact, how every CPU works. The processor is our cuboid; the balls are electrical impulses “running into” it through some of the pins and leaving it through others – quite like our balls, thrown in through one hole to fall out through another. The transistors of which the processor is built serve as the drawers (switches): they can be in or out (switched to different states) to change the course of the electrical impulse (our ball) inside the processor.
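
To make the analogy concrete, here is a minimal sketch in C (the language of the source code shown later in this post) of such a toy with three drawers. The drawer settings and the routing rule are invented, purely for illustration:

#include <stdio.h>

int main(void)
{
    /* drawer settings: 1 = pulled out, 0 = pushed in (invented values) */
    int drawers[3] = { 1, 0, 1 };
    int hole = 0;
    int i;

    /* at each level, the drawer deflects the ball one way or the other,
       exactly like a switch routing an electric impulse */
    for (i = 0; i < 3; i = i + 1)
        hole = hole * 2 + drawers[i];

    printf("the ball falls out of hole %d\n", hole);
    return 0;
}

Three drawers give eight possible exits. A real CPU is no different in principle; it simply has billions of such switches.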

So the processor (as to the principle of operation) is nothing more than a simple toy for 4-year-olds. It is just that we throw in not one ball at a time but several dozen, and we repeat this action billions of times per second. And we have not four or six drawers but a few billion. Does anyone sane really believe that if we put billions of balls into a plastic cuboid with billions of drawers, then at some moment in time this cuboid?, these balls?, one plus the other?, or perhaps the mere movement of these balls, will become consciousness? And will want to watch the sunset or talk about Shakespeare’s poetry? If so, then self-consciousness should be expected from the planet Earth or its oceans.

Could even 100 trillion plastic balls, running along the most complicated paths in a huge plastic cuboid with trillions of movable drawers whose positions change as the balls move, cause a qualitative leap and result in the “digital singularity” described by wise professors as self-awareness? And this pomposity... We stand at the threshold of the “Big Change,” after which nothing will be the same, our world will change completely, and so on, and so on – in short, the typical apocalyptic visions present in every era for centuries. Nihil novi sub sole.

I read about ideas like “If we add up many specialized (intelligent) systems, we will get a ‘general intelligence’ as a result.” It is like saying, “If we add up many modern specialized garden tools, we will get a gardener as a result.” No, we won’t. You can’t add an electric hedge trimmer to a garden irrigation system. Just as you can’t add a quantitative system based on partial differential equations to an advanced search engine.

And let us not be confused by wise-sounding words like “quantum effects” or even “quantum microprocessors.” They do not change the essence of things, just as phosphorescent or faster-than-sound balls would not change the way our toy works. The funniest thing is that this very idea was popularized by a famous sci-fi movie of the ’80s. Skynet from “The Terminator” is based on exactly this concept – the belief that quantity will turn into quality in a natural, spontaneous way. The same way of thinking, in the pre-electronic era, produced the belief that a thinking machine is just a matter of a sufficient number of gearwheels. In fact, we are not that far away from this thought – with our CPUs, which work the same way as a primitive toy for children.

It is even easier to see if we put it against its historical background. This kind of thinking has repeated itself for centuries. Mary Shelley’s “Frankenstein,” written at the dawn of the electric era, is a good example: a strong faith that becoming gods – able to create life, intelligence, and new beings – is at our fingertips. Each new revolution – mechanical, electrical, or the contemporary IT one – is assumed to propel us across this threshold. This is a very strong belief, a part of human nature. But should we rely on beliefs and deep faith where logical, reasonable thinking is enough? Anyway, whoever wants to believe may believe. This is the principle of free will – something very hard to engraft into machines.

To make computers perform something (anything), we first have to define it precisely. It has to be a strict, mathematical definition. Even more, it has to be numerical – that is, representable as ones and zeroes only. If we want a computer to classify facts, for example, we have to enumerate all the factors of such a classification, specify the weight given to each factor, define how to measure ‘a fact’ against each factor, describe what ‘a fact’ is (precisely!), outline how and through what means (enumerate again!) such ‘facts’ are acquired, and so on. Everything is strict and precise – black and white. If we want grays, we have to define where they start, where they end, how to measure them, how to distinguish them, and so on. Everything needs a precise, strict definition in the computer world. This leads us to an unsolvable problem: how do we precisely (numerically, using only zeroes and ones) define intelligence, thinking, and self-awareness? Is it feasible at all?
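
To see how “classifying facts” must be flattened into numbers, consider this hedged sketch in C. Every factor, weight, and threshold below is invented for illustration; a real system would need thousands of such definitions, each one exact:

#include <stdio.h>

int main(void)
{
    double weights[3] = { 0.5, 0.3, 0.2 };  /* the importance of each factor (invented) */
    double fact[3]    = { 0.9, 0.4, 0.7 };  /* "a fact", already measured as numbers */
    double score = 0.0;
    int i;

    for (i = 0; i < 3; i = i + 1)
        score = score + weights[i] * fact[i];

    /* even the border between the "grays" must be one exact number */
    if (score > 0.6)
        printf("class A\n");
    else
        printf("class B\n");
    return 0;
}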

Let’s have a look at the software world now. At the highest level, we can talk about all these fancy things like neural networks, heuristic algorithms, and similar subjects described in wise books. At a lower level, we have programming languages – software source code containing lines like:

for (i = 0; i < 5; i = i + 1)
   array[i] = 7 + i;    /* fill the array with the values 7, 8, 9, 10, 11 */

We need to understand that every, I mean EVERY, idea introduced in the computer world ends up being converted to such software source code. There is no exception. There cannot be. There is no other way of introducing anything into the CPU’s processing than writing it down as software source code first. We can talk about high-level programming languages, about state machines, and about all the other modern IT concepts. But all these things were first written in such source code, or they are based on software libraries, modules, and building blocks written in source code. We know no other way. If someone thinks otherwise, he or she simply does not know what happens in the background – what is behind all those sophisticated tools they use. There is always the source code: a set of very simple instructions and very primitive decisions like “if ‘a’ is greater than 10 do this, otherwise do something else.”
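
To make this concrete, here is that very decision written as a complete, runnable C program (the variable name and the values are arbitrary, chosen only for illustration):

#include <stdio.h>

int main(void)
{
    int a = 12;    /* an arbitrary input value */

    /* the kind of primitive decision everything ultimately boils down to */
    if (a > 10)
        printf("do this\n");
    else
        printf("do something else\n");

    return 0;
}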

But millions of such simple instructions let us create very complex things – things so complex and sophisticated that to non-specialists it looks like the computer “thinks” or “behaves” like a human being. It may seem so. But we have to keep in mind that all these awesome applications, like playing chess, driving vehicles, or analyzing stock exchange data, are in fact just sets of billions of very simple instructions written in programming languages. We can have multiple levels of abstraction, like graphical boxes or graphs, which allow us to design new algorithms visually, without writing source code or even knowing any programming language. It can go even higher, even more abstract. But all these tools were first written in a programming language. And more importantly, to be executed on a CPU – to result in any action on the computer’s side – everything we create with these high-level tools has to be translated into the language “understandable” by the machine, that is, the computer.

So, the code created by software developers (or the tasks designed in high-level tools) has to be translated into machine code. In its human-readable form, known as assembly language, it contains lines like:

mov AX, BX    ; copy the contents of register BX into register AX
add AX, 7     ; add 7 to the value in AX

It is easy to see that the instructions are getting more and more primitive, more and more simple. There is no room for high-level concepts here. There are only very simple logical and mathematical instructions. But “logical” here does not mean what Sherlock Holmes meant by “logical.” “Logic” here means binary operations: changing zeros to ones or vice versa. And “mathematical” means adding, subtracting, and multiplying numbers. No “Einstein-level” mathematical concept is present here.
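
A short illustration of what “logic” and “mathematics” mean at this level – the values below are arbitrary:

#include <stdio.h>

int main(void)
{
    unsigned a = 0xA;    /* binary 1010 */
    unsigned b = 0x6;    /* binary 0110 */

    printf("AND: %X\n", a & b);    /* 0010 = 2 */
    printf("OR:  %X\n", a | b);    /* 1110 = E */
    printf("XOR: %X\n", a ^ b);    /* 1100 = C */
    printf("ADD: %u\n", a + b);    /* 10 + 6 = 16 */
    return 0;
}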

But to be accepted by the CPU, the instructions need to be translated (simplified) even further – into machine instructions, that is, chains of zeros and ones. One could say: “plastic ball” and “no plastic ball.” At the lowest level, all the breathtaking inventions of the contemporary computer geniuses end up as long chains of zeroes and ones: “electric impulse on a pin” or “no electric impulse on a pin.” In our example: “plastic ball at the entrance” or “no plastic ball at the entrance.” That is it. Either a ball or an electric impulse. There is little difference, in fact, unless someone believes in the magical properties of an electric impulse – like Mary Shelley in her “Frankenstein.”
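
For instance, the two assembly lines above translate – in the classic 16-bit x86 encoding, a sketch based on the standard 8086 instruction set – into the following chains of zeros and ones:

mov AX, BX    ->  bytes 89 D8     ->  10001001 11011000
add AX, 7     ->  bytes 05 07 00  ->  00000101 00000111 00000000

Each of those bits is one “ball or no ball.”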

Of course, it would be hard to replicate a contemporary processor as a plastic toy. But it should be possible to replicate a real 8-bit processor – one built with 3,500 or 6,000 transistors – or even the famous predecessor of the x86 architecture, the 8086, with its 29,000 transistors. It would be an enormous “toy,” but (if properly designed) it could fully replicate such a basic processor. And since the principle of operation is the same for an old 8-bit machine and for a modern 64-bit processor, it becomes apparent that our toy and a processor are interchangeable (as far as the principle goes).

There are many specialized machines that sew, cut, wrap, and perform many other specialized tasks much better and faster than humans. Since this industrial, machine-based revolution is more than 100 years old, no one today is afraid of people being made obsolete, or even replaced, by machines that cut, sew, and wrap faster and better than humans do. But that very fear was present in the second half of the 19th century and even at the beginning of the 20th. Today, we have machines (yes, machines!) which perform specialized tasks like operating drones, playing chess or other games, placing bets on stock markets, etc. And many people think that humankind will soon become obsolete – that these machines (computers) will replace humans. The answer is: try to use a chess-playing machine to operate a drone, or even to play a StarCraft computer game, or vice versa. Try to use it for any task other than the one it was designed for. Of course, it may beat many human chess players. Just as an industrial cutting machine will beat a dozen humans with scissors.

The chess-playing program may seem “intelligent” in its behavior just because it is very complex. Its complexity is enormous – unbelievable to non-specialists. The full source code runs to millions or even tens of millions of lines. If printed in standard books, it would fill many library shelves. But it is only able to play chess. Even less: it requires data to be entered in a predefined, specific way. You cannot “ask” it to play using an ordinary wooden chessboard. It is more like a very large mechanism. You need to “press” specific “buttons” (input data in a specific form) to make it work. And it works according to its design – like all mechanisms (machines) do. I know that many people are impressed by these chess-playing programs. “I saw it working,” they say. “It plays in a completely different way than humans! It is a real AI.” Well, it is hard to argue with such non-specialists. They are like the people who, seeing the first industrial robotic arms working in car factories, were certain they would soon see robots walking, jumping, and running just like humans do. “Look how fast and precise this arm is! Much faster and more precise than our arms! Now humanoid robots are inevitable!” In fact, creating a humanoid robot that can walk, run, and jump like a human would be much easier than creating a self-conscious, intelligent computer – an AI. Yet even this much easier task is still beyond our reach.

Why is it easier? Because it requires only mechanisms. Precise, properly working mechanisms. Nothing more. In the case of an AI, we have to expect a qualitative leap – we have to expect the machine to start ‘functioning’ on a totally different level. How? Somehow. Anyhow. Who cares, when the faith of millions is being fed? The faith that we are just one little step from creating an AI. A human-like AI. And soon after that, the over-human AI. Who cares about the details (the reality), when one can write essays, articles, and books about all these fantastic possibilities that are just around the corner? It sells better than the plain, non-fantastic reality.

Anyway, the question behind AI is:

“Do we believe that we could turn a plastic toy for 4-year-olds into a thinking being?”

All that I wrote above has profound consequences. This is not only a theoretical question. Our limitations in creating an AI influence our everyday life. For example, these limitations make truly autonomous cars impossible, as I show in my next post.

Actually, this topic could end here. However, for those who are interested, I will present a coarse and synthetic description of what currently goes under the name of “artificial intelligence.” As far as I can tell, contemporary attempts to create “intelligent” solutions cover the space from neural networks to algorithmic (expert) systems.

Systems based on algorithms are just automatons. They operate in a way that is strictly planned and predicted by their creators (mistakes aside, of course). Even if a random element can be found in them, it still operates on a scale and within a scope strictly designed by the programmers. By definition, such systems are not able to go beyond the stage of a very complex mechanism – an automaton.
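
A minimal sketch of such an “expert system” in C may help here. The rules and the input “facts” below are invented for illustration, but every real expert system reduces to such programmer-written if-then rules – only there are thousands of them:

#include <stdio.h>

int main(void)
{
    /* input "facts", already numeric (invented values) */
    int temperature = 39;
    int has_rash    = 1;

    /* the whole "expertise" is if-then rules planned by the programmer */
    if (temperature > 38 && has_rash)
        printf("rule 1 fires: suspect measles\n");
    else if (temperature > 38)
        printf("rule 2 fires: suspect flu\n");
    else
        printf("no rule fires\n");

    return 0;
}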

As for neural networks, their applications are extremely difficult and, apart from a few cases (OCR – the recognition of letters/signs – and the like), bring rather poor results. The reason lies in the fact that no one is able to answer the basic question of how the human brain works. That is, how does it happen that, from a tangle of neurons transmitting signals to each other, a rational, continually existing consciousness is born? In my humble opinion, the material brain is not able to create a rational, free-willed (self-)consciousness. But that is a topic for a different, more metaphysical article.

Anyway, we know the micro scale pretty well – I mean the single neuron. And we know the macro scale – the entire brain – rather poorly. This knowledge comes from observing that if a certain area of the brain is damaged, the victim loses certain functions; if another area is damaged, he or she loses different functions. A similar range of “knowledge” is provided by studies of how strongly particular brain areas are stimulated in the course of solving specific tasks. Thanks to this research, we know (more or less) what happens where in the brain. But the fundamental question of HOW still cannot be answered. Between the micro and the macro scale, the way our brains work remains both unknown and incomprehensible. The only answers we have are that it is complicated, extremely complex, that we need more research, etc., etc.
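
And the micro scale really is simple. A single artificial neuron – the building block of every neural network – is just a weighted sum passed through a threshold. A minimal sketch, with invented inputs and weights:

#include <stdio.h>

int main(void)
{
    double inputs[3]  = { 0.2, 0.9, 0.1 };   /* signals from other neurons */
    double weights[3] = { 0.5, -0.3, 0.8 };  /* connection weights (invented) */
    double sum = 0.0;
    int i;

    for (i = 0; i < 3; i = i + 1)
        sum = sum + inputs[i] * weights[i];

    /* fire or not: one more primitive numeric decision */
    printf("neuron output: %d\n", sum > 0.0 ? 1 : 0);
    return 0;
}

Everything we cannot explain lies between this trivial micro scale and the whole brain.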

So, how is this “digital ego” going to be created if we have no idea how our own minds work? How does it happen that quite different human brains provide a very similar range of basic abilities? From our experience with neural networks, we know they are very sensitive to even small changes in the number of neurons and/or their organization: connections, layers, etc. Even a very small change results in large differences – usually degradation – in the neural network’s behavior.



