philosopher for hire. A practical philosophy blog.


Artificial Intelligence

The term 'artificial intelligence' belongs to the same class of concepts as, for example, 'people's democracy'. The adjective changes everything. Just as 'people's democracy' was essentially a totalitarian system – and therefore the polar opposite of what constitutes the encyclopedic definition of democracy – the term 'artificial intelligence' is used for essentially automatic, programmed systems, and is therefore close to concepts such as 'unreflective' or 'instinctive'. That is the opposite of what we expect from human intelligence. And yet such expectations were the genesis of the concept of artificial intelligence (AI), and they are still being formulated.

Let's start with the basics that are fundamental here: what a computer is and how it works. As an example, we will use a toy for 4-year-olds. It is a cuboid with a (partly) transparent casing. It has 'drawers' on the sides and a hole for balls on the top. Depending on which drawers are pulled out and which are not, a ball dropped in at the top travels inside the toy in various ways, leaving through one of several holes located at the bottom. For a 4-year-old it is great fun – watching how the course of the ball changes depending on the setting of the drawers (switches). For us, it is an ideal example of how a processor (computer) works. That is, in fact, how every CPU works. The processor is our cuboid, and the balls are electrical impulses 'running into' it through some pins and leaving it through others – quite like our balls, thrown in through one hole to fall out through another. The transistors the processor is built of serve as the drawers (switches) that can be in or out (i.e., switched to different states) in order to change the course of the electrical impulse (our ball) inside the processor.
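To make the analogy concrete, here is a minimal sketch in C. The routing rule below is invented purely for illustration – a real toy (or CPU) routes differently – but the principle it shows is the same: the switch settings, and nothing else, decide the path of the ball.

#include <stdio.h>

/* The 'drawers' are the bits of a switch word; their settings decide
   through which hole at the bottom the ball leaves the toy. */
int drop_ball(int entry_hole, unsigned drawers)
{
    int position = entry_hole;
    for (int level = 0; level < 8; level++)   /* eight rows of drawers */
        if ((drawers >> level) & 1)           /* is this drawer pulled out? */
            position ^= 1 << (level % 3);     /* deflect the ball */
    return position % 4;                      /* one of four exit holes */
}

int main(void)
{
    /* The same ball, two different drawer settings, two different exits. */
    printf("%d\n", drop_ball(2, 0x0F));       /* prints 0 */
    printf("%d\n", drop_ball(2, 0xF0));       /* prints 3 */
    return 0;
}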

So the processor (as to its principle of operation) is nothing more than a simple toy for 4-year-olds. It is just that we throw in not one ball at a time but several dozen, and we repeat this action billions of times per second. And we have not four or six drawers but a few billion. Does anyone sane really believe that if we put billions of balls into a plastic cuboid with billions of drawers, then at some moment in time this cuboid (or the balls? or one plus the other? or perhaps the mere movement of these balls?) will become consciousness? And that it will want to watch the sunset or talk about Shakespeare's poetry? If so, then self-consciousness should also be expected from the planet Earth or the oceans.

Are even 100 trillion plastic balls, running along the most complicated paths in a huge plastic cuboid with trillions of movable drawers whose positions change due to the balls' movement, able to cause a qualitative leap and result in the "digital singularity" described by wise professors as self-awareness? And this pomposity… We stand at the threshold of the 'Big Change', after which nothing will be the same, our world will change completely, and so on, and so on… In short, the typical apocalyptic visions present in every era for centuries. Nihil novi sub sole.

I read about ideas like: “If we add up many specialized (intelligent) systems, we will get a ‘general intelligence’ as a result”. It is like saying: “If we add up many modern specialized garden tools, we will get a gardener as a result”. No, we won’t. You can’t add an electric hedge trimmer to a garden irrigation system. Just like you can’t add a quantitative system based on partial differential equations to an advanced search engine.

And let us not be confused by wise-sounding words like 'quantum effects' or even 'quantum microprocessors'. They do not change the essence of things – just as phosphorescent or faster-than-sound balls would not change the way our toy works. The funniest thing is that this very idea was popularized by the famous SF movie of the '80s. Skynet from "The Terminator" is based on this concept – the belief that quantity will turn into quality in a natural, spontaneous way. The same way of thinking, in the pre-electronic era, resulted in the belief that a thinking machine is just a matter of a sufficient number of gearwheels. In fact, we are not that far away from this thought – with our CPUs, which work in the same way as a primitive toy for children.

It is even easier to see if we put it against its historical background. This kind of thinking has repeated itself for centuries. Mary Shelley's Frankenstein, written at the dawn of the electricity era, is a good example: a strong faith that becoming gods – able to create life, intelligence, new beings – is at our fingertips. Each new revolution – mechanical, electrical, or the contemporary IT one – is assumed to let us pass this threshold. This is a very strong belief. A part of human nature. But should we rely on beliefs and deep faith where logical, reasonable thinking is enough? Anyway, whoever wants to believe may believe. That is the freedom granted by free will. Something very hard to instill in machines.

To make a computer perform something (anything), we have to define it precisely first. It has to be a strict, mathematical definition. Even more: it has to be numerical – that is, representable as ones and zeroes only. If we want a computer to classify facts, for example, we have to enumerate all the factors of such a classification and the weights given to each factor, define how to measure 'a fact' against each factor, define what 'a fact' is (precisely!), define how and through what means (enumerate again!) to acquire such 'facts', and so on… Everything strict and precise – black and white. If we want grays, we have to define where they start, where they end, how to measure them, how to distinguish them, and so on. Everything needs a precise, strict definition in the computer world. This leads us to an unsolvable problem: how do we precisely (numerically – using zeroes and ones only) define intelligence, thinking, self-awareness? Is it feasible at all?
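To see how demanding this is, here is a minimal sketch in C of the 'fact classification' example above. The factors, weights, threshold and measurements are all invented for illustration – and that is exactly the point: every one of them has to be invented and fixed numerically before the computer can do anything.

#include <stdio.h>

#define FACTORS 3

/* Every factor and its weight must be enumerated and quantified in advance. */
static const double weights[FACTORS] = { 0.5, 0.3, 0.2 };

/* A 'fact' must already be reduced to numbers before the CPU can touch it. */
int classify(const double measured[FACTORS])
{
    double score = 0.0;
    for (int f = 0; f < FACTORS; f++)
        score += weights[f] * measured[f];    /* weighted sum of the factors */
    return score > 0.5;                       /* the 'gray' cut-off, fixed in advance */
}

int main(void)
{
    double fact[FACTORS] = { 0.9, 0.2, 0.4 }; /* one 'fact', already numerical */
    printf("%d\n", classify(fact));           /* prints 1: 'accepted' */
    return 0;
}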

Let’s have a look at the software world now. At the highest level, we can talk about all these fancy things like neural networks, heuristic algorithms, and similar subjects described in wise books. At a lower level, we have programming languages. Software source code containing lines like:

int array[5];                       /* storage for the loop to fill */

for( int i = 0; i < 5; i = i + 1 )  /* repeat five times */
   array[i] = 7 + i;                /* the array becomes 7, 8, 9, 10, 11 */

We need to understand that every, I mean EVERY, idea introduced in the computer world ends up being converted to such software source code. There is no exception. There cannot be. There is no other way of introducing anything into the CPU's processing than to write it down as software source code first. We can talk about high-level programming languages, about state machines, about all the other modern IT concepts. But all these things have been written first in such source code. Or they are based on software libraries, modules, building blocks written in source code. We know no other way. If someone thinks otherwise, he or she simply does not know what happens in the background – what is behind all those sophisticated tools they use. There is always the source code. A set of very simple instructions, very primitive decisions, like: if 'a' is greater than 10, do this, otherwise do something else (sketched below). But millions of such simple instructions let us create very complex things. Things so complex and sophisticated that to laypeople it looks as if the computer “thinks” or “behaves” like a human being. It may seem so. But we have to keep in mind that all these awesome applications – chess playing, vehicle driving, stock exchange data analysis – are in fact just sets of billions of very simple instructions written in programming languages.

We can have (multiple) levels of abstraction, like graphical boxes and graphs, which allow us to visually design new algorithms without writing source code or even knowing any programming language. It can go even higher, get more abstract. But all these tools have been written in a programming language first. And what is more important: to be executed on a CPU, to result in any action on the computer's side, everything we create with these high-level tools has to be translated into the language 'understandable' by the machine – that is, the computer.
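Here is that primitive decision written out as a complete, runnable C program (the values and messages are placeholders, of course):

#include <stdio.h>

int main(void)
{
    int a = 12;
    if (a > 10)                          /* a primitive, two-way decision */
        printf("do this\n");
    else
        printf("do something else\n");
    return 0;
}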

So, this code created by software developers (or the tasks designed in the high-level tools) has to be translated into machine code. Which, again in a human-readable form known as assembly, contains lines like:

mov AX, BX

add AX, 7

It is easy to see that the instructions are getting more and more primitive. More and more simple. There is no room here for high-level concepts. There are only very simple logical and mathematical instructions. But 'logical' here does not mean what Sherlock Holmes meant by 'logical'. 'Logic' here means binary operations: changing zeros to ones or vice versa. And 'mathematical' means adding, subtracting, multiplying numbers. No 'Einstein level' mathematical concept is present here.
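A minimal C illustration of what 'logic' and 'mathematics' mean at this level (the values are arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned char x = 0x0F;                    /* the bit pattern 00001111 */
    unsigned char flipped = (unsigned char)~x; /* 'logic': flip the bits: 11110000 */
    unsigned char masked  = x & 0x03;          /* 'logic': binary AND: 00000011 */
    int sum = 7 + 5;                           /* 'mathematics': plain addition */
    printf("%02X %02X %d\n", (unsigned)flipped, (unsigned)masked, sum); /* F0 03 12 */
    return 0;
}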

But to be accepted by the CPU, instructions need to be translated (simplified) even further – into machine instructions, that is, chains of zeros and ones. One could say: 'plastic ball' and 'no plastic ball'. At the lowest level, all the breathtaking inventions of the contemporary computer geniuses end up as long chains of zeroes and ones. 'Electric impulse on a pin' or 'no electric impulse on a pin'. In our example: 'plastic ball at the entrance' or 'no plastic ball at the entrance'. This is it. Either a ball or an electric impulse. Little difference, in fact. Unless someone believes in the magical properties of an electric impulse. Like Mary Shelley in her "Frankenstein".
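For example (assuming the standard 8086 opcode table), the instruction 'add AX, 7' from above assembles to the three bytes 0x05 0x07 0x00. A tiny C program can print it as the chain of zeros and ones the CPU actually receives:

#include <stdio.h>

int main(void)
{
    /* 8086 machine code for "add AX, 7": opcode 0x05 (ADD AX, imm16)
       followed by the constant 7 as a 16-bit little-endian value. */
    unsigned char code[] = { 0x05, 0x07, 0x00 };

    for (int i = 0; i < 3; i++)            /* each byte... */
        for (int b = 7; b >= 0; b--)       /* ...bit by bit, highest first */
            printf("%d", (code[i] >> b) & 1);
    printf("\n");                          /* prints 000001010000011100000000 */
    return 0;
}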

Of course, it would be hard to replicate contemporary processors as a plastic toy. But it would be possible to replicate a real 8-bit processor, for example – one built with 3500 or 6000 transistors. Or even the famous predecessor of the x86 architecture, the 8086, with its 29000 transistors. It would be an enormous 'toy', but it could (if properly designed) fully replicate such a basic processor. If we take into account that the principle of operation is the same for an old 8-bit machine and for modern 64-bit processors, then it becomes apparent that our toy and a processor are interchangeable (as to the principle).

There are many specialized machines that sew, cut, wrap and perform many other specialized tasks much better and faster than humans. Since this industrial, machine-based revolution is more than 100 years old, no one today is afraid of people being made obsolete, or even replaced, by machines that can do everything faster and better than humans do – not only cut, sew, wrap, etc. But that very fear was present in the second half of the nineteenth century and even at the beginning of the twentieth. Today, we have machines (yes! machines) which perform specialized tasks like operating drones, playing chess or other games, placing bets on stock markets, etc. And many people think that humankind will soon become obsolete. That these machines (computers) will replace humans. The answer is: try to use a chess-playing machine to operate a drone, or even to play a StarCraft computer game, or vice versa. Try to use it for any task other than the one it was designed for. Of course, it may beat many human chess players. Just like an industrial cutting machine will beat a dozen humans with scissors.

A chess-playing program may seem 'intelligent' in its behavior just because it is very complex. Its complexity is enormous – unbelievable to laypeople. The full source code – the set of instructions for this software – consists of millions, or even tens of millions, of lines of code. If printed in standard books, it would take up many library shelves. But it is only able to play chess. Even less: it requires data to be entered in a predefined, specific way. You cannot “ask” it to play using a typical wooden chessboard. It is more like a very large mechanism. You need to “press” specific “buttons” (input data in a specific form) to make it work. And it works according to its design. Like all mechanisms – machines – do. I know that many people are impressed by these chess-playing programs. “I saw it working” – they say – “It plays in a completely different way than humans! It is a real AI”. Well, it is hard to argue with such laypeople. They are like the people who, seeing the first industrial robotic arms working in car factories, were certain that they would soon see robots walking, jumping, running just like humans do. “Look how fast and precise this arm is! Much faster and more precise than our arms! Now humanoid robots are inevitable!”. In fact, creating a humanoid robot which can walk, run and jump like a human would be much easier than creating a self-conscious, intelligent computer – an AI. Yet even this much easier task is still beyond our reach.

Why is it easier? Because it requires only mechanisms. Precise, properly working mechanisms. Nothing more. In the case of an AI, we have to expect a qualitative leap – we have to expect the machine to start 'functioning' on a totally different level. How? Somehow. Anyhow. Who cares, as long as it feeds the faith of millions? The faith that we are just one little step from creating an AI. A human-like AI. And soon after that – an over-human AI. Who cares about the details (the reality), when one can write essays, articles, books about all these fantastic possibilities which are just around the corner? Not to mention that it sells better than the plain, non-fantastic reality.

Anyway, the question behind the AI is:

“Do we believe that we could make a plastic toy for 4-year-olds into a thinking being?”

All that I wrote above has profound consequences. It is not only a theoretical question. Our limitations in creating an AI influence our everyday life. For example, these limitations make truly autonomous cars impossible, as I show in my next post.

Actually, this topic could end here. However, for those who are interested, I will present a coarse and synthetic description of what currently goes under the name of 'artificial intelligence'. As far as I can tell, contemporary attempts to create 'intelligent' solutions cover the space from neural networks to algorithmic (expert) systems.

Systems based on algorithms are just automatons. They operate in a way which is strictly planned and predicted by their creators (bugs aside, of course). Even if a random element can be found in them, it is still within the scale and scope strictly designed by the programmers. By definition, they are not able to go beyond the stage of a mechanism – a very complex one, but still an automaton.
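A minimal sketch in C of such a 'designed' random element – the program may pick 'freely', but only from a list its author enumerated in advance (the move list is invented for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* The whole universe of 'choices' is fixed by the programmer. */
    const char *moves[] = { "e4", "d4", "c4" };

    srand((unsigned)time(NULL));          /* seed the random generator */
    printf("%s\n", moves[rand() % 3]);    /* 'random', but never outside the list */
    return 0;
}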

As for neural networks, their applications are extremely difficult and, apart from a few cases (OCR – recognition of letters/signs, etc.), bring rather poor results. The reason is that no one is able to answer the basic question of how the human brain works. That is, how does it happen that from a tangle of neurons transmitting signals to one another a rational, continuously existing consciousness is born? In my humble opinion, the material brain is not able to create a rational, free-willed (self-)consciousness. But this is a topic for a different, more metaphysical article. Anyway, we know the micro scale pretty well – I mean, the single neuron. And, rather poorly, the macro scale, that is, the entire brain. This knowledge comes from facts like: if we damage this area of the brain, the patient loses these functions; if we damage another, he or she loses a different skill. A similar range of 'knowledge' is provided by studies on the degree of stimulation of brain areas in the course of solving specific tasks. Thanks to this research, we know (more or less) what happens where in the brain. But the fundamental question of HOW still cannot be answered. Between the micro and the macro scale, the way our brain works is an area both unknown and incomprehensible. The only answers we have here are that it is complicated, extremely complex, that we need more research, etc., etc.
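For comparison, the 'neuron' of an artificial neural network – the building block modeled on that micro scale – is almost trivially simple. A minimal sketch in C (the weights and inputs are invented):

#include <stdio.h>

/* One artificial neuron: a weighted sum of inputs plus a threshold. */
double neuron(const double in[3], const double w[3], double bias)
{
    double sum = bias;
    for (int i = 0; i < 3; i++)
        sum += in[i] * w[i];              /* weighted sum of incoming signals */
    return sum > 0.0 ? 1.0 : 0.0;         /* fire or stay silent */
}

int main(void)
{
    double in[3] = { 1.0, 0.0, 1.0 };
    double w[3]  = { 0.4, 0.9, -0.2 };
    printf("%g\n", neuron(in, w, -0.1));  /* 0.4 - 0.2 - 0.1 = 0.1 -> prints 1 */
    return 0;
}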

So, how is this 'digital ego' going to be created, if we have no idea how our own mind works? How does it happen that quite different human brains provide a very similar range of basic abilities? From our experience with neural networks we know they are very sensitive to even small changes in the number of neurons and/or their organization: connections, layers, etc. Even a very small change results in large differences – usually a degradation – in the neural network's behavior.




philosopher4hire on Twitter

If you like this content and you would like to see more,
then support me in my efforts, please!