Artificial Intelligence, Self-Image, and the Mythology of the Future of Education

Avi Warshavsky, CEO of MindCET EdTech Innovation Center
When the Fish Decides to Roll Over
There is a well-known Talmudic tale of a group of sailors who landed on a lonely island in the middle of the sea, lit a fire, and sat down to eat. A short time passed, and the island rolled over; it turned out that they had landed not on an island, but on the back of a giant fish, on which sand had accumulated and vegetation had grown.

The public’s interest in artificial intelligence is somewhat reminiscent of this picture. We talk at length about artificial intelligence and how it will influence reality – some of us with a messianic gleam in our eyes, and others with a look of terror. But artificial intelligence arrived long ago, and all of us are riding on its back (so to speak), without discerning its enormous influence on our lives. Google, Facebook, and IBM are, in effect, giant artificial intelligence factories, creating new knowledge from our vast use of the various services that they offer us.

When we come to examine the influence of artificial intelligence on various areas of life, the major challenge is not only to imagine what the technologies of the future will look like, but also to identify technologies that have all too quickly become obvious to us. We very quickly grow used to solutions that, in a more planned and systematic state of affairs, would have raised numerous question marks. If someone had suggested, twenty years ago, that we give up the human skill of navigation and instead rely blindly on software, the suggestion would have generated real debate. But in fact, from the moment that solutions such as Waze reached a critical mass of users, they swept all of us (apart from a handful of stubborn individuals) along on an enormous wave, without any reflection. The real debate does not begin, and cannot really begin, unless the fish decides to roll over.
When we talk of the encounter between information technologies and education in general, and the integration of artificial intelligence in education in particular, our tendency is to react rather than to take a strategic view. We rush to ask how artificial intelligence can offer better tools for learning, what dangers it brings with it, or what its implications are for the labor market and how best to prepare for them. All of these questions are important and productive, but artificial intelligence demands that we think on a more strategic level, at which we outline our vision for our schools and answer the big questions that direct it.
The Hidden Myths
The word “myth” carries a great deal of erroneous baggage. We tend to see in a myth an undeveloped, perhaps even primitive, way of explaining reality. In popular usage, “myth” is often used synonymously with fantasy, whose key characteristic is its underlying lack of truth. A series of leading 20th-century thinkers, from Ernst Cassirer to Roland Barthes, taught us to take myths more seriously. According to Cassirer, myth is another pair of glasses through which we view reality, just like science, art, or religion. Through these glasses, Cassirer claims, we express that which we cannot express by other means.

Neil Postman devoted a significant portion of his book The End of Education to the important role of myths in the education system. Postman preferred not to use the word “myth” in this context, because of its problematic connotations; he spoke instead of narratives, or of “gods,” that lead education systems. Postman demonstrates how, in the Christian Middle Ages, the religious-ecclesiastical narrative was the constitutive story of the education system, while the Enlightenment brought with it a scientific narrative, and from there came the next narratives/gods: technology and consumer culture. These major narratives, these myths, play an essential role in education systems, and answer the big question of “To what end?” – why are we in school, why is our school built the way it is, and what kind of world is it trying to prepare us for? Thus, for example, the function of the school, according to the Thomas Jefferson narrative, is to ensure that citizens know when and how to defend their freedom, while the Protestant ethic wants the school to teach us to stick to hard work and to develop our ability to delay gratification.
We are also familiar with “smaller” narratives, such as that in which we learn arithmetic so as not to be cheated at the grocery store, or Talmud so as to sharpen our minds. In the complex, multicultural reality of the present day, there is no single founding myth for our education systems, other than what is commonly referred to as “popular education,” nourished by an eclectic fabric of beliefs, which may not even be consistent, were we to apply a sterile, academic analysis to them. In spite of their lack of coherence, and in spite of the fact that these myths are often not formulated expressly, they play a key role in directing the world of education. Artificial intelligence may have an enormous influence on a broad spectrum of educational narratives and myths, and in this article we will focus on one important, central myth – the myth of explanation. In order to understand how artificial intelligence influences the myth of explanation, we should recall the dramatic moments when an artificial intelligence machine bested the most gifted of human players.
Machine Beats Man: The Myth that was Written Backwards
A chess player sits before a chessboard, beads of perspiration dotting his brow and running down his neck; he takes out a handkerchief to wipe them away. The audience around him is spellbound, anxiously following every move. The surprising aspect of this picture is actually the other side of the table – the chair opposite the player is empty; he is playing against a machine which, were it not for the tendency toward the dramatic, might have been represented by an almost invisible box. Our player is still struggling along, but the audience already knows that his loss is a foregone conclusion. This picture is familiar to us from the loss by the Korean Go champion, Lee Sedol, to Google’s AlphaGo in 2016, and the loss by the world’s leading chess master, Garry Kasparov, to IBM’s Deep Blue in 1997. The picture is, in fact, a lot older. The Turk was a mechanical chess player that toured Europe in the 18th and 19th centuries, defeating celebrated chess players and notable statesmen such as Benjamin Franklin and Napoleon Bonaparte. The Turk, however, was a fraud – secreted within the machine was a diminutive man who made the actual moves while the audience thought it was the machine playing. The story of the Turk, though, is not merely an amusing old-time tale of deception. It shows that the image of a man-made machine whose performance exceeds that of a human being predates any machine with such a capability, and demonstrates just how invested we are in such a story. After all, we didn’t just wake up one morning and discover, to our surprise, a technology that was “smarter” than us. We had looked forward to that moment, we dreamed of it, and we advanced toward it with our eyes wide open. One of the fundamental papers in computer science, written by Claude Shannon in 1950, dealt with the possibility of a chess game between a man and a machine.
Shannon wrote his paper in the years when computers were taking their first steps – the same year in which Alan Turing formulated what would later be known as the Turing test, and a time when the computing power of the enormous computers of the day was smaller than that of the most negligible app on the phone in our pocket. This did not stop Shannon from being sufficiently visionary to be fascinated by the idea of a competition between man and machine. Such a competition was mythical, and we are used to myths that hark back to the past. The myth of artificial intelligence, however, is one that was written backwards – a myth of the future. And like every myth, it involved drama, and it was the story of a struggle. It is not for nothing that the picture that immediately comes to mind when we hear the words “artificial intelligence” is that of a chess match between man and machine – the image of competition; it is not for nothing that we also seek a deterministic aspect in this myth, which usually ends with man’s loss, as in the ancient depictions of Greek tragedy. But this is a good point to stop and ask: Why did we so much want the machines to defeat us? Where does the deep sentiment that creates this myth come from? What part of our humanity does this myth perpetuate? To touch on this question, we should look at later manifestations of the myth – the moments in which this victory actually took place.
From Brute-Force Artificial Intelligence to Intuition
We will not enter into a complicated discussion of the exact definition of intelligence in general, or of intelligence in the context of artificial intelligence in particular. For the purposes of this discussion, we will use a narrow, somewhat imprecise, but reasonably practical definition, under which intelligence is the ability to perform tasks. In this sense a cat has higher intelligence than a spider, a chimpanzee higher than a cat, and man higher than a chimpanzee. The news about artificial intelligence-based programs is that, at certain tasks, they have a higher intelligence than man. It is important to note that, under the definition we have adopted, we are talking about measuring task performance – a program that is capable of defeating a man at chess is better than a human being at performing this task, but this tells us nothing about other mental qualities that it may or may not have. For the purposes of this discussion, it may be able to defeat the world champion at chess, yet still be as sensitive as a block of wood, or as profound as a bowl of whipped cream.
In his article, Shannon attempted to characterize, on the theoretical level, the path that would be followed, in the future, in chess games between man and machine. Shannon distinguished between two types of possible victory by the computer over the human being:
Type A, also known as brute-force artificial intelligence, is based on an algorithm that traverses all the possible states of a chess game, and tries all of them, until victory is achieved. A chess game has about 300 billion possibilities in only the first four moves, most of them not particularly successful. In other words, a computer that wins using a type A strategy will have to consider an enormous number of states within a very short time.
Type B, which is more sophisticated, is able to focus on a smaller number of promising moves, and achieve victory through them. This ability might be referred to, by a somewhat inaccurate analogy, as intuition.
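To make the distinction concrete, here is a small, hypothetical sketch – not Shannon’s own formulation, and using one-pile Nim (each player removes 1–3 stones; whoever takes the last stone wins) rather than chess. A type A player simply searches every line of play to the very end; the visit counter shows how quickly even a toy game’s tree grows.

```python
def negamax(pile, visited):
    """Type A, brute force: exhaustively search every line of play in
    one-pile Nim (remove 1-3 stones; taking the last stone wins).
    Returns +1 if the side to move can force a win, -1 otherwise.
    `visited` is a one-element list counting the positions examined."""
    visited[0] += 1
    if pile == 0:
        return -1  # the previous player took the last stone and won
    return max(-negamax(pile - take, visited)
               for take in (1, 2, 3) if take <= pile)

# Even a pile of 20 stones forces the search through hundreds of
# thousands of positions, although the game itself is trivial.
counter = [0]
outcome = negamax(20, counter)
```

A type B player, by contrast, would evaluate only a handful of promising moves per position – in Nim, the single move that leaves the opponent a multiple of four stones – collapsing the search to almost nothing. In chess, where no such simple rule exists, that selectivity is precisely what Shannon likened to intuition.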
Machine Beats Man: A Play in Two Acts
Type B play requires much greater sophistication, yet Shannon assumed that victory over a human chess player would take place specifically through artificial intelligence of type B, rather than type A, mainly because type A requires enormous computing power, the likes of which did not seem, during the 1950s, to be achievable in the foreseeable future.
Shannon erred in his prediction – in 1997, almost fifty years after the article was written, the world champion chess player, Garry Kasparov, was beaten by IBM’s Deep Blue. Deep Blue’s victory was a type A victory – the computer was sufficiently fast and powerful to examine an enormous number of possible moves in advance, and to choose the most appropriate one. It certainly wasn’t the most sophisticated of programs; or, as Kasparov himself put it, it was intelligent in the same way that an alarm clock set to ring at a particular time is intelligent.
Almost twenty years later, however, a type B victory was also achieved. Google’s AlphaGo defeated Lee Sedol, world champion Go player.
Go is a traditional Chinese game with an enormous number of possible board positions – more than the number of atoms in the observable universe – which made it a much greater challenge than chess.
For AlphaGo to be able to play this complex game, it learned from about 160,000 games of human experts, covering some 30 million board positions – far more than a human being could dream of grasping. But AlphaGo was not the final stage in the story. The next version, AlphaGo Zero, taught itself to play without its learning being based on human games at all. AlphaGo Zero took three days to learn the game, after which it defeated its predecessor 100 games to 0.
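The leap from learning out of human games to learning out of self-play alone can be illustrated with a deliberately tiny tabular sketch. This is emphatically not AlphaGo Zero’s actual method (which combines deep neural networks with Monte Carlo tree search); it is a hypothetical toy agent that learns one-pile Nim (remove 1–3 stones; taking the last stone wins) purely by playing against itself, with no human examples:

```python
import random

ACTIONS = (1, 2, 3)   # legal moves: remove 1-3 stones
MAX_PILE = 10

def self_play_train(episodes=30000, lr=0.5, eps=0.3, seed=0):
    """Tabular self-play learning for one-pile Nim (last stone wins).

    Q[pile][take] estimates, for the player about to move, the value of
    removing `take` stones from a pile of `pile`. The update is
    negamax-style: a move that ends the game is worth +1; any other
    move is worth minus the opponent's best reply from the new pile.
    """
    rng = random.Random(seed)
    Q = {p: {a: 0.0 for a in ACTIONS if a <= p}
         for p in range(1, MAX_PILE + 1)}
    for _ in range(episodes):
        pile = rng.randint(1, MAX_PILE)
        while pile > 0:
            moves = list(Q[pile])
            if rng.random() < eps:
                take = rng.choice(moves)                      # explore
            else:
                take = max(moves, key=lambda a: Q[pile][a])   # exploit
            nxt = pile - take
            target = 1.0 if nxt == 0 else -max(Q[nxt].values())
            Q[pile][take] += lr * (target - Q[pile][take])
            pile = nxt
    return Q

def best_move(Q, pile):
    """The greedy move the trained agent would play."""
    return max(Q[pile], key=lambda a: Q[pile][a])
```

After a few seconds of self-play the table converges on the classical winning strategy – always leave the opponent a multiple of four stones – without ever having been told it, or having seen a single human game.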
Who Understands Artificial Intelligence?
The victories by AlphaGo and AlphaGo Zero reveal an impressively broad range of aspects of machine learning and artificial intelligence, but the most astonishing phenomenon in these victories lies, as internet thinker David Weinberger has shown, in our inability to explain how they won. We know how machine learning operates, but we are unable to rationalize or recreate the specific learning process. AlphaGo offered moves that no human player had ever made. For every task before the age of artificial intelligence, we were able to define certain regularities that allowed us to create a technology to address the task. In many instances the technology was more effective than us; often it was fearsome in its power, but we always understood the logic and consistency underlying its actions. Until now, technology served to amplify the human body, but the rules and models that were the basis of its activity, and its logic, were totally human. The age of smart machines places us, for the first time, before effective machines whose logic we do not understand.
The Myth of Explanation
The victories by AlphaGo and AlphaGo Zero may serve to challenge one of the most fundamental myths of the world of education and learning – the myth of explanation.
Kurt Vonnegut expressed this myth in lyrical terms in his book, Cat’s Cradle:
“Tiger got to hunt,
Bird got to fly;
Man got to sit and wonder, ‘Why, why, why?’
“Tiger got to sleep,
Bird got to land;
Man got to tell himself he understand.”
Vonnegut’s words should be read very carefully, from beginning to end:
Just as it is the nature of the tiger to hunt, and the bird to fly, so it is man’s nature to sit and ask “Why?” – to seek explanations for the reality that surrounds him. The act of seeking explanations disturbs our rest, while finding an explanation is a source of calm. Just as the tiger sleeps and the bird lands, so too man finds rest when he tells himself that he has understood. This doesn’t mean that he has actually found the ultimate explanation, only that he has reached a subjective state in which he feels he has understood. This idea is inherent in the Hebrew language – we seek an explanation that מניח את הדעת (settles our mind), a place in which our restless consciousness can rest. That place is the explanation.
Explanations generally show how a specific occurrence is subject to general rules. Explanations have mechanisms for justification, reasoning, and proof, and theories that support them. We would like to see education systems, among other things, as a place for explanations – a space which teaches us to seek explanations, presents us with convincing rationales that are comprehensible to us, and above all delineates the boundaries of what counts as a satisfactory explanation. As with the giant fish in the Talmudic tale, the myth of explanation is so deeply entrenched within our culture that we barely notice it. The entry of artificial intelligence into our world is one of those moments in which the fish rolls over, and we see in a new light that which we had erroneously taken, until that moment, for stable land.
Artificial Intelligence and Self-Image
Artificial intelligence’s challenge to the institution of explanation is, first and foremost, a challenge to our self-image. As Weinberger shows, since the time of Plato, and especially since the Age of Enlightenment, our ability to perform tasks and achieve goals has gone hand in hand with our understanding. We always had the ability, potentially at least, to understand what works and what is effective. Artificial intelligence challenges the boundaries of our understanding in an inescapable way. One may compare this shake-up to two earlier scientific revolutions: the Copernican revolution and the Darwinian revolution. Copernicus taught us that the planet on which we live is not the center of the universe, and that, like the other planets, it circles the sun. This is not just an important astronomical discovery, but also – and foremost – a revolutionary humanistic discovery, since it moves man from the center of the universe to one of its back rows. It is a discovery that had a dramatic influence on our collective self-image. In a similar way, the processes that Darwin uncovered, and the broad context that his theory offered for the development of life, are not just theories in biology, but theories that reposition the human species among the rest of the creatures – all of us, according to Darwin, having developed through the same principle, from simpler to more complex life forms. Darwin’s man is not the pinnacle of creation, but another link in an impressive, yet blind, symphony of natural selection. In both instances, the strong vocal opposition to these theories was often based not on purely scientific grounds, but on the shake-up of, and the attack on, our self-image as human beings.
Artificial Intelligence as a Copernican Revolution
One might shake one’s head with patronizing scorn at the primitive nature of these objections, but this would overlook the pain that they reflect – a pain caused by the fundamental change in our place in the world. The difficulty in accepting these theories may be compared to the difficulty a growing baby experiences in recognizing the existence of other people, separate entities in the world around it. It is not surprising that we still find an astonishing percentage of graduates of Western education systems who believe that the sun revolves around the earth, or bitter objections to the teaching of evolution in schools in the Western world, almost two hundred years after Darwin. Many of those who, consciously or unconsciously, object to these theories sense that they are not just professional, scientific theories. A scientific theory such as evolution is accompanied by a whole fabric of ideological teachings, which may, for example, generate ecological sensitivity on the one hand, or cruelly deterministic approaches on the other. Artificial intelligence is no different in these senses.

Copernicus taught us to be humble in the face of the universe, Darwin taught us to be humble in the face of other living creatures, and artificial intelligence teaches us a lesson in humility in the face of the devices that we ourselves construct. These understandings of the human species require that we reformulate the fundamental myths of our schools. This sounds like a somewhat distant philosophical mission of little urgency, but our neglect of the great narratives of education in favor of pinpoint responses may exact a heavy price. The growing alienation stems less from any inability to amuse students and arouse their curiosity, and more from our difficulty in providing convincing answers to the question “For what purpose?” and in adapting our pedagogic strategy and major narratives to the world in which they operate.
Students who will live in a world in which most of their tasks are performed by smart machines can no longer rely on educational narratives that are full of holes, whose key assumptions have not been re-examined since the Age of Enlightenment. The opportunity to become partners in composing our world’s new educational myths is placed in the hands of all those involved in the educational enterprise – parents, teachers, and policy makers alike.