Clever Machines as the Engine that Enables Schooling to Learning

Computers and robots with their artificial intelligence (AI) won’t be replacing human teachers. The sigh of relief that follows this statement should be used to ask a follow-up question: if not our digital friends, who will help solve the problems of teaching quality and teacher shortages? These topics were discussed in the “Clever Machines as the Engine that Enables Schooling to Learning” session at SF4.
Professor Rose Luckin (Learning-focused design, UCL Knowledge Lab) opened with a story: “Somebody called Joe, who has spent a few years living in the city of Nanjing, the silk capital of China, is really struggling to learn Mandarin, and he can’t communicate with his neighbors, which is very upsetting because he is a social kind of guy. His neighbor Wu decides to help him, and introduces him to Prof. Chan from Nanjing University, who has been studying intelligence and language learning and has developed a special language learning room. So Joe enters the language learning room, which is sealed; there are no windows, but there is a slot on one side of the room and a slot on the other side. Joe sits in this room; there’s a big rule book in front of him. He receives Chinese symbols through the left-hand slot, looks in the rule book, converts each Chinese symbol into an English symbol and passes them through to the other side. Joe is in this room for a month. After a month, Prof. Chan opens the door, welcomes him out, rewards him with a beautiful book written in Mandarin, and waits for Joe to read it. Joe can’t read a word, because of course Joe doesn’t understand Chinese; he can obey the rules and convert the characters, but he doesn’t understand Chinese.”
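Luckin’s story is a retelling of the Chinese Room thought experiment, and the mechanics of Joe’s job are easy to make concrete. The sketch below is a toy illustration of that rule-book lookup (the symbols and their glosses are invented for the example, not taken from the talk): pure symbol-to-symbol substitution, with no representation of meaning anywhere in the program.

```python
# A toy version of Joe's rule book: pure symbol-to-symbol lookup,
# with no representation of meaning anywhere in the program.
# The symbols and "translations" are illustrative examples only.

RULE_BOOK = {
    "你": "you",
    "好": "good",
    "书": "book",
}

def joe_in_the_room(incoming_symbols):
    """Convert each incoming symbol using the rule book and pass it on.

    Nothing here 'understands' Chinese (or English): it is table lookup,
    exactly like a rule-based AI system following its rules.
    """
    outgoing = []
    for symbol in incoming_symbols:
        outgoing.append(RULE_BOOK.get(symbol, "?"))  # unknown symbols stay opaque
    return outgoing

print(joe_in_the_room(["你", "好"]))  # ['you', 'good'], but no comprehension
```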
One doesn’t need to be, or possess, an AI to understand what the parable refers to – “AI doesn’t understand what it computes, either. It doesn’t matter whether it is a rule-based system, a statistics-based system or a deep neural network. The AI has no self-knowledge, no mental cognitive awareness; it does not understand.” Thus, predicts Luckin, teachers’ jobs are not at risk at the hands of AI: “If we take into account that AI does not understand what it knows, and has no mental cognitive abilities – humans do, a very important thing to remember – then where on earth is there a possibility of an AI replacing a teacher? Can you imagine an AI trying to explain to a parent why it has given a particular grade, trying to justify exactly what is happening, let alone having the capacity to understand the emotions and the motivations and everything being experienced by the learners, as well as having a perception of their wellbeing? So it’s not a case of AI replacing teachers.”
What AI can do, according to Luckin, is help teachers identify systemic problems: “One of the keys to being a good teacher is understanding that all learners are different, they all see things differently, want to learn in different ways and need different sorts of support. What we need to do is understand the learners. AI cannot understand itself, but maybe it can help us to understand the learner. AI, in combination with Big Data and smart visualization techniques, can help us to take on some of the biggest challenges in education – challenges such as the inequality in many education systems in which the most privileged can afford tutors and the best schools, and therefore can pass the exams, and therefore get on in society. The achievement gaps between those who can shine in the way that we choose to assess them, and those who don’t get a chance to show us the brilliance that they have inside them – that’s where AI can help.”
She broke down this idea into practical examples: “Part of the reason that the system is unfair is that we only assess students on a small portion of what they need to learn and understand. We tend to assess them through exams and tests, mainly on their knowledge of particular subjects, and that doesn’t suit everyone. We can now collect many kinds of data – social networking data, CCTV surveillance data, data about how students have used their identity card to go into the library or buy lunch or buy books at the bookshop – the data generated whenever they interact with technology, but not only that data. If we use AI smartly, identifying the key questions that we need to answer through that data – because we know what we’re looking for, in terms of what we believe signifies learning and progress – then we can start really to unlock the black box of learning; we can start to shine a light on the knowledge that supporters of standardized tests want us to acknowledge – whether students are good at math or history. But we could also learn how learners deal with challenges over time – whether they are resilient, whether they can learn from the things that challenge them. We can use the visualization of this data to help students understand themselves, help them discuss with their teachers and parents where they’re doing well and where they need help, and most importantly, we can help learners become effective learners. Because it’s that self-learning ability that will be a key skill in the workforce of the future – not the routine cognitive skills and the knowledge we seem obsessed with testing at the moment. It can help us to detect the skills and abilities of many students, instead of only the privileged. It can help us to identify and value a wide range of skills, abilities and characteristics.”
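What Luckin describes is, in effect, a learning-analytics pipeline: gather interaction data, put specific questions to it, and feed the answers back to learners and teachers. The sketch below is a deliberately simplified, hypothetical version of one such question (the event fields and the “resilience” indicator are invented for illustration, not a validated measure):

```python
from collections import defaultdict

# Hypothetical interaction records of the kind Luckin describes
# (online exercises, library visits, etc.). Field names are illustrative only.
events = [
    {"student": "A", "kind": "exercise", "attempts": 4, "solved": True},
    {"student": "A", "kind": "exercise", "attempts": 1, "solved": True},
    {"student": "B", "kind": "exercise", "attempts": 5, "solved": False},
    {"student": "B", "kind": "exercise", "attempts": 6, "solved": True},
]

def resilience_indicator(events):
    """Toy 'key question': how often does a learner eventually solve
    problems that took several attempts? A stand-in for the richer
    signals of progress Luckin has in mind, not a validated measure."""
    per_student = defaultdict(lambda: {"hard": 0, "hard_solved": 0})
    for e in events:
        if e["kind"] == "exercise" and e["attempts"] >= 3:
            per_student[e["student"]]["hard"] += 1
            if e["solved"]:
                per_student[e["student"]]["hard_solved"] += 1
    return {s: v["hard_solved"] / v["hard"] for s, v in per_student.items() if v["hard"]}

print(resilience_indicator(events))  # e.g. {'A': 1.0, 'B': 0.5}
```

A visualization layer over indicators like this one is what would let students and teachers see where help is needed, which is the point of Luckin’s argument.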
Professor David Weinberger, writer and senior researcher at Harvard University’s Berkman Klein Center, explained the difficulty people have in understanding how AI works. He demonstrated this with AlphaGo, an AI developed by Google DeepMind that defeats human players at the complex game of Go. If we ask AlphaGo to explain a particular move, Weinberger said, it will tell us to start with node 1 – a possible move – examine the probabilities that result from that move, then continue to node 2, examine the probabilities, and so on through billions of nodes. “We can’t do that, it’s meaningless to us,” said Prof. Weinberger. “The only way we could figure out what that means for making the next move is to feed it back into a computer and have the computer do the calculations – which is where we started. This is an alien way of thinking. How do we know that it’s thinking right? In the case of AlphaGo it’s easy – it keeps beating us. It wins! This sort of thinking is immensely powerful; it lets us address problems we simply couldn’t before. Many people are nervous about it, which I understand, because we have systems that are going to be guiding our autonomous cars, and they’re going to make decisions like, ‘OK, we’re gonna have to slam your car into the side of the road, because that’s the best outcome,’ and you say ‘Why?’ and it says, ‘Well, let’s start with node 1.’ And it’s troubling that we have machines that are making moral decisions, or decisions that have moral consequences, in ways that we cannot question or interrogate.”
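Weinberger’s “node 1, node 2 …” description is a fair caricature of the tree search such systems perform: the “explanation” of a move is nothing but an enormous trace of position evaluations. The toy sketch below (the game, the value function and the scale are invented; a system like AlphaGo combines search with learned neural evaluations) shows why that trace means little to a human reader:

```python
import random

def evaluate(position):
    """Stand-in for a learned value function: returns a win probability.
    In a real system this is a neural network, itself opaque."""
    random.seed(position)          # deterministic toy values
    return random.random()

def best_move(position, moves, depth):
    """Pick the move whose subtree scores best. The 'explanation' of the
    choice is just this enormous trace of evaluated nodes."""
    def search(pos, d):
        if d == 0:
            return evaluate(pos)
        return max(search(pos * 31 + m, d - 1) for m in moves)
    scores = {m: search(position * 31 + m, depth) for m in moves}
    return max(scores, key=scores.get), scores

move, scores = best_move(position=1, moves=range(5), depth=6)
print(move)    # the chosen move
print(scores)  # the 'why': thousands of node evaluations boiled down to five numbers
```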
“We are developing deep learning systems,” said Dr. David Konopnicki (Master Inventor at the Information Retrieval Group, IBM Research). “It is very difficult to take what the system knows about somebody – which is really a mathematical model – and then come back to the user and say, ‘OK, this is what we know about you. Do you agree? Or do you want to change it?’”
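What such a system “knows about you” is typically something like an embedding: a vector of numbers whose individual dimensions mean nothing on their own. A minimal illustration of why that is hard to hand back to a user (the values are invented):

```python
# What a deep learning system 'knows about you' is often just an
# embedding: a list of numbers with no individually meaningful dimensions.
# These values are invented for illustration.
user_model = [0.12, -1.87, 0.44, 2.03, -0.09, 0.71]

# There is no honest way to render this as "you prefer X, you struggle with Y"
# without an extra, lossy interpretation layer, which is exactly the
# difficulty Konopnicki describes.
print("This is what we know about you:", user_model)
```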
“For the past few hundred years, we’ve accepted a Newtonian model which says that there are universal laws, and they’re very simple, simple enough that human beings can understand them. And what a surprise that the laws of the universe turn out to be entirely knowable by humans, which is a tremendous coincidence,” said Professor Weinberger. “Our expectations have been shaped by our technology, and our technology, until recently, was epitomized by computers that were incredibly slow and could handle very little data, at one point embedded on punch cards,” he said. “But now everything is a sensor – the things you carry in your pocket are sensors emitting tons of data. Sensors across the world, orbiting the earth, enormous amounts of data, enormous networks, enormous, high-speed computers are joined together in distributed computing networks which we could not imagine a couple of years ago. So if we want to ask about the alien intelligence, I think that if we balance the Newtonian view, that there are universal, simple laws that are knowable by humans, against the view we are now encountering in our deep learning machines, then we conclude that there is way too much data for a human brain and that relationships are way too complex. Now we can deal with this data in real time, deal with far more relationships and also deal with the fact that in some practical sense, if you look at anything closely enough – everything is an exception. So if you want to know where the alien intelligence is, I would suggest that, at least for today, the human intelligence is more alien than the computer one. At this point, deep learning gives us a more accurate representation of how the world works. We have shaved off the details in the pursuit of universality and laws that we can apply top down.”
This of course has implications for the way we learn, and Prof. Weinberger wasn’t just talking about the aforementioned radical changes in perception: “What does this mean for education? I have no idea. Maybe it means, and this is long term, that there won’t be quite so much emphasis on students showing their work, because we’re going to get more and more used to the idea that knowledge is something that we co-create with computers. We’re already pretty well used to that – in truth, we’ve been co-creating with instruments, with things in the world, since the first time a shepherd carved a notch on a stick to count the sheep. Second, we have often tried to teach by providing universal theories and high-level abstractions and let students apply them, as if that’s where the truth is. And of course there is truth in those abstractions, but they also tend to file away, sand down, the particulars and the exceptions.”
A practical utilization of these ideas can be seen in the programming world, Weinberger suggested: “If you want to learn a programming language, you can take a course online or in a classroom, get the general principles and work your way down, and that will work for a lot of people. But far more likely these days is that you’ll go online, start a tutorial, get one chapter in, you’ll start doing some work, you’ll hit a ‘how do I do this or that?’ and you’ll go out and ask any of the really excellent sites like stackoverflow, addressing very particular problems as they arise. And you can become a very good computer programmer that way.”
A less abstract example can be found in the words of Prof. Jihad El-Sana (Dept. of Computer Science, Ben-Gurion University), who tinkers with augmented reality (AR) in his lab and presented a possible application: “If Avi [Warshavsky, MindCET CEO] is stuck with his car in the middle of the desert and would like to fix it – can we provide him with an interface so that he can take his phone, point it at the engine of the car, and the computer will tell him, ‘OK, unscrew these screws, take off this hood, do all these actions until you find the problem and fix it; we’ll tell you step by step how to put it back.’ This is one of the things we’re trying to develop, and it’s mainly not for Avi in the desert – imagine an astronaut in the space station who needs to fix something. Instead of resorting to heavy manuals, he will have it in his helmet.”
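The application El-Sana describes follows a familiar AR pattern: recognize what the camera is looking at, track where the user is in the procedure, and overlay the next instruction. The skeleton below is a hypothetical sketch of that loop (the step list and function names are invented, not his lab’s system):

```python
# Hypothetical skeleton of an AR repair guide of the kind El-Sana describes:
# recognize what the camera sees, track progress, overlay the next step.
# All names and steps are invented for illustration.

REPAIR_STEPS = [
    ("engine_cover", "Unscrew the four screws on the cover."),
    ("engine_block", "Lift off the cover and locate the fuse box."),
    ("fuse_box", "Replace the blown fuse, then reassemble in reverse order."),
]

def recognize(frame):
    """Stand-in for the computer-vision step: return a label for what the
    camera currently sees. A real system would run an object detector here."""
    return frame  # in this toy, the 'frame' already is the label

def next_instruction(frame, step_index):
    label, instruction = REPAIR_STEPS[step_index]
    if recognize(frame) == label:
        return instruction, step_index + 1   # user is looking at the right part
    return "Point the camera at: " + label, step_index

step = 0
for frame in ["engine_cover", "engine_block", "dashboard", "fuse_box"]:
    msg, step = next_instruction(frame, min(step, len(REPAIR_STEPS) - 1))
    print(msg)
```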
Whether people of the future will need a YouTube video, an AR helmet or a direct transmission to their brain to tell them how to turn the screw or which line of code to debug so that the water generator in their hovercraft comes back to life – assuming it cannot fix itself – schools will have to teach us ways of learning new skills, and which of the existing ones not to discard, so that we don’t end up dehydrated just because the AR helmet’s battery has drained.