Q&A With Dr. Ben Lorica – AI as a Catastrophic Threat or a Tool for Growth

Q&A with Ben Lorica, Chief Data Scientist at O’Reilly Media and Program Director of the Strata Data and Artificial Intelligence Conferences.

Q: With all the notions about AI that are going around – on the one hand, a catastrophic threat that will create displacement and overthrow humans, and on the other, an enabler and an enhancement – which would you choose in talking about AI and humans today?

B.L.: “I think at this point, one can think of AI as an assistant to humans. For one, many of the things we read about AI tend to be about systems that are limited to one task; for instance, pattern recognition, or perception – detecting whether or not this moving object is a pedestrian, or a bicycle, or a car, if you’re in a self-driving car. And I think over time, as the systems become imbued with knowledge and reasoning capability, they can do more. But already, even those types of technologies can help to partially automate many routine tasks and workflows. So I would say, at this point, the best metaphor is an assistant, and that assistant will become smarter over time. One example is chatbots. Last year, or maybe two years ago, there was much hype about chatbots, and it turned out they were easy to build, but also very limited in what they could do. A lot of them were rule-based, built on state machines. So if the user starts interacting with a chatbot, and goes off in a direction that the chatbot isn’t expecting, the chatbot gets stuck. As the underlying building block technologies for chatbots get better – that would be natural language understanding – one can expect some of these solutions to get better. But I think, for the foreseeable future, a lot of them will be much more focused and domain-specific; they will not be offering general intelligence, but will be an assistant for a very specific role inside a company or an organization. And then things get better over time as the building block technologies for that solution get better.”
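
To make the “rule-based, built on state machines” point concrete, here is a minimal, hypothetical Python sketch of such a chatbot (an illustration, not code from the interview): it follows its scripted states fine, but because it has no rule for off-script input, it simply repeats its prompt and gets stuck, exactly the failure mode described above.

```python
# A hypothetical, minimal rule-based chatbot driven by a small state machine.
# It handles the scripted path, but any off-script input matches no rule,
# so the bot just repeats its prompt and the conversation gets stuck.

STATES = {
    "start": {
        "prompt": "Hi! Type 'order' to check an order or 'reset' to reset a password.",
        "rules": {"order": "check_order", "reset": "reset_password"},
    },
    "check_order": {
        "prompt": "Please type your order number.",
        "rules": {},  # digits are handled as a special case below
    },
    "reset_password": {
        "prompt": "A reset link has been sent to your email. Type 'order' or 'reset' to continue.",
        "rules": {"order": "check_order", "reset": "reset_password"},
    },
}


def respond(state, user_input):
    """Return (next_state, reply) using only the hard-coded rules above."""
    text = user_input.strip().lower()
    if state == "check_order" and text.isdigit():
        return "start", "Order %s is on its way. %s" % (text, STATES["start"]["prompt"])
    next_state = STATES[state]["rules"].get(text)
    if next_state is None:
        # Off-script input: no rule matches, so the bot is stuck in this state.
        return state, "Sorry, I didn't understand that. " + STATES[state]["prompt"]
    return next_state, STATES[next_state]["prompt"]


state = "start"
for message in ["order", "what's your refund policy?", "12345"]:
    state, reply = respond(state, message)
    print("user:", message)
    print("bot: ", reply)
```

The middle message shows the limitation: a question the rules never anticipated leaves the bot repeating itself, which is why better natural language understanding, not more rules, is what moves these assistants forward.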

Q: Let’s talk about personal assistants. How do you see their impact, especially in education? What effects do you foresee from using them? Is it a way of outsourcing human relations to more automated ones?

B.L.: “You can imagine them getting better and expanding in scope over time. So, just as the self-driving car industry has settled on five levels of autonomy, with level five being the true self-driving car, I think these chatbots will have different levels too. Maybe at the most routine level it just notifies the student, ‘Hey, your homework is due. Hey, you should read this for tomorrow.’ And then maybe at the second level, it will start answering routine questions – if you’re familiar with the acronym FAQ, the frequently asked questions in tech support. And then it gets better and better over time, and can start handling contextual questions – given that there are multiple possibilities to answer or frame a problem, which one would you choose? And then over time, it gets even more personalized to you as the student. I think of it as an evolution of the technology. I like the notion that as the technology gets better, it can do more, but it will always kind of evolve in terms of capability. We can’t get around the fact that these technologies rely on basic building blocks. So in the example of the chatbot, that would be natural language understanding.”

Q: AlphaGo, two years ago, really surprised us with the capacity of an intelligent system to make its own decisions, to think coherently, or to use common sense. How do you see the implications of such developments? Which areas of our lives do you think they will affect the most?

B.L.: “AlphaGo is an impressive achievement, but it’s limited to a game with very well-defined rules. There’s a lot of computation that they had to use in order to get to that level. So the question is, what other tasks do we have where, first, we can afford to throw that much computation at automating something, and second, the rules of the game are so well defined? I think maybe there are certain tasks inside a company – the phrase people use is enterprise workflow automation – there might be a series of tasks that are somewhat repeatable, confined, and well defined, and with enough simulations and examples, you can automate them. The question at the end of the day is: One, do you have enough data? Two, do you have the scale to justify automation? Because if you don’t have the scale, if you only have to do something a few times a week, then there’s no point automating it. But if you have the scale, you have to start looking at the problem and see whether it fits into the framework of the technologies we have today. So AlphaGo, for one, relied on a mix of underlying technologies that may or may not apply to the problem that you have.”

Q: So in a way, even very intelligent systems have limitations, or still depend on our capacity to feed the system. So would you say there are core aspects of human learning that a machine would not be able to develop?

B.L.: “I think right now, the systems we have rely on a lot more data than humans do – you and I can look at one or two examples of something and internalize that pattern. We also rely on prior knowledge and domain knowledge. So when we enter a situation, we already know certain laws of physics; we know that something can’t just disappear, right? So I think that right now, we are in a situation where our systems are good if we have a lot of data and a lot of compute. One interesting example is natural language. Deep learning is a great approach, and it has proven to be very successful in computer vision and speech recognition. It has had some success in natural language, although it hasn’t led to natural language understanding. If you talk to the people who work in computational linguistics, they’re all using deep learning, but they also feel that deep learning is producing models that are not the most efficient. Because linguists come with a lot of prior knowledge from linguistics, they want models that are much more efficient, require less data, take advantage of linguistic rules and patterns, and things like that. I am hoping that we’ll come up with hybrid solutions where deep learning is one part of the answer, but there are other techniques that take advantage of prior knowledge and similar things.”

Q: Experience, you would say.

B.L.: “Yeah, domain expertise. Understand some prior structure.”

Q: In an article, you talk about the promise and pitfalls of AI and deep learning – what would you say is the main pitfall today?

B.L.: “I think right now, one is that it requires a lot of data. Two, it’s a bit of a black box. Some of the more famous talks towards the end of last year were around people being frustrated with understanding how deep learning works. Let’s say you’re a deep learning expert in a company and someone joins the team; there isn’t a good way to pass on a lot of the knowledge, other than telling them to get their hands dirty and try things at this point. I think that’s getting better. One of the good signs over the last year is that the people who work in theoretical computer science have gone into machine learning and are trying to understand how machine learning works, and when it fails and when it excels.”
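
One small illustration of what probing a black-box model looks like in practice – a hypothetical example of my own, not something Lorica describes – is permutation importance: shuffle one input at a time on held-out data and measure how much the score drops. It does not explain why the model works, which is the deeper frustration mentioned above, but it does show which inputs a trained model actually leans on. The sketch assumes scikit-learn is available.

```python
# A hedged sketch of permutation importance as one simple, model-agnostic
# probe of a "black box" model. It measures how much held-out accuracy
# drops when each feature is shuffled, i.e. which inputs the model relies on.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of them actually informative.
X, y = make_classification(
    n_samples=2000, n_features=5, n_informative=2,
    n_redundant=0, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permute each feature on the test set and record the drop in accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: accuracy drop {result.importances_mean[i]:.3f}")
```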

Q: When people talk about the use of AI in education, the main focus is adaptive and personalized learning. And as with the other areas you’ve mentioned, the frustration here is even more critical, especially in adaptive learning. With the current trend of technology personalizing everything, is there a realistic hope of AI being used in education, and what are the implications of that? There is a lot of value, and a lot of learning, in being in a class with a group of people. Do you think we will adapt and learn how to deal with the negative and positive effects, or do you see a danger there?

B.L.: “I don’t know if I would say that it’s either-or. I think it will definitely be an important part of the picture. I think part of it is just a recognition that people learn differently, so there should be some amount of personalization involved. People learn in different ways at different speeds, and respond to different styles of teaching, so I think that from that perspective, personalization should help education. As to your other question, there’s more to education than just rote learning; there’s interaction with your peers, developing social skills, developing emotional intelligence. So I think that there’s room for both, depending on how an institution actually deploys these technologies. If they deploy it in a way where you don’t have to come to school anymore, you all just stay at home, then I think maybe that might be going too far, particularly for people in a certain age bracket. I think though, arguably, that if you’re already an adult learner, you’re already working and you want to enhance your skills, and you want to take courses or get credentialed, and you want to do it in the comfort of your home, technology can help. But I don’t know if we’re at that point where you can just turn it over to an AI; you still need instruction. As we mentioned before, these personal assistants are not there yet. An important part of education, people forget, is networking. So even for adult learners, you can imagine learning the material using this great AI system at home that can help you learn at your own pace, using the style you like. But one of the things you want to be able to get out of education is to meet people who share your interests and can become part of your professional network over time.”

Q: One last question, related to a new buzz: the ethical implications of AI. There is a huge trend to use AI across industries and society – for example, using AI to help judges make decisions. Would you prefer, if you could choose, an emotional human decision with all of the dangers that ensue, or passing this over to an objective machine? And should we be concerned about this trend?

B.L.: “I think I would respond by not responding. Again, it’s not an either-or situation. On the one hand, the black-box AI system is unemotional and can treat things objectively, so to speak, but time and again there are examples of AI systems that have exhibited bias or haven’t been exactly fair. And actually, this is an area of personal interest to me; I’ve been doing a lot of reading and studying in this area. The machine learning community has, over time, established certain metrics or statistical tools for ensuring that an AI system is fair. But each of those metrics has problems; there are exceptions to each of them. A classic example is what they call anti-classification, which doesn’t use variables that are protected, such as gender, age, and race. But then people point out that maybe there are some situations where you’ll have to use those variables, because the distribution of women might be different from the distribution of men. So if you apply the same rule – ‘if above a certain level, we decide this way’ – but the two populations have different distributions, you might actually be inadvertently penalizing the women. If you look at each of these statistical metrics, there are exceptions. The main takeaway is that we’re still at the point where the machine learning community is developing these tools. And the main thing I tell people is that if you’re serious about ethics and fairness, there’s no substitute for getting in there yourself. You can’t rely on a statistical metric and statistical procedures to make your ethical dilemma go away. For one, there are also a lot of papers coming out now which say that even if you have a statistical procedure you deem ethical, there might be impacts that are delayed over time that make it less ethical. In other words, humans are in the loop; humans are still involved. You will have to put processes in place where you take the best of the statistical advice and procedures for creating ethical AI, but make sure you have teams of data scientists who can audit the system and make sure the AI is behaving accordingly. And actually, one of the things that I’ve come around to is this notion of risk management, in general, for machine learning and AI. Now that we’re deploying many of these systems in mission-critical, real-world applications, there are many considerations beyond statistical machine learning and business metrics: fairness, ethics, privacy, security. All of these come with risks. Just as we want software and financial services that are risk free, we want AI that’s also risk free, so we need to start thinking in terms of risk management for AI. In this particular case of privacy and ethics, that might mean having, on one hand, the team of data scientists who build your AI system, and on the other an independent team of data scientists who serve as validators: after you build the model, this team that wasn’t involved in the model-building process will independently validate your AI system to make sure that it is fair and unbiased.”
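
As a numeric illustration of the thresholding point above – a hypothetical construction, not data from the interview – the sketch below draws screening scores for two groups from slightly shifted distributions and applies one “above a certain level” rule to both. The rule never looks at group membership, yet the selection rates diverge, which is exactly the kind of gap (here, a demographic-parity gap) an independent validation team would audit for.

```python
# A minimal, hypothetical sketch of how one threshold applied to two groups
# with different score distributions produces very different selection rates,
# even though the rule never uses the protected variable.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical screening scores for two groups with shifted distributions.
scores_group_a = rng.normal(loc=0.60, scale=0.10, size=10_000)
scores_group_b = rng.normal(loc=0.55, scale=0.10, size=10_000)

threshold = 0.65  # one rule for everyone: "decide positively if above this level"

rate_a = (scores_group_a > threshold).mean()
rate_b = (scores_group_b > threshold).mean()

print(f"selection rate, group A: {rate_a:.1%}")
print(f"selection rate, group B: {rate_b:.1%}")
print(f"disparity (A / B): {rate_a / rate_b:.2f}x")

# An auditing team of the kind described above might track gaps like this
# (demographic parity) alongside other fairness metrics, each of which,
# as Lorica notes, has its own exceptions and failure modes.
```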