Explainable, Trustworthy AI for education

A conversation with Prof. Cristina Conati

Cristina Conati is a Professor of Computer Science at the University of British Columbia, specializing in Artificial Intelligence for interactive systems. Her educational research captures a broad array of user data to create systems that foster the cognitive processes and emotional states most conducive to learning.

“I love my teacher! She answers my questions even before I ask them!”

While that level of sensitivity may be a high bar for human teachers, it’s within reach for algorithms. Computer learning systems that capture a learner’s communicated and uncommunicated cues are increasingly able to meet that learner’s needs and desires in a manner that matches their understanding and ability.

“My research is multidisciplinary, at the intersection of AI, human-computer interaction and cognitive science,” says Cristina. “The system builds a user model that captures a variety of user traits or states. This includes goals, domain expertise, preferences, and even more esoteric things such as emotions, metacognitive abilities and cognitive load.”

“Trust in AI is a topic of great concern. There are many initiatives about how to create AI systems that enable their users to understand that what the AI proposes is valuable and should be followed.”

Much of the research is devoted to collecting data on uncommunicated needs. One example is using signals from physiological sensors to infer the user’s affective state; a related example is gaze and eye tracking, which captures how the user looks at the interface.

“There’s a saying that the eyes are the windows to the mind, and in fact we’ve been able to use gaze data, as well as eye tracking data that comes with gaze,” she says. “We also use pupil dilation, which has been related to cognitive load and some affective states such as confusion.”

Machine learning uses this data to predict a variety of user states, including affective states such as confusion and boredom. It can also predict how a user is learning.

“It allows us to understand metacognitive processes, for instance how much the user is engaging when they’re interacting with learning material and how much they’re engaging with self-explanation — a cognitive skill that relates to someone really trying to go deeper in the material they’re reading or studying.”
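To make this concrete, here is a minimal, illustrative sketch — not Prof. Conati’s actual pipeline — of how gaze-derived features such as fixation duration and pupil dilation might feed a classifier that predicts an affective state like confusion. The feature names, labels and synthetic data are assumptions for illustration only.

```python
# Illustrative sketch: predicting a user state (e.g. confusion) from gaze features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200

# Hypothetical per-interaction gaze features (assumed names, synthetic values):
# mean fixation duration (ms), fixation count, pupil dilation (z-scored), saccade rate.
X = np.column_stack([
    rng.normal(250, 60, n_samples),
    rng.poisson(40, n_samples),
    rng.normal(0.0, 1.0, n_samples),
    rng.normal(3.0, 0.8, n_samples),
])

# Hypothetical labels: 1 = confused, 0 = not confused. In a real study these would
# come from self-reports or expert annotation, not random numbers.
y = rng.integers(0, 2, n_samples)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```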

Eye tracking and gaze data are crucial in this respect, she says, because when the interaction is strictly perceptual, the only useful data for learner support comes from how the user looks at the information. A study of hers slated for publication illustrates the link between gaze data and learner support: subjects reading difficult material illustrated with numerous graphs show improved comprehension when the system highlights the graph that illustrates the passage they are currently reading.
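The article does not describe the interface internals, but a small hypothetical sketch shows the kind of gaze-contingent logic such a system could use: when a fixation lands inside a paragraph’s bounding box, the graph linked to that paragraph is highlighted. All names, coordinates and the print-based “highlight” are invented for illustration.

```python
# Illustrative sketch: gaze-contingent highlighting of the graph linked to the paragraph being read.
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned bounding box for a paragraph of text, in screen pixels (assumed layout)."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Assumed layout: two paragraphs, each linked to the graph that illustrates it.
paragraphs = [Region("para_1", 50, 100, 600, 220),
              Region("para_2", 50, 240, 600, 360)]
linked_graph = {"para_1": "graph_A", "para_2": "graph_B"}

def on_fixation(x: float, y: float) -> None:
    """Called for each detected gaze fixation; highlights the graph for the paragraph being read."""
    for para in paragraphs:
        if para.contains(x, y):
            # A real system would update the UI here; printing stands in for that.
            print(f"Highlighting {linked_graph[para.name]} while the user reads {para.name}")
            return

on_fixation(120, 300)  # a fixation inside para_2 -> highlights graph_B
```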

Currently, the major use of AI in education is intelligent tutoring systems, which are still at an early stage of development.

Successful systems to date include tutors that help students practice and acquire problem-solving skills, and systems that improve the learner’s metacognitive abilities through self-monitoring. There is also research that seeks to create motivational environments through games and interactive simulations, as well as work that builds the interactive relationship through the use of explanations. This lends itself to learners who need to know the “why” behind the “what”.

The goal of Cristina’s research is to personalize education in a way that is transparent and maintains user control and trust. She’s currently looking at the use of hints and explanations.

“My interest is in determining when it’s actually important to provide explanations — which users want explanations and can use them in a way that increases their trust in the technology they’re using,” she says.

One of her research projects deals with an intelligent educational environment built around an interactive simulation that helps learners understand a specific algorithm for solving constraint satisfaction problems. The research monitors how each individual uses hints, examples and explanations; whether they find them useful; and whether they affect learning.
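The article does not name the algorithm the simulation teaches. As an illustrative stand-in, here is a compact sketch of arc consistency (AC-3), a standard constraint-satisfaction algorithm commonly taught with interactive simulations; the toy problem and variable names are assumptions.

```python
# Illustrative sketch of AC-3 arc consistency for constraint satisfaction problems.
from collections import deque

def ac3(domains, constraints):
    """Prune domains until every arc (x, y) is consistent.
    domains: {var: set of values}; constraints: {(x, y): binary predicate on (vx, vy)}."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        # Values of x with no supporting value in y's domain are inconsistent.
        revised = {vx for vx in domains[x]
                   if not any(constraints[(x, y)](vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False  # a domain emptied: the problem is inconsistent
            # x's domain shrank, so arcs pointing at x must be re-checked.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Toy problem (assumed for illustration): A < B and B < C, each with domain {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "ABC"}
cons = {
    ("A", "B"): lambda a, b: a < b,
    ("B", "A"): lambda b, a: b > a,
    ("B", "C"): lambda b, c: b < c,
    ("C", "B"): lambda c, b: c > b,
}
print(ac3(doms, cons))  # True; propagation prunes the domains to A={1}, B={2}, C={3}
print(doms)
```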

The system’s AI differentiates between positive behaviors that are conducive to learning and negative behaviors that are not.

The team picked three principles to use in supplying explanations: they should be incremental; users should be free to explore different levels of detail; and they should provide an accurate description of the underlying AI without becoming overwhelming. (This is a challenge, since the explanations are generated from a complex mixture of data mining, machine learning and AI techniques.)

The study shows some evidence that the availability of personalized help increases the benefit learners receive from the system, and that subjects vary widely in how often they request and use hints and explanations.

“The long-term goal will be to create a better understanding of which learners need the explanations, so that we can add to the AI in the system not just the ability to provide personalized hints, but also the ability to provide personalized explanations to the learners that need them the most,” Cristina says.

“The long-term vision is to create personalized, trust-aware educational environments that can understand what’s happening with user trust and also are capable of understanding how to use explanations to foster this trust.”