Title:
Virtual Agent for Real-Time Motivational Interviewing by Integrating Adaptive Nonverbal Behavior and Language Models.
Authors:
Galland L; ISIR - Sorbonne University; lucie.galland@isir.upmc.fr., Younsi N; ISIR - Sorbonne University., Baudonne C; University of Bordeaux, cours de la Libération., Chaby L; Vision Action Cognition, Université Paris Cité., Helme-Guizon A; University of Grenoble Alpes, Grenoble INP, CERAG., Pecune F; CNRS - SANPSY, University of Bordeaux., Pelachaud C; CNRS - ISIR, Sorbonne University.
Source:
Journal of visualized experiments : JoVE [J Vis Exp] 2025 Dec 23 (226). Date of Electronic Publication: 2025 Dec 23.
Publication Type:
Journal Article; Video-Audio Media
Language:
English
Journal Info:
Publisher: MYJoVE Corporation Country of Publication: United States NLM ID: 101313252 Publication Model: Electronic Cited Medium: Internet ISSN: 1940-087X (Electronic) Linking ISSN: 1940087X NLM ISO Abbreviation: J Vis Exp Subsets: MEDLINE
Imprint Name(s):
Original Publication: [Boston, Mass. : MYJoVE Corporation, 2006]-
Entry Date(s):
Date Created: 20260112 Date Completed: 20260112 Latest Revision: 20260112
Update Code:
20260113
DOI:
10.3791/69254
PMID:
41525230
Database:
MEDLINE

Abstract:

The growing demand for therapeutic support increasingly exceeds the capacity of available professionals. A virtual agent capable of performing motivational interviewing (MI) offers a promising way to assist patients in pursuing their behavior-change goals between sessions with human therapists. MI is an inherently cooperative and adaptive form of communication; an agent able to adapt its conversational strategies to the context could therefore significantly enhance the effectiveness of therapy. During MI sessions, human therapists adjust both their verbal and nonverbal behaviors based on the patient's responses as well as their profile, modifying their approach according to the patient's level of motivation. Personalization and adaptability are thus essential for developing effective MI virtual agents. In this paper, we present a virtual agent that conducts MI sessions by dynamically adapting, verbally and nonverbally, to users in real time, leveraging state-of-the-art models. The virtual agent is embodied using the Greta 2.0 platform. Its nonverbal behavior is generated by a diffusion model, MODIFF, which adapts to the user's facial expressions and readiness to change; these facial expressions were learned from an MI corpus and validated through a dedicated user study. The dialogue is generated by a state-of-the-art large language model (LLM), enhanced with a dialogue manager specifically designed for MI, trained with a reinforcement learning approach, and validated through user testing. The dialogue manager also adapts to different user profiles. The resulting platform is open source and supports the generation of real-time, multimodal MI dialogues, providing new tools for digitally mediated therapeutic interactions.
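
The abstract describes the architecture at a high level only: an MI dialogue manager steering an LLM for the verbal channel, a diffusion model (MODIFF) for the nonverbal channel, and the Greta 2.0 platform for embodiment. A minimal Python sketch of how such a real-time loop could be wired together is given below. It is illustrative only; all class and function names (UserState, MIDialogueManager, LLMGenerator, NonverbalGenerator, GretaAgent) are hypothetical stand-ins and do not reflect the authors' actual code or the Greta 2.0 / MODIFF APIs.

"""Illustrative sketch of the real-time MI agent loop described in the abstract.

All names here (MIDialogueManager, NonverbalGenerator, GretaAgent, ...) are
hypothetical placeholders; the published Greta 2.0 / MODIFF interfaces may differ.
"""
from dataclasses import dataclass
import random


@dataclass
class UserState:
    utterance: str               # last thing the user said
    facial_features: list        # e.g., action-unit intensities from a face tracker
    readiness_to_change: float   # 0.0 (not ready) .. 1.0 (ready), estimated online


class MIDialogueManager:
    """Chooses an MI strategy (reflect, open question, affirm, summarize)
    conditioned on the user's estimated readiness to change; the paper trains
    this policy with reinforcement learning, which is stubbed out here."""

    STRATEGIES = ["reflect", "open_question", "affirm", "summarize"]

    def select_strategy(self, state: UserState) -> str:
        # Placeholder policy: favor reflections while readiness is low.
        if state.readiness_to_change < 0.5:
            return "reflect"
        return random.choice(self.STRATEGIES)


class LLMGenerator:
    """Stand-in for the large language model; a canned template replaces it here."""

    def generate(self, strategy: str, user_utterance: str) -> str:
        return f"[{strategy}] It sounds like you said: '{user_utterance}'. Tell me more."


class NonverbalGenerator:
    """Stand-in for the MODIFF diffusion model: maps the user's facial features
    and readiness to change to facial-expression parameters for the agent."""

    def generate(self, state: UserState) -> dict:
        smile = 0.2 + 0.6 * state.readiness_to_change  # warmer display as readiness grows
        return {"smile": round(smile, 2), "brow_raise": 0.3, "head_nod": True}


class GretaAgent:
    """Stand-in for the Greta 2.0 embodiment platform: renders speech and behavior."""

    def perform(self, text: str, nonverbal: dict) -> None:
        print(f"Agent says: {text}")
        print(f"Agent displays: {nonverbal}")


def interaction_step(state: UserState) -> None:
    dm, llm, nvb, agent = MIDialogueManager(), LLMGenerator(), NonverbalGenerator(), GretaAgent()
    strategy = dm.select_strategy(state)    # verbal adaptation to readiness to change
    reply = llm.generate(strategy, state.utterance)
    behavior = nvb.generate(state)          # nonverbal adaptation to the user's state
    agent.perform(reply, behavior)          # real-time multimodal rendering


if __name__ == "__main__":
    interaction_step(UserState(
        utterance="I want to exercise more, but I never find the time.",
        facial_features=[0.1, 0.4, 0.0],
        readiness_to_change=0.4,
    ))

In the described system, the stubbed components would be replaced by the learned RL policy, the LLM, MODIFF, and the Greta 2.0 renderer, with the loop running continuously over the user's speech and facial input.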