Graduation Year
2025
Document Type
Dissertation
Degree
Ph.D.
Degree Name
Doctor of Philosophy (Ph.D.)
Degree Granting Department
Computer Science and Engineering
Major Professor
Tempestt J. Neal, Ph.D.
Committee Member
Shaun Canavan, Ph.D.
Committee Member
Kristin Kosyluk, Ph.D.
Committee Member
Julia Woodward, Ph.D.
Committee Member
Kingsley A. Reeves Jr., Ph.D.
Keywords
Behavioral Analysis, Cross-Domain, Ethical AI, Feature Analysis, Machine Learning, Qualitative Analysis
Abstract
Deception in mental health settings can undermine therapeutic relationships, compromise treatment efficacy, and impact patient outcomes. Yet, research shows that mental health clinicians often perform no better than chance at detecting deceptive behavior in therapy. Automated deception detection, leveraging artificial intelligence (AI) and multimodal behavioral cues—such as eye gaze, body gestures, and facial expressions—offers a promising alternative. However, most existing research focuses on high-stakes legal contexts, limiting its applicability to mental health settings.
This dissertation addresses this gap by pursuing three key research objectives using a mixed-methods approach. First, we investigate mental health clinicians’ perspectives on AI-assisted deception detection through 20 semi-structured interviews. Qualitative analysis of these interviews explores clinicians’ experiences with client deception, their attitudes toward integrating AI tools, and the perceived feasibility and ethics of using AI in therapeutic practice.

Second, we analyze deception-related visual, auditory, gaze, and physiological (VAGP) cues across multiple domains—including mental health—by collecting and annotating a novel dataset. Over 50 participants (45 aged 18–25; 8 aged 30+) completed three video-recorded mock interviews involving deceptive and truthful responses to questions about personal background, job satisfaction, and well-being. VAGP features were extracted using video and wearable sensors to train AI models and evaluate cross-domain performance.
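As a minimal sketch of the cross-domain evaluation described above (the dissertation does not publish its code, so the classifier choice, feature dimensions, and data below are illustrative placeholders rather than the actual VAGP pipeline), one standard pattern is to fit a classifier on features from one interview domain and score it on another:

```python
# Illustrative sketch only: placeholder data and model, not the
# dissertation's actual pipeline or results.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder VAGP feature matrices: rows = interview segments,
# columns = extracted cues (e.g., gaze statistics, facial action unit
# intensities, vocal features, physiological signals from wearables).
X_wellbeing = rng.normal(size=(200, 32))
y_wellbeing = rng.integers(0, 2, 200)   # 1 = deceptive, 0 = truthful
X_crime = rng.normal(size=(150, 32))
y_crime = rng.integers(0, 2, 150)

# Train a classifier in one domain...
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_wellbeing, y_wellbeing)

# ...then score it both in-domain and in a different domain
# to measure how well the learned deception cues transfer.
in_domain = roc_auc_score(y_wellbeing, clf.predict_proba(X_wellbeing)[:, 1])
cross_domain = roc_auc_score(y_crime, clf.predict_proba(X_crime)[:, 1])
print(f"in-domain AUC: {in_domain:.2f}, cross-domain AUC: {cross_domain:.2f}")
```

The gap between the in-domain and cross-domain scores is one way to quantify how domain-specific the learned deception cues are.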
Finally, we conduct focus group sessions with practicing clinicians—many of whom are themselves mental health clients—to assess the practical, ethical, and relational dynamics of deploying AI-enabled deception detection in therapy.
Key qualitative findings reveal that while deception is often perceived as infrequent, it can pose significant clinical risks, with motivations linked to fear, external pressures, and certain disorders. Quantitatively, feature overlap suggests shared deception-related cues between well-being and crime domains, as well as between biographical and academic domains. This work informs the design of context-sensitive, ethically grounded AI systems to support mental health professionals in understanding, detecting, and addressing deception, ultimately contributing to safer and more effective therapeutic care.
Scholar Commons Citation
King, Sayde Leya, "An Exploratory Analysis of Automated Deception Detection for Mental Health Applications" (2025). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/10875
