Graduation Year

2024

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Shaun Canavan, Ph.D.

Committee Member

Lawrence Hall, Ph.D.

Committee Member

Sudeep Sarkar, Ph.D.

Committee Member

Lijun Yin, Ph.D.

Committee Member

Yasin Yilmaz, Ph.D.

Committee Member

Fallon R. Goodman, Ph.D.

Keywords

Affective computing, behavior analytics, computer vision, contextualization and personalization, machine learning, network science

Abstract

Affective computing (AC) is a sub-domain of AI that has the potential to assist people by assessing mental states and making appropriate recommendations to patients, loved ones, caregivers, and domain experts. Humans produce enormous amounts of data (such as face videos) every day, and a major challenge for affective computer vision is to deal efficiently with these high volumes of data to facilitate automated model development. To address this challenge, we developed computer vision algorithms that measure the expressivity of the human face from video data. More precisely, the developed algorithms map complex affect information from unstructured video data to 1D time-series data that can be used for downstream tasks. This work enables large-scale affective visual data analytics. Some use cases follow: i) it can quantify differences among users in facial expressiveness, which is crucial since people differ; ii) it can enable data quality inspection, which is essential since data collected with different devices in different scenarios can carry biases that may, in turn, induce biases in an AI model.
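The video-to-time-series idea above can be illustrated with a minimal sketch. It assumes per-frame 2D facial landmarks are already extracted; the displacement-from-neutral measure and all names here are illustrative stand-ins, not the dissertation's actual expressivity algorithm.

```python
import numpy as np

def expressivity_series(landmarks, neutral=None):
    """Collapse per-frame facial landmarks (T x N x 2) into a 1D
    expressivity time series: mean landmark displacement from a
    neutral reference frame (the first frame, by default).
    Illustrative proxy only, not the dissertation's method."""
    landmarks = np.asarray(landmarks, dtype=float)
    if neutral is None:
        neutral = landmarks[0]          # assume frame 0 is near-neutral
    disp = np.linalg.norm(landmarks - neutral, axis=-1)  # T x N distances
    return disp.mean(axis=-1)           # one expressivity scalar per frame

# toy example: 3 frames, 2 landmarks drifting away from frame 0
frames = [[[0, 0], [1, 1]],
          [[0, 1], [1, 2]],
          [[0, 2], [1, 3]]]
series = expressivity_series(frames)
```

A per-video 1D signal like `series` is what makes the large-scale analytics possible: comparing users' expressiveness reduces to comparing scalar time series rather than raw video.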

To push the boundary further, we proposed network-centric modeling of human affective behavior, in which affective behavior is modeled as a spatio-temporal graph to measure the dynamics of facial expressiveness as a whole system. In experiments on several publicly available datasets of facial muscle movements and affect reports, we found the model effective for visualizing and extracting insights from the derivative and partially neutralized affect data using graph analytics. Using the measured network-centric characteristics of the data, we identified distributional shifts based on reporting perspective/context (e.g., self, observer) and on data collection sources (e.g., position and orientation of the camera used to record data). These models provide utility for data quality inspection of affective video data for ML development and affect analytics.
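One way to picture the network-centric view is as a graph whose nodes are facial action units and whose edges link units that co-activate, with summary graph statistics serving as the "network-centric characteristics" compared across data sources. The sketch below is a simplified, assumed construction (correlation thresholding, edge density), not the dissertation's spatio-temporal model.

```python
import numpy as np

def affect_network(au_series, threshold=0.5):
    """Build an affect network from action-unit (AU) activation
    time series (N_units x T): nodes are AUs, and two AUs are
    connected when the absolute Pearson correlation of their
    activations exceeds `threshold` (illustrative choice)."""
    corr = np.corrcoef(au_series)                 # N x N correlation matrix
    adj = (np.abs(corr) > threshold).astype(int)  # threshold into edges
    np.fill_diagonal(adj, 0)                      # no self-loops
    return adj

def network_density(adj):
    """Fraction of possible edges present: one simple network-centric
    characteristic for comparing recordings or data sources."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))

# toy example: AU0 and AU1 move together, AU2 is independent
aus = np.array([[0.0, 1.0, 0.0, 1.0],
                [0.1, 0.9, 0.1, 0.9],
                [1.0, 1.0, 0.0, 0.0]])
adj = affect_network(aus)
```

Comparing a statistic such as `network_density` across recordings from different cameras or reporting perspectives is one concrete way a distributional shift in the data could surface.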

This work also proposes a machine learning (ML) model that incorporates context awareness for human behavior modeling and assessment, given that context, both external (e.g., environment) and internal (e.g., physical/emotional state), affects how humans perceive and predict emotional behaviors. For instance, we found that for patients with chronic lower back (CLB) pain, painful behavior is statistically significantly associated with protective behaviors and is modulated by contextual and subjective factors. We further observed that accounting for this association, along with its factors, significantly improves the performance of ML models for behavior perception. Although context-aware modeling makes the model more sound and improves predictive performance, it brings its own challenges due to the evolving nature of contextual behaviors: in our experiments, model performance degraded when this evolution was not accounted for, owing to data distribution shift. In addition, recent ethical guidelines recommend responsible communication and contextual calibration when developing personalized context-aware AC applications. Hence, we proposed a cooperative learning framework for affective behavior (e.g., pain) assessment that can improve user trust by involving users in the development process. The proposed approach is also resource-efficient, as it automates the labeling process and asks users only when it requires assistance. Furthermore, leveraging the patient's body and muscle movements, we obtained performance competitive with previous methods for CLB pain perception on the benchmark dataset while using a limited number of training examples. We also observed a major boost in predictive performance by incorporating personalization into the framework. This work also has the potential to address ML development challenges arising from data scarcity in healthcare.
Finally, we further improved the context-aware model by leveraging continual learning and human feedback, addressing the training limitations of cooperative learning, for better resource efficiency (retraining time, memory usage) and reduced human intervention time.
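The "ask users only when the model requires assistance" idea admits a compact sketch: auto-label confident predictions and defer uncertain ones to a human. The confidence threshold, function names, and oracle interface below are assumptions for illustration, not the dissertation's framework.

```python
import numpy as np

def cooperative_label(probs, oracle, tau=0.8):
    """Sketch of a cooperative labeling step: accept the model's
    prediction when its confidence exceeds `tau`, otherwise ask the
    human `oracle` for a label. Returns the labels and the number of
    human queries (a proxy for intervention cost)."""
    labels, queries = [], 0
    for i, p in enumerate(probs):            # p: class-probability vector
        if p.max() >= tau:
            labels.append(int(p.argmax()))   # confident: auto-label
        else:
            labels.append(oracle(i))         # uncertain: defer to the user
            queries += 1
    return labels, queries

# toy example: two confident predictions, one uncertain one
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels, queries = cooperative_label(probs, oracle=lambda i: 1)
```

Lowering `tau` reduces human queries at the cost of accepting less confident machine labels, which is the resource-efficiency trade-off the framework navigates.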
