Multimodal Neurocognitive and Neurohormonal Assessment of Human Behavior
Friday, March 6, 2026
12:00 PM-2:00 PM
BIOMED PhD Thesis Defense
Title:
Multimodal Neurocognitive and Neurohormonal Assessment of Human Behavior During Naturalistic Human-Robot Interactions
Speaker:
Yigit Topoglu, PhD Candidate
School of Biomedical Engineering, Science and Health Systems
Drexel University
Advisor:
Hasan Ayaz, PhD
Professor
School of Biomedical Engineering, Science and Health Systems
Drexel University
Details:
Advancing the understanding of real-world human experience through a comprehensive, integrated brain-body-behavior assessment framework is a primary goal of neuroergonomics, a field at the intersection of cognitive neuroscience, biomedical engineering, psychology, human factors, and related disciplines. The neuroergonomic approach uses increasingly mobile and wearable neuroimaging to record biomedical signals alongside complex behavior, providing the opportunity to understand the continuum of neural, physiological, psychological, and behavioral mechanisms in everyday settings and real-world scenarios.
The objective of this dissertation is to apply the neuroergonomic approach by combining wearable neuroimaging using functional near-infrared spectroscopy (fNIRS) with hormonal measures of oxytocin (OT), self-reported surveys, and behavioral measures to assess human social mechanisms toward humanoid robots. Specifically, this work targets naturalistic face-to-face human-robot interactions (HRI) in diverse scenarios such as social conversations and collaborative teaming. The use of autonomous robots in everyday settings is increasing rapidly, influencing the daily lives of interacting individuals and society at large. Designing robots that are socially adept with human users has therefore become one of the central challenges of the HRI field. To identify the factors and features of robots that optimize HRI, a deeper understanding of human social mechanisms during interaction with robots is needed. Yet gaps persist in our understanding of the neural mechanisms at play in immersive, real-world HRI settings, as traditional HRI assessment methods, such as behavioral and self-reported measures, cannot capture the user's cognitive state, including mental effort and intent, in the moment of interaction. Integrating biomarkers such as cortical activity from mobile neuroimaging and OT measurements with traditional HRI methods can provide extensive insight into how human social mechanisms evolve during naturalistic HRI. Prefrontal cortex (PFC) activity is related to processing social information and collaborative decision-making, and OT is associated with social bonding and trust. The joint integration of such complementary measures offers the potential to enhance understanding of the social and teaming dynamics of HRI beyond either approach alone, and aligns well with the emerging field of neuroergonomics, which aims to study the brain and body in everyday settings.
This multimodal approach can provide valuable insights into the design and development of more socially adept robots in the future.
This dissertation provides several novel contributions to a knowledge base that can inform both human-robot conversation and teaming scenarios. In the first aim, we explored conversational naturalistic HRI, in which the robot acts as a dialogue partner, and examined the effects of the robot's expressiveness (animated vs. stationary) and performance (congruent vs. erroneous) on users' behavior toward the robot using neural, physiological, and behavioral correlates. We found that robot errors reduced perceived trust and influence and were accompanied by elevated OT levels, while animated robots increased PFC activity in the presence of errors. In addition, we found that OT acts as a social salience signal rather than a marker of bonding, especially when the robot is animated. This indicates that robot expressiveness promotes social expectations toward the robot while heightening participants' vigilance, making robot errors more perceptible. In the second aim, we investigated short-term collaborative HRI, in which the user teams up with the robot on a series of collaborative mental and physical tasks, examining the impact of robot performance and task difficulty. We found that the congruent robot promoted PFC activity, trust, rapport, and task performance in the mental short-term collaborative scenarios, while harder tasks elicited higher PFC activity in the physical short-term collaborative scenarios. Together with the results of the first aim, this suggests that the effect of robot performance on PFC activity is contextual: goal-oriented collaboration promotes task engagement, while socially evaluative settings trigger error observation. In the third aim, we expanded our approach to a longitudinal collaborative HRI scenario to evaluate the impact of robot performance during human-robot teaming over four consecutive days. The results confirmed the findings of the second aim.
In addition, participants who interacted first with the congruent robot showed lower PFC activity when they switched to the erroneous robot, while participants who interacted first with the erroneous robot reported higher rapport and pro-sociality when they switched to the congruent robot. These findings highlight the importance of the robot's first impression on human-robot teaming dynamics.
Collectively, this work serves as a blueprint for a more comprehensive neuroergonomic assessment of human social behavior in HRI and provides insights for next-generation socially adept robot design. By revealing the complex interplay among neural, hormonal, psychological, and behavioral mechanisms toward social robots, we offer a new perspective on the future of human-robot interaction research. Beyond robotics, our findings hold cross-disciplinary significance, bridging cognitive neuroscience, engineering, and neuroergonomics at large. As artificial agents become increasingly integrated into society, understanding their impact on human cognition and behavior is critical for designing ethical, psychologically compatible social robots. Our research provides a roadmap for engineering robots that align with human neurobiological processes, offering key insights for the next generation of socially aware robotics.
Contact Information
Natalia Broz
njb33@drexel.edu