Research Statement
I study how people form and sustain relationships with AI, and how design choices make those relationships healthy, coherent, and useful. My focus is relational AI—especially the tension between user agency and system adaptivity in companionship, wellness, and everyday assistance. I aim to translate these insights into design principles for agents that are legible, stable, and genuinely helpful over time.
Research Trajectory & Interests
Foundations in Health-Centered & Longitudinal Research
My approach to research began in high-stakes medical environments, where a joint project between the University of Minnesota's Medical School and College of Design taught me that in-the-field observation is essential for designing tools that work under real-world conditions. Recognizing the unique perspective an HCI researcher with direct clinical research experience could bring, I joined the UMN Department of Family Medicine to work on the Preschool Plates NIH R01 study. This hands-on, human-centered work with parent–preschooler dyads deepened my understanding of the ethical and logistical demands of longitudinal research with sensitive populations, and it continues to inform my design practice (JMIR protocol).
Adaptive Conversational AI
My background in health directly informed my specialization in conversational AI. As a Conversational Designer for a health education LLM chatbot, I conducted user research with medical residents to design adaptive feedback mechanisms that enhanced engagement. This practical experience motivated my master's thesis and the subsequent Kagami project, an experimental platform I designed and built to investigate the core trade-offs in relational AI. The work systematically tested two competing paths to personalization: visible user agency versus covert system mimicry. That comparison led to the discovery of what I term the "Adaptation Paradox."
Finding 1: The Adaptation Paradox & The Power of User Agency
In a preregistered 3×2 experiment (N=162), I found that giving users creative agency (letting them generate their own chatbot avatar) significantly increased rapport (ω²=.040, p=.013). Conversely, a technically superior adaptive language style was paradoxically perceived as less adaptive (d=0.48) and less satisfying than a simple, static persona. This "Adaptation Paradox" suggests that personalization must be legible and attributable to a coherent agent to be effective; invisible mimicry risks feeling incoherent and undermining the user's connection.
Related Publication: The Adaptation Paradox: Agency vs. Mimicry in Companion Chatbots (CHI'26, under review). [Preprint]
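To make the reported magnitudes concrete, the sketch below shows how ω² and Cohen's d are conventionally computed for a between-subjects comparison. It is illustrative only, assuming a one-way decomposition over placeholder avatar conditions and simulated ratings; it is not the study's actual analysis code.

```python
import numpy as np

def omega_squared(groups):
    """Omega-squared for a one-way design:
    (SS_between - df_between * MS_within) / (SS_total + MS_within)."""
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    ms_within = ss_within / (len(scores) - len(groups))
    return (ss_between - df_between * ms_within) / (ss_between + ss_within + ms_within)

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Placeholder condition labels and simulated rapport ratings (not study data).
rng = np.random.default_rng(0)
generated_avatar, assigned_avatar, no_avatar = (rng.normal(m, 1.0, 54) for m in (5.4, 5.0, 4.9))
print(omega_squared([generated_avatar, assigned_avatar, no_avatar]))
print(cohens_d(generated_avatar, no_avatar))
```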
Finding 2: Navigating the Synchrony–Stability Frontier
To explain and resolve the paradox, I developed a computational framework to formalize the trade-off between moment-to-moment linguistic mimicry (synchrony) and long-term persona consistency (stability). My analysis charted a "Pareto frontier" of adaptation policies, identifying strategies that maximize stability with minimal cost to synchrony. This work provides an engineering and evaluation framework for building adaptive chatbots that are both responsive and stable.
Related Publication: Navigating the Synchrony-Stability Frontier in Adaptive Chatbots (IUI'26, under review). [Preprint]
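Operationally, the frontier can be recovered with a standard non-dominance filter over candidate adaptation policies scored on the two objectives. The sketch below is a minimal illustration; the policy names and scores are hypothetical placeholders, and the scoring of synchrony and stability is abstracted away rather than taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    synchrony: float  # moment-to-moment linguistic alignment (higher is better)
    stability: float  # long-term persona consistency (higher is better)

def pareto_frontier(policies):
    """Keep policies that no other policy beats on both objectives."""
    frontier = [
        p for p in policies
        if not any(
            q.synchrony >= p.synchrony and q.stability >= p.stability
            and (q.synchrony > p.synchrony or q.stability > p.stability)
            for q in policies
        )
    ]
    return sorted(frontier, key=lambda p: p.synchrony)

# Hypothetical policies spanning the mimicry-consistency spectrum (not the paper's values).
candidates = [
    Policy("full mimicry", synchrony=0.92, stability=0.41),
    Policy("damped mimicry", synchrony=0.78, stability=0.74),
    Policy("anchored persona", synchrony=0.55, stability=0.90),
    Policy("static persona", synchrony=0.30, stability=0.97),
    Policy("noisy mimicry", synchrony=0.60, stability=0.50),  # dominated by damped mimicry
]
for p in pareto_frontier(candidates):
    print(f"{p.name}: synchrony={p.synchrony}, stability={p.stability}")
```

A policy like the hypothetical "damped mimicry" above, which trades a little synchrony for a large gain in stability, is the kind of strategy this framework is designed to surface.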
Future Horizons
Moving forward, I aim to extend this research by exploring the longitudinal dynamics of human-AI relationships. I am particularly interested in how multimodal interaction, incorporating generative avatars, voice, and other channels, shapes user perceptions and behavior over time in contexts like digital companionship, personal wellness, and online dating. My goal is to investigate how we can design AI systems that not only adapt to us, but that we can also grow with, fostering healthier and more transparent long-term partnerships between humans and machines.
Publications & Presentations
Peer-Reviewed Publications
- Brandt, T. J. (2025). Navigating the Synchrony-Stability Frontier in Adaptive Chatbots. (Under review at IUI 2026). [Preprint]
- Brandt, T. J., & Wang, C. X. (2025). The Adaptation Paradox: Agency vs. Mimicry in Companion Chatbots. (Under review at CHI 2026). [Preprint]
- Loth, K. A., Wolfson, J., Barnard, M., Hogan, N., Brandt, T. J., et al. (2025). Examining the Longitudinal Impact of Within- and Between-Day Fluctuations in Food Parenting Practices on Child Dietary Intake: Protocol for a Longitudinal Cohort Study. JMIR Research Protocols. [DOI]
Presentations
- Wang, C., & Brandt, T. J. (2025). Designing AI with a Human-Centered Lens. Presented at the dmi: New Voices Conference.
- Brandt, T. J. (2025). Kagami: Adaptive AI Companion. Presented at the dmi: New Voices Conference.