
Between Reflection and Bias: Designing Philosophical AI for Ethical Dialogue

Project Overview

People increasingly rely on generative AI when thinking through difficult moral choices in areas like healthcare, law, and education.

Research Question

How can conversational AI help users examine their reasoning rather than simply confirm existing beliefs?

Approach

I designed six AI personas grounded in ethical traditions such as utilitarianism, Kantian ethics, Confucianism, Buddhism, and Christianity.

All six personas ran on the same underlying model but differed in moral framing, prompting users to reflect on their reasoning rather than handing them direct answers.
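To make the setup concrete, here is a minimal sketch of how such personas could be implemented as different system prompts layered over a single shared chat model. The persona texts, names, and message format below are illustrative assumptions, not the prompts or tooling used in the study.

```python
# Illustrative sketch only: the study's actual prompts and model are not
# reproduced here. Each persona is a system prompt over the same model.

PERSONAS = {
    "utilitarian": (
        "Reason from a utilitarian perspective: weigh the overall well-being "
        "each option produces. Ask questions that surface overlooked "
        "consequences; encourage reflection instead of giving a verdict."
    ),
    "kantian": (
        "Reason from a Kantian perspective: focus on duties and on whether "
        "the user's maxim could be universalized. Prompt reflection rather "
        "than prescribing an answer."
    ),
    # ...plus Confucian, Buddhist, Christian, and one further persona,
    # defined the same way (placeholders, not the study materials).
}

def persona_messages(persona: str, dilemma: str) -> list[dict]:
    """Build a chat request: one shared model, persona-specific framing."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": dilemma},
    ]

# Example: the same dilemma framed by two different personas.
dilemma = "A colleague asked me to cover up a minor billing error. What should I weigh?"
for name in ("utilitarian", "kantian"):
    print(persona_messages(name, dilemma))
```

Keeping the model fixed and varying only the system prompt isolates moral framing as the experimental variable, so differences in participants' responses can be attributed to the persona rather than to the underlying model.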

Study Design 

Qualitative HCI study

- 21 participants (ages 20–26)

- 12 real-world moral dilemmas across multiple domains

- Think-aloud protocol and post-session interviews

Study procedure showing pre-test, repeated dilemma–persona interactions, and post-session interview.

Key Findings

1. Reflection often turned into validation

2. Trust depended on strict persona consistency

3. Culture shaped trust; emotion shaped decisions

Design Implications

Reflection requires more than disagreement: trust depends on consistent personas, and ethical reasoning is shaped by emotion, context, and cultural familiarity.

Example moral dilemma and its contextual variant used in the study.
