Journaling is a powerful tool for meaning-making, enabling individuals to interpret life experiences and construct personal narratives. However, this reflective process is cognitively demanding and often difficult to sustain without support. While Large Language Models (LLMs) show promise as writing partners, current human-AI co-writing paradigms primarily focus on productivity and fluency. Applying these productivity-oriented models to intimate, autobiographical writing creates a critical tension: how to provide interpretive support that deepens reflection without displacing the writer’s authentic voice or undermining their interpretive agency.
This thesis presents Augmentiary, an AI-augmented journaling system that embeds interpretive support directly into users’ diary entries. Drawing on a three-day formative probe with eight experienced diarists, the study derives four design goals for LLM-based meaning-making support: providing alternative interpretations for comparison, preserving the writer’s voice, supporting writer-led engagement, and attuning interpretations to personal values and autobiographical continuity. These goals are instantiated in a web-based system that offers on-demand, sentence-level, deliberately incomplete suggestions through two features—Perspective-Expanding and Dot-Connecting—and visual attribution cues whose prominence fades as users edit AI-generated text.
To examine how such interpretive support is experienced in practice, a four-week field deployment is conducted with 25 participants undergoing diverse life transitions. Drawing on semi-structured interviews, the study applies inductive, constructionist thematic analysis to characterize how Augmentiary shapes journaling practices, reflection processes, and perceived agency.
Findings show that participants did not treat the AI's interpretive suggestions as authoritative answers but as comparison points in an inner dialogue, using both agreement and resistance to clarify what their experiences meant. Interpretive support helped participants elaborate on their narratives, revisit past experiences, and act on emerging insights; however, repetitive or misaligned suggestions, as well as the effort required to elaborate on them, sometimes constrained reflection.
This thesis contributes: (1) a conceptual framing of LLMs in reflective journaling as inner dialogic collaborators rather than productivity-oriented co-authors; (2) the design and implementation of Augmentiary, which instantiates four design goals for interpretive support in reflective journaling for meaning-making; (3) empirical insights into how people accept, appropriate, and resist the AI's interpretive suggestions in everyday journaling; and (4) design implications for AI-mediated meaning-making systems, emphasizing collaboration that catalyzes inner dialogue, attribution of AI-authored text in personal writing, interface designs that leave room for users' reflective engagement with AI-generated output, and careful allocation of interpretive agency between the writer and the AI.
Publisher: Ulsan National Institute of Science and Technology