File Download

There are no files associated with this item.

Related Researcher

김관명

Kim, KwanMyung
Integration and Innovation Design Lab.


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Orlando, Florida -
dc.citation.endPage 76 -
dc.citation.startPage 70 -
dc.citation.title AHFE2025 -
dc.contributor.author Kim, JoungHyun -
dc.contributor.author Kim, KwanMyung -
dc.date.accessioned 2026-01-12T14:36:05Z -
dc.date.available 2026-01-12T14:36:05Z -
dc.date.created 2026-01-11 -
dc.date.issued 2025-07-27 -
dc.description.abstract LEDs have been increasingly used as expressive media in social robotics due to their versatility and efficiency. However, while LEDs effectively convey basic system statuses such as power, battery life, or error alerts, their potential to represent more complex forms of information remains largely unexplored. Current implementations often rely on designers' intuition rather than a structured methodology, leading to inconsistencies and challenges in user interpretation. To address this gap, this study investigates the information types that can be conveyed through LED-based expressions and establishes a systematic framework for their design. The study categorizes LED-based signals into structured information types that are either primary (serving as the main communication channel) or redundant (supporting other modalities). Additionally, we distinguish between referential expressions, which rely on contextual understanding, and non-referential expressions, which can be interpreted independently. By defining these categories, this research provides a foundation for enhancing LED-based communication in human-robot interaction (HRI). To lay the groundwork for this framework, we conducted an ideation study with six graduate students specializing in product design and social robotics development. Participants explored how different LED design factors (On/Off states, intensity, rhythm, and color) can be manipulated to represent information. The study used a horizontally arranged LED strip to align with human perceptual tendencies, particularly those related to facial expressions and motion perception. The results identified 20 distinct information types that can be effectively represented using LED-based expressions, including system notifications, user responses, paralinguistic cues such as laughter or humming, gestures that mimic human movement, and affective states like happiness or surprise.
Notably, the study found a direct relationship between the number of LEDs and the complexity of information representation. When the number of LEDs matched the information's bit-level structure, users interpreted the display as a set of discrete signals. When the number of LEDs exceeded the required bits, however, users perceived the expression holistically rather than as binary signals. For instance, to represent a concept like "water intake," participants preferred a gradual illumination of LEDs to a strict binary encoding. Additionally, our findings suggest that LED-based expressions requiring contextual information for interpretation (such as ambiguous gestures or emotional states) may benefit from multimodal integration with other robotic expressions, such as motion or sound. However, certain gestures, particularly those involving multiple LEDs, were consistently recognized without external cues, suggesting that increasing the number of LEDs enhances independent interpretability. This research contributes to the field of HRI by providing a structured approach to LED-based expressions, improving their clarity, and reducing reliance on designer intuition. By integrating cognitive communication models, this study highlights the importance of aligning LED expressions with human perceptual and interpretive tendencies. Future research should focus on validating these findings through user studies and expanding the framework to incorporate dynamic and multicolor LED interactions. These insights have implications for the design of expressive robotic systems in both social and functional domains. -
dc.identifier.bibliographicCitation AHFE2025, pp.70 - 76 -
dc.identifier.doi 10.54941/ahfe1006428 -
dc.identifier.issn 978-1-958 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/90255 -
dc.identifier.url https://openaccess.cms-conferences.org/publications/book/978-1-964867-59-5/article/978-1-964867-59-5_7 -
dc.language English -
dc.publisher AHFE International -
dc.title.alternative Types of Interaction-Based Information Conveyed Through LED-Based Robot Expressions -
dc.title Types of Interaction-Based Information Conveyed Through LED-Based Robot Expressions -
dc.type Conference Paper -
dc.date.conferenceDate 2025-07-26 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.