Related Researcher

Kim, Jiyun (김지윤)
Material Intelligence Lab.

Detailed Information

Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface

Author(s)
Lee, Jin Pyo; Jang, Hanhyeok; Jang, Yeonwoo; Song, Hyeonseo; Lee, Suwoo; Lee, Pooi See; Kim, Jiyun
Issued Date
2024-01
DOI
10.1038/s41467-023-44673-2
URI
https://scholarworks.unist.ac.kr/handle/201301/81335
Citation
NATURE COMMUNICATIONS, v.15, no.1, pp.530
Abstract
Human affects such as emotions, moods, and feelings are increasingly being considered as key parameters for enhancing the interaction of humans with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, featuring the first bidirectional triboelectric strain and vibration sensor, which enables verbal and non-verbal expression data to be sensed and combined for the first time. It is fully integrated with a data-processing circuit for wireless data transfer, allowing real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the wearer is masked, and a digital concierge application is demonstrated in a VR environment.
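
The article itself does not publish code, but the recognition pipeline the abstract describes (facial strain and vocal vibration signals fused and classified by a machine-learning model) can be sketched schematically. The Python sketch below is a hypothetical illustration only: the synthetic data, the 32-dimensional features per modality, the six emotion classes, and the random-forest classifier are all assumptions, not the authors' implementation.

# Hypothetical sketch of multi-modal emotion classification:
# fuse non-verbal (strain) and verbal (vibration) features, then classify.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed sensor features (dimensions assumed).
X_strain = rng.normal(size=(600, 32))     # non-verbal: facial strain
X_vibration = rng.normal(size=(600, 32))  # verbal: vocal-fold vibration
y = rng.integers(0, 6, size=600)          # six illustrative emotion labels

# Feature-level fusion: concatenate the two modalities so the classifier
# can exploit complementary verbal/non-verbal cues.
X = np.concatenate([X_strain, X_vibration], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

On real data, each row would be a feature vector extracted from one sensing window of the PSiFI, and the held-out accuracy would reflect how separable the fused modalities make the emotion classes.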
Publisher
NATURE PORTFOLIO
ISSN
2041-1723
Keyword
RECOGNITION; NANOGENERATOR; VOICE; FACE

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.