Related Researcher

김지윤

Kim, Jiyun
Material Intelligence Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.number 1 -
dc.citation.startPage 530 -
dc.citation.title NATURE COMMUNICATIONS -
dc.citation.volume 15 -
dc.contributor.author Lee, Jin Pyo -
dc.contributor.author Jang, Hanhyeok -
dc.contributor.author Jang, Yeonwoo -
dc.contributor.author Song, Hyeonseo -
dc.contributor.author Lee, Suwoo -
dc.contributor.author Lee, Pooi See -
dc.contributor.author Kim, Jiyun -
dc.date.accessioned 2024-02-07T18:05:13Z -
dc.date.available 2024-02-07T18:05:13Z -
dc.date.created 2024-02-04 -
dc.date.issued 2024-01 -
dc.description.abstract Abstract: Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing the interaction between humans and diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that efficiently utilizes comprehensive emotional information by combining verbal and non-verbal expression data. The system is built on a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, featuring the first bidirectional triboelectric strain and vibration sensor, which enables verbal and non-verbal expression data to be sensed and combined for the first time. It is fully integrated with a data-processing circuit for wireless data transfer, allowing real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while wearing a mask, and a digital concierge application is demonstrated in a VR environment. -
dc.identifier.bibliographicCitation NATURE COMMUNICATIONS, v.15, no.1, pp.530 -
dc.identifier.doi 10.1038/s41467-023-44673-2 -
dc.identifier.issn 2041-1723 -
dc.identifier.scopusid 2-s2.0-85182473640 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/81335 -
dc.identifier.url https://www.nature.com/articles/s41467-023-44673-2 -
dc.identifier.wosid 001143918100011 -
dc.language English -
dc.publisher NATURE PORTFOLIO -
dc.title Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Multidisciplinary Sciences -
dc.relation.journalResearchArea Science & Technology - Other Topics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordPlus RECOGNITION -
dc.subject.keywordPlus NANOGENERATOR -
dc.subject.keywordPlus VOICE -
dc.subject.keywordPlus FACE -
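
The abstract describes combining verbal (vibration) and non-verbal (strain) sensor data for machine-learning emotion recognition. The following is a minimal sketch of the general feature-level fusion idea only, using synthetic data, made-up names, and a toy nearest-centroid classifier; it is not the authors' pipeline or dataset.

```python
import numpy as np

# Hypothetical illustration of feature-level fusion: strain (non-verbal) and
# vibration (verbal) feature vectors are concatenated per sample and fed to a
# single classifier. All names, centers, and dimensions are invented.

rng = np.random.default_rng(0)

def make_samples(center, n=20, dim=4):
    """Synthetic feature vectors clustered around a per-emotion center."""
    return rng.normal(loc=center, scale=0.3, size=(n, dim))

EMOTIONS = ["neutral", "happy", "angry"]

# One synthetic cluster per emotion for each sensing modality.
strain = {e: make_samples(center=i) for i, e in enumerate(EMOTIONS)}
vibration = {e: make_samples(center=2 * i) for i, e in enumerate(EMOTIONS)}

def fuse(e):
    """Concatenate the two modalities into one fused feature matrix."""
    return np.hstack([strain[e], vibration[e]])

# Toy nearest-centroid classifier over the fused 8-dim feature space.
centroids = {e: fuse(e).mean(axis=0) for e in EMOTIONS}

def predict(x):
    return min(EMOTIONS, key=lambda e: np.linalg.norm(x - centroids[e]))

acc = np.mean([predict(x) == e for e in EMOTIONS for x in fuse(e)])
print(f"training accuracy: {acc:.2f}")
```

Concatenation before classification (early fusion) is only one of several ways to combine modalities; decision-level (late) fusion, where each modality gets its own classifier and votes are merged, is an equally common alternative.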

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.