<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/147" />
  <subtitle />
  <id>https://scholarworks.unist.ac.kr/handle/201301/147</id>
  <updated>2026-04-20T04:08:48Z</updated>
  <dc:date>2026-04-20T04:08:48Z</dc:date>
  <entry>
    <title>Thor's Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91162" />
    <author>
      <name>Heo, Seongkook</name>
    </author>
    <author>
      <name>Chung, Christina</name>
    </author>
    <author>
      <name>Lee, Geehyuk</name>
    </author>
    <author>
      <name>Wigdor, Daniel</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91162</id>
    <updated>2026-03-31T05:30:11Z</updated>
    <published>2018-04-20T15:00:00Z</published>
    <summary type="text">Title: Thor's Hammer: An Ungrounded Force Feedback Device Utilizing Propeller-Induced Propulsive Force
Author(s): Heo, Seongkook; Chung, Christina; Lee, Geehyuk; Wigdor, Daniel
Abstract: We present a new handheld haptic device, Thor's Hammer, which uses propeller propulsion to generate ungrounded, 3-DOF force feedback. Thor's Hammer has six motors and propellers that generate strong thrusts of air without the need for physical grounding or heavy air compressors. With its location and orientation tracked by an optical tracking system, the system can exert forces in arbitrary directions regardless of the device's orientation. Our technical evaluation shows that Thor's Hammer can apply up to 4 N of force in arbitrary directions, with average magnitude and orientation errors of less than 0.11 N and 3.9°, respectively. We also present virtual reality applications that can benefit from the force feedback provided by Thor's Hammer. Using these applications, we conducted a preliminary user study in which participants found the experience more realistic and immersive with force feedback.</summary>
    <dc:date>2018-04-20T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>You Watch, You Give, and You Engage: A Study of Live Streaming Practices in China</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91161" />
    <author>
      <name>Lu, Zhicong</name>
    </author>
    <author>
      <name>Xia, Haijun</name>
    </author>
    <author>
      <name>Heo, Seongkook</name>
    </author>
    <author>
      <name>Wigdor, Daniel</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91161</id>
    <updated>2026-03-31T05:30:11Z</updated>
    <published>2018-04-20T15:00:00Z</published>
    <summary type="text">Title: You Watch, You Give, and You Engage: A Study of Live Streaming Practices in China
Author(s): Lu, Zhicong; Xia, Haijun; Heo, Seongkook; Wigdor, Daniel
Abstract: Despite gaining traction in North America, live streaming has not reached the popularity it has in China, where live streaming has a tremendous impact on the social behaviors of users. To better understand this socio-technological phenomenon, we conducted a mixed-methods study of live streaming practices in China. We present the results of an online survey of 527 live streaming users, focusing on their broadcasting or viewing practices and the experiences they find most engaging. We also interviewed 14 active users to explore their motivations and experiences. Our data revealed the different categories of content that were broadcast and how varying aspects of this content engaged viewers. We also gained insight into the role that reward systems and fan group-chats play in engaging users, and found evidence that both viewers and streamers desire deeper channels and mechanisms for interaction beyond the commenting, gifting, and fan groups available today.</summary>
    <dc:date>2018-04-20T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>FDSense: Estimating Young's Modulus and Stiffness of End Effectors to Facilitate Kinetic Interaction on Touch Surfaces</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91160" />
    <author>
      <name>Hong, Sanghwa</name>
    </author>
    <author>
      <name>Jeong, Eunseok</name>
    </author>
    <author>
      <name>Heo, Seongkook</name>
    </author>
    <author>
      <name>Lee, Byungjoo</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91160</id>
    <updated>2026-03-31T05:30:10Z</updated>
    <published>2018-10-13T15:00:00Z</published>
    <summary type="text">Title: FDSense: Estimating Young's Modulus and Stiffness of End Effectors to Facilitate Kinetic Interaction on Touch Surfaces
Author(s): Hong, Sanghwa; Jeong, Eunseok; Heo, Seongkook; Lee, Byungjoo
Abstract: Touch input is made by physically colliding an end effector (e.g., a body part or a stylus) with a touch surface. Prior studies have examined the use of kinematic variables of the collision between objects, such as position, velocity, force, and impact. However, the nature of the collision can be understood more thoroughly by considering the known physical relationships between directly measurable variables (i.e., kinetics). Based on this collision kinetics, this study proposes a novel touch technique called FDSense. By simultaneously observing the force and contact area measured on the touchpad, FDSense estimates the Young's modulus and stiffness of the object in contact. Our technical evaluation showed that FDSense could effectively estimate the Young's modulus of end effectors made of various materials, as well as the stiffness of each part of the human hand. We demonstrated two applications of FDSense, digital painting and digital instruments, in which the expressive result varies significantly with the elasticity of the end effector. In a follow-up informal study, participants assessed the technique positively.</summary>
    <dc:date>2018-10-13T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>StreamWiki: Enabling Viewers of Knowledge Sharing Live Streams to Collaboratively Generate Archival Documentation for Effective In-Stream and Post Hoc Learning</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/91159" />
    <author>
      <name>Lu, Zhicong</name>
    </author>
    <author>
      <name>Heo, Seongkook</name>
    </author>
    <author>
      <name>Wigdor, Daniel J.</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/91159</id>
    <updated>2026-03-31T05:30:09Z</updated>
    <published>2018-11-02T15:00:00Z</published>
    <summary type="text">Title: StreamWiki: Enabling Viewers of Knowledge Sharing Live Streams to Collaboratively Generate Archival Documentation for Effective In-Stream and Post Hoc Learning
Author(s): Lu, Zhicong; Heo, Seongkook; Wigdor, Daniel J.
Abstract: Knowledge-sharing live streams are distinct from traditional educational videos, not least because of the large, concurrently viewing audience and the real-time discussion between viewers and the streamer. Though this creates unique opportunities for interactive learning, it also makes it challenging to create a useful archive for post hoc learning. This paper presents the results of interviews with knowledge-sharing streamers, their moderators, and viewers, conducted to understand current experiences and needs for sharing and learning knowledge through live streaming. Based on those findings, we built StreamWiki, a tool that leverages viewers during live streams to produce useful archives of the interactive learning experience. On StreamWiki, moderators initiate tasks that viewers complete by conducting microtasks, such as writing summaries, commenting, and voting for informative comments. As a result, a summary document is built in real time. Through tests of our prototype with streamers and viewers, we found that StreamWiki helped users understand the content and context of a stream, both during the stream and for post hoc learning.</summary>
    <dc:date>2018-11-02T15:00:00Z</dc:date>
  </entry>
</feed>

