Related Researcher

Lee, Kyungho (이경호)
Expressive Computing Lab.

Detailed Information

Designing Interfaces for Text-To-Image Prompt Engineering Using Stable Diffusion Models: A Human-AI Interaction Approach

Author(s)
Kim, Seonuk; Lee, Kyungho
Issued Date
2023-10-10
URI
https://scholarworks.unist.ac.kr/handle/201301/74533
Citation
IASDR 2023
Abstract
The use of generative artificial intelligence (AI) is more vital than ever before for creating new content, especially images. Recent breakthroughs in text-to-image diffusion models have shown the potential to drastically change the way we approach image content creation. However, artists still face challenges when attempting to create images that reflect their specific themes and formats, as current generative systems such as Stable Diffusion models require the right prompts to achieve the desired artistic outputs. In this paper, we propose future design considerations for developing more intuitive and effective interfaces for text-to-image prompt engineering from a human-AI interaction perspective, using a data-driven approach. We collected 78,911 posts from an internet community and analyzed them through thematic analysis. Our proposed directions for interface design can help improve both usability and the user experience, ultimately leading to a more effective image generation process that better matches creators' desired outcomes.
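As an illustration only (not taken from the paper), community prompts for Stable Diffusion are commonly assembled from a subject plus style and quality modifier tokens; a minimal sketch of such a prompt builder, with all names hypothetical:

```python
def build_prompt(subject, style=None, modifiers=()):
    """Compose a text-to-image prompt from structured parts.

    All parameter names are illustrative, not from the paper:
    `subject` is the main content, `style` an optional art style,
    and `modifiers` a tuple of quality/lighting tokens.
    """
    parts = [subject]
    if style:
        parts.append(style)
    parts.extend(modifiers)
    # Stable Diffusion prompts are typically comma-separated token lists.
    return ", ".join(parts)

prompt = build_prompt(
    "a lighthouse at dusk",
    style="oil painting",
    modifiers=("highly detailed", "warm lighting"),
)
# prompt == "a lighthouse at dusk, oil painting, highly detailed, warm lighting"
```

An interface of the kind the abstract envisions could expose such structured fields instead of a single free-text box, which is one plausible reading of "more intuitive and effective interfaces" for prompt engineering.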
Publisher
International Association of Societies of Design Research

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.