File Download

There are no files associated with this item.

Related Researcher

김형훈

Kim, Hyounghun


Full metadata record

DC Field Value
dc.citation.conferencePlace US
dc.citation.conferencePlace Dublin, IRELAND
dc.citation.endPage 118
dc.citation.startPage 113
dc.citation.title Workshop on Insights from Negative Results in NLP
dc.contributor.author Kim, Hyounghun
dc.contributor.author Padmakumar, Aishwarya
dc.contributor.author Jin, Di
dc.contributor.author Bansal, Mohit
dc.contributor.author Hakkani-Tur, Dilek
dc.date.accessioned 2024-01-31T20:35:48Z
dc.date.available 2024-01-31T20:35:48Z
dc.date.created 2022-10-21
dc.date.issued 2022-05-26
dc.description.abstract Natural language guided embodied task completion is a challenging problem, since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action. We observed that the proposed modules resulted in improved, and in fact state-of-the-art, performance on an unseen validation set of a popular benchmark dataset, ALFRED. However, our best model selected using the unseen validation set underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation set may not in itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result because we believe it may be a wider phenomenon in machine learning tasks, one that is primarily noticeable in benchmarks that limit evaluations on test splits, and because it highlights the need to modify benchmark design to better account for variance in model performance.
dc.identifier.bibliographicCitation Workshop on Insights from Negative Results in NLP, pp.113 - 118
dc.identifier.scopusid 2-s2.0-85137479249
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/75879
dc.identifier.url https://aclanthology.org/2022.insights-1.15/
dc.identifier.wosid 000846896600015
dc.language English
dc.publisher ASSOC COMPUTATIONAL LINGUISTICS-ACL
dc.title On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets
dc.type Conference Paper
dc.date.conferenceDate 2022-05-26
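The abstract's central observation — that the model scoring best on an unseen validation split may underperform on the unseen test split — is an instance of selection bias ("winner's curse"). A toy simulation can make the mechanism concrete; everything below (the number of candidates, the skill level, the episode count) is a hypothetical illustration and is not taken from the paper:

```python
import random

random.seed(0)

def noisy_score(true_skill, episodes=200):
    # Estimate a success rate from a finite number of evaluation episodes;
    # the estimate fluctuates around true_skill due to sampling noise.
    return sum(random.random() < true_skill for _ in range(episodes)) / episodes

TRUE_SKILL = 0.40   # all candidate models are equally good by construction
CANDIDATES = 20

val_scores = [noisy_score(TRUE_SKILL) for _ in range(CANDIDATES)]
test_scores = [noisy_score(TRUE_SKILL) for _ in range(CANDIDATES)]

# Select the "best" model by its validation score, as benchmark leaderboards do.
best = max(range(CANDIDATES), key=lambda i: val_scores[i])

print(f"selected model's validation score: {val_scores[best]:.3f}")
print(f"same model's test score:           {test_scores[best]:.3f}")
print(f"mean test score of all candidates: {sum(test_scores) / CANDIDATES:.3f}")
```

Because the selected model's validation score is the maximum of several noisy estimates, it is biased upward, while its independent test score regresses toward the shared true skill — so the validation number of the chosen model systematically overstates its generalization, which is the phenomenon the paper reports for ALFRED's limited test-split evaluations.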


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.