Related Researcher

임성훈 (Lim, Sunghoon)
Industrial Intelligence Lab.


Full metadata record

DC Field Value Language
dc.citation.startPage 115400 -
dc.citation.title EXPERT SYSTEMS WITH APPLICATIONS -
dc.citation.volume 183 -
dc.contributor.author Choi, Jae Gyeong -
dc.contributor.author Kong, Chan Woo -
dc.contributor.author Kim, Gyeongho -
dc.contributor.author Lim, Sunghoon -
dc.date.accessioned 2023-12-21T15:08:50Z -
dc.date.available 2023-12-21T15:08:50Z -
dc.date.created 2021-06-19 -
dc.date.issued 2021-11 -
dc.description.abstract Due to the increase in motor vehicle accidents, there is a growing need for high-performance car crash detection systems. The authors of this research propose a car crash detection system that uses both video data and audio data from dashboard cameras in order to improve car crash detection performance. While most existing car crash detection systems depend on single modal data (i.e., video data or audio data only), the proposed car crash detection system uses an ensemble deep learning model based on multimodal data (i.e., both video and audio data), because different types of data extracted from one information source (e.g., dashboard cameras) can be regarded as different views of the same source. These different views complement one another and improve detection performance, because one view may have information that the other view does not contain. In this research, deep learning techniques, gated recurrent unit (GRU) and convolutional neural network (CNN), are used to develop a car crash detection system. A weighted average ensemble is used as an ensemble technique. The proposed car crash detection system, which is based on multiple classifiers that use both video and audio data from dashboard cameras, is validated using a comparison with single classifiers that use video data or audio data only. Car accident YouTube clips are used to validate this research. The experimental results indicate that the proposed car crash detection system performs significantly better than single classifiers. It is expected that the proposed car crash detection system can be used as part of an emergency road call service that recognizes traffic accidents automatically and allows immediate rescue after transmission to emergency recovery agencies. -
dc.identifier.bibliographicCitation EXPERT SYSTEMS WITH APPLICATIONS, v.183, pp.115400 -
dc.identifier.doi 10.1016/j.eswa.2021.115400 -
dc.identifier.issn 0957-4174 -
dc.identifier.scopusid 2-s2.0-85109168741 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/53109 -
dc.identifier.url https://www.sciencedirect.com/science/article/pii/S095741742100823X?via%3Dihub -
dc.identifier.wosid 000692066300010 -
dc.language English -
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD -
dc.title Car crash detection using ensemble deep learning and multimodal data from dashboard cameras -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Operations Research & Management Science -
dc.relation.journalResearchArea Computer Science; Engineering; Operations Research & Management Science -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Dashboard camera; Car crash; Multimodal data; Deep learning; Ensemble technique -
dc.subject.keywordPlus NEURAL-NETWORK; SYSTEM -
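
The abstract above describes a weighted average ensemble that combines a video-based classifier and an audio-based classifier from dashboard camera data. As a rough illustration only, and not the authors' implementation, the sketch below shows how per-clip crash probabilities from two single-modality models might be combined; the function name, weights, and probability values are hypothetical.

    import numpy as np

    def weighted_average_ensemble(p_video, p_audio, w_video=0.6, w_audio=0.4):
        """Combine crash probabilities from a video classifier and an audio
        classifier with a weighted average (weights here are illustrative)."""
        p_video = np.asarray(p_video, dtype=float)
        p_audio = np.asarray(p_audio, dtype=float)
        assert np.isclose(w_video + w_audio, 1.0), "weights should sum to 1"
        return w_video * p_video + w_audio * p_audio

    # Hypothetical per-clip crash probabilities from each single-modality model
    p_video = [0.82, 0.10, 0.55]   # e.g., outputs of a video (CNN/GRU) branch
    p_audio = [0.90, 0.05, 0.40]   # e.g., outputs of an audio branch
    p_crash = weighted_average_ensemble(p_video, p_audio)
    print((p_crash >= 0.5).astype(int))  # -> [1 0 0], predicted crash labels

A weighted average keeps each modality's contribution explicit; the relative weights can be tuned on validation data when one modality is more reliable than the other.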

