Related Researcher

Noh, Sam H. (노삼혁)


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.title USENIX Workshop on Hot Topics in Storage and File Systems -
dc.contributor.author Kim, Byungseok -
dc.contributor.author Kim, Jaeho -
dc.contributor.author Noh, Sam H. -
dc.date.accessioned 2023-12-19T18:37:56Z -
dc.date.available 2023-12-19T18:37:56Z -
dc.date.created 2018-09-11 -
dc.date.issued 2017-07-11 -
dc.description.abstract With the advent of high-performing NVMe SSDs, the bottleneck of system performance is shifting away from the traditional storage device. In particular, the I/O stack software layers have already been recognized as a heavy burden on overall I/O, and efforts to alleviate this burden have been made. Recently, the spotlight has turned to the CPU. With computing capacity, as well as the means to get data to the processor, now being the limiting factor, recent studies have suggested pushing processing power to where the data resides. With devices such as 3D XPoint on the horizon, this phenomenon is expected to be aggravated.
In this paper, we focus on another component related to such changes. In particular, it has been observed that the bandwidth of the network that connects clients to storage servers is now being surpassed by storage bandwidth. Figure 1 shows the changes that are happening: changes in the storage interface are allowing storage bandwidth to surpass that of the network. As shown in Table 1, recent developments in SSDs have resulted in individual SSDs providing read and write bandwidth in the 5 GB/s and 3 GB/s range, respectively, which surpasses or is close to that of the 10/25/40 GbE (Gigabit Ethernet) links that comprise the majority of networks supported today.
Based on this observation, we revisit the organization of disk arrays. Specifically, we target write performance in all-flash arrays, which we interchangeably refer to as SSD arrays and which are emerging as a solution for high-end storage. As shown in Table 2, most major storage vendors carry such a solution, and these products employ many SSDs to achieve large capacity and high performance. Figure 2 shows how typical all-flash arrays would be connected to the network and the host. Our goal is to provide high, sustained, and consistent write performance in such a storage environment.
-
dc.identifier.bibliographicCitation USENIX Workshop on Hot Topics in Storage and File Systems -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/32752 -
dc.identifier.url https://www.usenix.org/conference/hotstorage17/program/presentation/kim -
dc.language English -
dc.publisher USENIX -
dc.title Managing Array of SSDs When the Storage Device is No Longer the Performance Bottleneck -
dc.type Conference Paper -
dc.date.conferenceDate 2017-07-10 -
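The storage-versus-network comparison in the abstract comes down to simple unit arithmetic: Ethernet line rates are quoted in gigabits per second, SSD bandwidth in gigabytes per second. A minimal sketch of that comparison, using only the nominal figures cited in the abstract (5 GB/s read and 3 GB/s write per SSD; 10/25/40 GbE networks); all names and numbers here are illustrative, not taken from the paper's artifacts:

```python
# Back-of-the-envelope comparison of SSD bandwidth vs. network line rate,
# using the nominal figures cited in the abstract.

SSD_READ_GBPS = 5.0   # GB/s per recent NVMe SSD (read), per the abstract
SSD_WRITE_GBPS = 3.0  # GB/s per recent NVMe SSD (write), per the abstract

def ethernet_gbytes_per_sec(gbe: float) -> float:
    """Convert a Gigabit Ethernet line rate (Gb/s) to GB/s (8 bits per byte)."""
    return gbe / 8.0

def array_write_bw(num_ssds: int) -> float:
    """Aggregate write bandwidth of an all-flash array,
    ignoring RAID/parity and controller overheads."""
    return num_ssds * SSD_WRITE_GBPS

if __name__ == "__main__":
    for gbe in (10, 25, 40):
        net = ethernet_gbytes_per_sec(gbe)
        faster = "storage faster" if SSD_WRITE_GBPS > net else "network faster or comparable"
        print(f"{gbe}GbE = {net:.2f} GB/s vs. one SSD writing {SSD_WRITE_GBPS} GB/s ({faster})")
```

Even a single SSD's 3 GB/s write bandwidth exceeds a 10 GbE link (1.25 GB/s) and approaches a 25 GbE link (3.125 GB/s), and an array of such SSDs multiplies the gap, which is the observation motivating the paper.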


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.