Related Researcher

Nam, Beomseok (남범석)

Detailed Information


In-memory Caching Orchestration for Hadoop

Author(s)
Kwak, Jaewon; Hwang, Eunji; Yoo, Tae-kyung; Nam, Beomseok; Choi, Young-Ri
Issued Date
2016-05-17
DOI
10.1109/CCGrid.2016.73
URI
https://scholarworks.unist.ac.kr/handle/201301/32801
Fulltext
http://ieeexplore.ieee.org/document/7515674/?arnumber=7515674
Citation
IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, pp.94 - 97
Abstract
In this paper, we investigate techniques to effectively orchestrate HDFS in-memory caching for Hadoop. We first evaluate the degree of benefit that each of various MapReduce applications gains from in-memory caching, i.e., its cache affinity. We then propose an adaptive cache-local scheduling algorithm that adjusts the time a MapReduce job waits in a queue for a cache-local node, setting the waiting time proportional to the percentage of the job's input data that is cached. We also develop a cache-affinity cache replacement algorithm that decides which blocks are cached and evicted based on the cache affinity of applications. Using various workloads consisting of multiple MapReduce applications, we conduct an experimental study to demonstrate the effects of the proposed in-memory orchestration techniques. Our experimental results show that our enhanced Hadoop in-memory caching scheme improves the performance of the MapReduce workloads by up to 18% and 10% over Hadoop with HDFS in-memory caching disabled and enabled, respectively.
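The two policies summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the `max_wait_s` parameter, and the per-application affinity table are illustrative assumptions that only mirror the stated ideas: a cache-local wait time proportional to the fraction of a job's input that is cached, and eviction of the block whose application benefits least from caching.

```python
from collections import namedtuple

# Hypothetical cached HDFS block: identified by id, owned by an application.
Block = namedtuple("Block", ["block_id", "app"])


def adaptive_wait_time(cached_bytes, total_input_bytes, max_wait_s=3.0):
    """Adaptive cache-local scheduling (sketch): how long a job waits in the
    queue for a cache-local node, proportional to the fraction of its input
    data already cached. max_wait_s is an assumed tunable upper bound."""
    if total_input_bytes == 0:
        return 0.0
    return max_wait_s * (cached_bytes / total_input_bytes)


def choose_victim(cached_blocks, cache_affinity):
    """Cache-affinity replacement (sketch): evict the cached block whose
    owning application has the lowest cache affinity, i.e., gains the least
    from in-memory caching."""
    return min(cached_blocks, key=lambda b: cache_affinity[b.app])
```

For example, a job with half of its input cached would wait `1.5` s under the default bound, and a block belonging to a low-affinity application would be chosen for eviction before one belonging to a high-affinity application.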
Publisher
IEEE/ACM
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.