TY - GEN
T1 - Fast live migration with small IO performance penalty by exploiting SAN in parallel
AU - Akiyama, Soramichi
AU - Hirofuchi, Takahiro
AU - Takano, Ryousei
AU - Honiden, Shinichi
N1 - Publisher Copyright:
© 2014 IEEE.
Copyright:
Copyright 2015 Elsevier B.V., All rights reserved.
PY - 2014/12/3
Y1 - 2014/12/3
N2 - Virtualization techniques greatly benefit cloud computing. Live migration enables a datacenter to dynamically relocate virtual machines (VMs) without disrupting the services running on them. Efficient live migration is the key to improving the energy efficiency and resource utilization of a datacenter through dynamic VM placement. Recent studies have achieved efficient live migration by dropping the page cache of the guest OS to shrink its memory footprint before a migration. However, these studies do not address the IO performance penalty that arises after a migration because the page cache has been lost. We propose an advanced memory transfer mechanism for live migration that skips transferring the page cache to shorten the total migration time, while restoring it from the disk blocks via the SAN, transparently to the guest OS, to avoid the IO performance penalty. When a migration starts, our mechanism collects the mapping between the page cache and the corresponding disk blocks. During the migration, the source host skips the page cache and transfers only the remaining memory content, while the destination host reads the same data as the page cache from the corresponding disk blocks via the SAN. Experiments with web server and database workloads showed that our mechanism reduced the total migration time with only a small IO performance penalty.
AB - Virtualization techniques greatly benefit cloud computing. Live migration enables a datacenter to dynamically relocate virtual machines (VMs) without disrupting the services running on them. Efficient live migration is the key to improving the energy efficiency and resource utilization of a datacenter through dynamic VM placement. Recent studies have achieved efficient live migration by dropping the page cache of the guest OS to shrink its memory footprint before a migration. However, these studies do not address the IO performance penalty that arises after a migration because the page cache has been lost. We propose an advanced memory transfer mechanism for live migration that skips transferring the page cache to shorten the total migration time, while restoring it from the disk blocks via the SAN, transparently to the guest OS, to avoid the IO performance penalty. When a migration starts, our mechanism collects the mapping between the page cache and the corresponding disk blocks. During the migration, the source host skips the page cache and transfers only the remaining memory content, while the destination host reads the same data as the page cache from the corresponding disk blocks via the SAN. Experiments with web server and database workloads showed that our mechanism reduced the total migration time with only a small IO performance penalty.
KW - cloud performance
KW - live migration
KW - virtualization
UR - http://www.scopus.com/inward/record.url?scp=84919787158&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84919787158&partnerID=8YFLogxK
U2 - 10.1109/CLOUD.2014.16
DO - 10.1109/CLOUD.2014.16
M3 - Conference contribution
AN - SCOPUS:84919787158
T3 - IEEE International Conference on Cloud Computing, CLOUD
SP - 40
EP - 47
BT - Proceedings - 2014 IEEE 7th International Conference on Cloud Computing, CLOUD 2014
A2 - Kesselman, Carl
PB - IEEE Computer Society
T2 - 7th IEEE International Conference on Cloud Computing, CLOUD 2014
Y2 - 27 June 2014 through 2 July 2014
ER -