
gration is similar to closing the lid on a laptop, moving the
laptop, and re-opening the lid. Without mobile IP [12] tech-
nology the change in network topology is not truly transpar-
ent, but the change can be handled smoothly by the guest
operating system.
VM migration is also tolerant of greater disparity be-
tween the source and target systems across which migra-
tion occurs. Independent of our work, Chen and Noble
have also made this observation recently [2]. For process
migration to succeed, there has to be a very close match
between host and target operating systems, language run-
time systems, and so on. In contrast, VM migration only
requires a compatible VMM and hardware architecture at
the target. The price for this greater flexibility is that VM
migration may involve the transfer of more state.
A distributed file system can serve as the transport mech-
anism for propagating suspended VM state across space and
time. The obvious approach would be to place the relevant
files in the shared name space of the distributed file system
and configure the VMM to access that name space directly.
However, we have identified a number of file system en-
hancements that may prove to be useful in an ISR environ-
ment (Section 4). Rather than attempt to implement these
enhancements in a distributed file system directly, we have
adopted a more flexible approach for early experimentation.
In this approach, the VMM only accesses files in the local
file system. VM files are explicitly copied into the local
file system during resume operations and out of the local
file system during suspend operations.
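The copyin/copyout step described above amounts to mirroring a directory tree between the shared name space and the local file system. A minimal sketch, assuming the suspended VM's files live under a mounted NFS path (all paths and function names here are illustrative, not the paper's actual scripts):

```python
import os
import shutil

def copyin(shared_dir, local_dir):
    """Copy suspended VM state from the distributed file system
    into the local file system (performed at resume)."""
    if os.path.exists(local_dir):
        shutil.rmtree(local_dir)          # discard any stale local copy
    shutil.copytree(shared_dir, local_dir)

def copyout(local_dir, shared_dir):
    """Propagate saved VM state back into the shared name space
    (performed at suspend)."""
    if os.path.exists(shared_dir):
        shutil.rmtree(shared_dir)         # replace the previous checkpoint
    shutil.copytree(local_dir, shared_dir)
```

Because the VMM only ever touches `local_dir`, any distributed file system that can hold ordinary files can serve as the transport, regardless of how faithfully it emulates POSIX semantics.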
Another benefit of the explicit copyin/copyout approach
is that we can experiment with several different distributed
file systems and accommodate their various idiosyncrasies.
As an example, many distributed file systems do not pre-
cisely emulate POSIX file system semantics for reasons
of scalability and performance. Consequently, the im-
plementations of many system calls may diverge in sub-
tle ways from their POSIX specifications. Our explicit
copyin/copyout approach decouples the implementation
complexities of distributed file systems and VMMs.
3 Initial Implementation
We have built an initial proof-of-concept implementation
using VMware Workstation 3.0 [7] as the VMM and NFS as
the distributed file system. VMware Workstation is a mod-
ern, commercial VMM which executes on a PC platform
and provides a VM abstraction identical to a PC. VMware
Workstation runs within a host operating system and relies
on it for common system services such as device manage-
ment. Both Linux and Windows 2000/XP are supported as
host operating systems.
VMware Workstation supports many operating systems
as guests including Windows 95/98, Windows 2000/XP, and
Linux. A user can configure many important parameters
that define the VM including the amount of memory, size
and arrangement of disk drives, and number of network
adapters. The video output of the VM may either appear
as a window on the desktop of the host or occupy the entire
host screen.
Note that, in our work, the distributed file system is used
to transport the VM state files; it is not necessarily visible
to the guest operating system. Similarly, while the VMM
stores the contents of a VM’s virtual disks in local files, it
is not capable of interpreting the data stored in those files.
The contents of the virtual disks, and hence the guest file
system, are opaque from the point-of-view of the VMM.
3.1 Test System
In our test environment we emulate a common VM usage
model: the host operating system is Linux and the guest
operating system is Windows XP. We have configured our
guest with 128 MB of main memory, a single Ethernet card,
and a 2 GB virtual disk. We consider this to be a modest
configuration. The memory and disk sizes are sufficiently
large that Windows XP runs out-of-the-box, but sufficiently
small that the host operating system may easily manage the
VM state files (Section 3.2).
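A VM of roughly this shape could be described by a VMware configuration file along the following lines. This is a hedged sketch: the keys shown are typical of VMware Workstation-era `.vmx` files, and the disk file name is illustrative, not taken from the paper.

```
memsize = "128"                 # 128 MB of guest main memory
guestOS = "winXPPro"            # Windows XP guest
ethernet0.present = "TRUE"      # single virtual Ethernet card
ide0:0.present = "TRUE"
ide0:0.fileName = "testvm.vmdk" # backing file for the 2 GB virtual disk
```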
The test system hardware and arrangement is depicted
in Figure 1. Assuming that our experimental VM, testvm,
is inactive (i.e. it has previously been suspended), the
VM state associated with testvm is stored on the server in
the NFS share, /export/testvm. If a user at Client
1 wishes to resume execution of the VM, she invokes a
simple Linux script which implements the copyin opera-
tion described in Section 2. This script retrieves the VM
state stored on the server and reconstructs that state in the
local file system of Client 1 under /tmp/testvm. Af-
ter the VM state has been reconstructed, the script launches
VMware. We refer to the copyin operation and the applica-
tion launch together as the “resume event.”
When the user is finished with testvm, she suspends the
VM. When the VMM exits, the copyout script copies the
VM state to the server, possibly altering the format of the
files in transit. The VM may now be migrated to Client
2, if desired. We use the term “suspend event” to refer to
the combination of local VM state save by the VMM and
copyout.
In our implementation, the copyout script divides the
VM state files into 16 MB chunks and stores them in the
NFS share. The copyin script reassembles file chunks into
the local file system (/tmp) on the client machine. We im-
plemented this chunking operation in anticipation of layer-
ing our work on other distributed file systems that do not
handle multi-gigabyte files very well.
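The chunking logic can be sketched as follows. This is an illustrative reimplementation, not the paper's actual scripts; the chunk naming scheme and the `chunk_size` parameter (defaulting to the 16 MB used in the paper) are assumptions:

```python
import os

def copyout(src_path, share_dir, chunk_size=16 * 1024 * 1024):
    """Split one VM state file into fixed-size chunks in the NFS share,
    so the distributed file system never sees a multi-gigabyte file."""
    os.makedirs(share_dir, exist_ok=True)
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            # Zero-padded index keeps lexicographic order equal to chunk order.
            with open(os.path.join(share_dir, f"chunk.{index:04d}"), "wb") as out:
                out.write(chunk)
            index += 1

def copyin(share_dir, dst_path):
    """Reassemble the chunks into a single file in the local file system."""
    with open(dst_path, "wb") as dst:
        for name in sorted(os.listdir(share_dir)):
            with open(os.path.join(share_dir, name), "rb") as part:
                dst.write(part.read())
```

Fixed-size chunks also leave room for later optimizations such as transferring only the chunks that changed since the previous suspend.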
Proceedings of the Fourth IEEE Workshop on Mobile Computing Systems and Applications (WMCSA’02)
0-7695-1647-5/02 $17.00 © 2002 IEEE