Keir Fraser wrote:
> We could extend the concept of the interface shim we already have -- a
> set of OS-specific high performance shims, plus a fallback OS-agnostic
> shim.
Currently, the lack of a shim is the key difference between the VMI
and Xen approaches. Forgive me for summarizing, but I'm not sure this
has been made clear: the VMI is the interface between the OS and a
shim layer -- it is not a hypervisor interface. The kernel makes VMI
calls into the shim, and the shim makes hypercalls, if needed, to the
hypervisor.
      VMI                VMI native         Xen/Xen native
       OS                    OS                   OS
 ------VMI------       ------VMI------      ----HV API----
  Shim (ROM)
 ----HV API----
  Hypervisor             Native HW             Hypervisor
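
To make the calling convention concrete, here is a minimal C sketch
of the indirection. The names (vmi_ops, vmi_set_cr3, and so on) are
mine for illustration, not the actual VMI ROM API:

    /* Hypothetical table of entry points, filled in from the
     * shim (ROM) at boot.  The kernel binary only ever calls
     * through this table; it contains no hypercalls of its own. */
    struct vmi_ops {
            void (*set_cr3)(unsigned long pfn); /* load page table root */
            void (*irq_disable)(void);          /* "cli" when native */
    };

    static struct vmi_ops vmi;

    /* A VMI call as seen from the kernel: */
    static inline void vmi_set_cr3(unsigned long pfn)
    {
            vmi.set_cr3(pfn); /* shim decides: hypercall or native op */
    }

Under a hypervisor, the shim's set_cr3 would issue the appropriate
hypercall; the native shim just loads %cr3 directly. Either way the
kernel side is identical.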
The VMI isolates the kernel from the hypervisor so that the two can
evolve without hindering each other's development. The Xen approach
still tightly couples the hypervisor with the kernel. Coupling the
kernel and hypervisor together restricts their evolution and limits
people who want to run different operating systems (or different
versions of the same OS) on the same hypervisor. As Josh pointed out,
you can run a single VMI Linux kernel on more versions of the Xen
hypervisor than you can a single XenLinux kernel, because the VMI
does not require a tight coupling.
Tight coupling also means you end up using a hypervisor when running a
kernel natively (e.g. "supervisor mode kernel" in the unstable Xen
repository). So for the native case you get a level of indirection (the
hypervisor) that costs you performance, and for the virtual case you do
not get a level of indirection (a shim) that buys you compatibility and
diversity. For VMI it's the reverse: you get the level of indirection
in the virtual case and no indirection in the native case. The
alternative is to maintain separate kernels, with all the associated
costs, for these two cases.
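
To sketch how the native case avoids the indirection (again with
made-up names -- vmi_probe_rom() and the native_* helpers are
illustrative, not real interfaces), the kernel can probe for a shim
ROM at boot and fall back to native ops:

    struct vmi_rom {                /* what a shim ROM exports */
            void (*set_cr3)(unsigned long pfn);
            void (*irq_disable)(void);
    };

    static void __init vmi_init(void)
    {
            struct vmi_rom *rom = vmi_probe_rom(); /* scan ROM space */

            if (rom) {
                    /* virtual: shim entry points make hypercalls */
                    vmi.set_cr3     = rom->set_cr3;
                    vmi.irq_disable = rom->irq_disable;
            } else {
                    /* native: raw privileged instructions, no
                     * hypervisor anywhere in the path */
                    vmi.set_cr3     = native_set_cr3;
                    vmi.irq_disable = native_irq_disable;
            }
    }

A native boot therefore pays at most an indirect call (which could
also be patched inline), rather than a trip through a hypervisor.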
There are many places where the VMI and Xen patches overlap; the key
difference is that the VMI makes a distinction between the kernel and
the hypervisor interfaces. As others have pointed out, this distinction
buys you a lot in terms of compatibility, ease of maintenance, and the
ability to execute the same kernel in native and virtual environments
with high performance.
Which particular bits get in is less important than the decision of
whether or not the Linux community wants the kernel tightly coupled to
the hypervisor. Extending the hypercall page you already have to
decouple the hypervisor and kernel interfaces would be excellent.
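
For reference, my understanding of the existing hypercall page is
that the hypervisor writes a stub into every 32-byte slot of a
kernel-supplied page, and the kernel calls a fixed slot per hypercall
number -- roughly this (a sketch, not the exact Xen macros):

    extern char hypercall_page[4096]; /* stubs written by Xen at boot */

    /* Call slot 'op'; the stub leaves its result in %eax. */
    #define _hypercall0(op)                                     \
    ({                                                          \
            long __res;                                         \
            asm volatile ("call *%1"                            \
                          : "=a" (__res)                        \
                          : "r" (&hypercall_page[(op) * 32])    \
                          : "memory");                          \
            __res;                                              \
    })

Filling slots like these with OS-facing entry points behind a stable
interface -- a shim -- rather than raw hypercalls is exactly the kind
of decoupling I mean.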
Eli