Eric W. Biederman wrote:
> I don't even want to think about how a kernel module gets far enough
> into the kernel to be affected by our vector layout. These are internal
> implementation details, without anything exported to modules.
>
> Can I please see the source of the code in vmware that is doing this?
Sorry, that code is not part of the kernel or any kernel module. It is
part of a fixed set of assumptions about the platform coded in the
hypervisor, which is not open source. This code runs completely outside
the scope of Linux, and uses a platform-dependent set of IDT software
vectors which are known not to collide with IDT IRQ vectors. We use
these software vectors for internal purposes; they are never visible to
any Linux software, but are handled and trapped by the hypervisor.
Nevertheless, since we must distinguish between software IRQs and
hardware IRQs, we must find vectors that do not collide with the set of
hardware IRQs or processor exceptions.
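
To make the constraint concrete, here is a minimal sketch of the kind of
fixed layout assumption being described (the hypervisor source itself is
not public, so the range arguments are hypothetical). On x86, processor
exceptions occupy vectors 0x00-0x1f, and Linux assigns hardware IRQ
vectors starting above that:

/*
 * Hypothetical check of the kind of assumption described above:
 * a vector is only usable for software purposes if it collides with
 * neither a processor exception nor a hardware IRQ vector.
 */
#define NUM_EXCEPTION_VECTORS 0x20  /* x86 exceptions: vectors 0x00-0x1f */

static int vector_is_free_for_sw_use(int vector,
                                     int first_hw_vector, int last_hw_vector)
{
        if (vector < NUM_EXCEPTION_VECTORS)
                return 0;   /* collides with a processor exception */
        if (vector >= first_hw_vector && vector <= last_hw_vector)
                return 0;   /* collides with a hardware IRQ vector */
        return 1;           /* safe to use as a software vector */
}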
To avoid this dependence on fixed assumptions about vector layout, what
is needed is a mechanism to reserve and allocate software IDT vectors.
It could be a GPL'd interface; it is certainly interfacing with the
kernel at a low level.
An interface would likely look something like:
int idt_allocate_swirq(int best_irq);
void idt_release_swirq(int irq);
int __init idt_reserve_irqs(int count);
void idt_set_swirq_handler(int irq, int is_user,
                           void (*handle)(struct pt_regs *regs,
                                          unsigned long error_code));

EXPORT_SYMBOL_GPL(idt_allocate_swirq);
EXPORT_SYMBOL_GPL(idt_release_swirq);
EXPORT_SYMBOL_GPL(idt_set_swirq_handler);
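
A minimal sketch of how the allocation side might be implemented,
assuming a contiguous block of vectors set aside at boot; everything
beyond the proposed prototypes (including FIRST_FREE_VECTOR and
MAX_SWIRQS) is hypothetical:

#include <linux/bitops.h>
#include <linux/spinlock.h>

#define MAX_SWIRQS        32      /* upper bound on reservable vectors */
#define FIRST_FREE_VECTOR 0x50    /* hypothetical; stand-in for whatever
                                     the arch code calls its first free
                                     IDT slot */

static DEFINE_SPINLOCK(swirq_lock);
static unsigned long swirq_used[BITS_TO_LONGS(MAX_SWIRQS)];
static int swirq_base = -1;       /* first reserved vector, set at boot */
static int swirq_count;

/* Boot-time reservation: record a block of free IDT slots for
 * software use before device IRQ vectors are handed out. */
int __init idt_reserve_irqs(int count)
{
        if (count > MAX_SWIRQS)
                return -1;
        swirq_base = FIRST_FREE_VECTOR;
        swirq_count = count;
        return swirq_base;
}

int idt_allocate_swirq(int best_irq)
{
        int bit;

        spin_lock(&swirq_lock);
        /* Honor the caller's preferred vector if it is still free. */
        if (best_irq >= swirq_base && best_irq < swirq_base + swirq_count &&
            !test_bit(best_irq - swirq_base, swirq_used)) {
                set_bit(best_irq - swirq_base, swirq_used);
                spin_unlock(&swirq_lock);
                return best_irq;
        }
        /* Otherwise hand out the first free vector in the block. */
        bit = find_first_zero_bit(swirq_used, swirq_count);
        if (bit >= swirq_count) {
                spin_unlock(&swirq_lock);
                return -1;        /* no software vectors left */
        }
        set_bit(bit, swirq_used);
        spin_unlock(&swirq_lock);
        return swirq_base + bit;
}

void idt_release_swirq(int irq)
{
        spin_lock(&swirq_lock);
        clear_bit(irq - swirq_base, swirq_used);
        spin_unlock(&swirq_lock);
}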
Now you can set aside a fixed number of IRQs to be used for software
IRQs at boot time, and allocate them as required. You can even create
software IRQs which can be handled by userspace applications, or reserve
software IRQs for other uses: from within the kernel itself, or from
outside any kernel context (for example, an IPI invoked from a
non-kernel CPU). There are cases where this would be a useful feature
for us; being able to issue IPIs directly to a hypervisor-mode CPU would
be a significant speedup. Alternatively, a kernel module could handle
the IPI when the CPU is in kernel mode, schedule the vmx process to run,
and forward the IPI to it.
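
As a usage sketch, a kernel module built on the proposed interface might
look like the following (the module and handler names are hypothetical,
and the prototypes above are assumed to be visible via some header):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/ptrace.h>

static int vmx_swirq = -1;

/* Kernel-mode handler: runs when the software vector is raised while
 * the CPU is in kernel mode, e.g. to wake the vmx process so it can
 * forward the IPI. */
static void vmx_swirq_handler(struct pt_regs *regs,
                              unsigned long error_code)
{
        /* wake_up_process(vmx_task); -- hypothetical */
}

static int __init vmx_swirq_init(void)
{
        vmx_swirq = idt_allocate_swirq(-1);   /* no preferred vector */
        if (vmx_swirq < 0)
                return -EBUSY;
        idt_set_swirq_handler(vmx_swirq, 0 /* kernel, not user */,
                              vmx_swirq_handler);
        return 0;
}

static void __exit vmx_swirq_exit(void)
{
        idt_release_swirq(vmx_swirq);
}

module_init(vmx_swirq_init);
module_exit(vmx_swirq_exit);
MODULE_LICENSE("GPL");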
The thought of running non-kernel code in ring-0 on some CPU is scary,
certainly. Nevertheless, it is required for running a hypervisor which
does not live in the kernel address space and must handle its own page
faults and other exceptions.
Zach