Hi David, I am not an expert here, but I don't believe it would work without changes to KVM. My understanding is that you use an IOMMU in this fashion when you want to direct-map a device into a guest, for devices that do not already have IOMMU-like functionality built in. For instance, perhaps you want to assign an off-the-shelf Ethernet NIC to a guest. The IOMMU would serve to translate between guest-physical addresses (GPAs) and system-based DMA addresses. However, the hypervisor would still need to be involved in setting up this mapping on the IOMMU in the first place.
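To make that concrete, here is a very rough sketch of the setup side (all of the names here are made up for illustration; this is nothing like real KVM or IOMMU code). Think of the IOMMU as holding a page table indexed by the address the device puts on the bus; the hypervisor fills it in once, at device-assignment time, with GPA-to-host-physical translations:

    #define PAGE_SHIFT      12
    #define PAGE_MASK       (~((1UL << PAGE_SHIFT) - 1))
    #define PTE_PRESENT     0x1UL

    /* A flat, single-level table for illustration; real IOMMUs use
     * multi-level page tables, much like the CPU's MMU. */
    struct iommu_table {
            unsigned long *pte;     /* pte[gpa >> PAGE_SHIFT] = hpa | flags */
            unsigned long npages;   /* number of entries in pte[] */
    };

    /* Hypervisor-side setup, done once when the device is assigned:
     * map one contiguous guest-physical range to host-physical pages. */
    static int iommu_map_range(struct iommu_table *t, unsigned long gpa,
                               unsigned long hpa, unsigned long len)
    {
            unsigned long off;

            for (off = 0; off < len; off += 1UL << PAGE_SHIFT) {
                    unsigned long idx = (gpa + off) >> PAGE_SHIFT;

                    if (idx >= t->npages)
                            return -1;      /* range outside the table */
                    t->pte[idx] = ((hpa + off) & PAGE_MASK) | PTE_PRESENT;
            }
            return 0;
    }

The point is just that this population step is privileged work the hypervisor has to do up front; it is not something the guest can drive on its own.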
Okay, it's understandable that the initial setup of the mapping between virtual and actual addresses would be done by some OS (most likely the host). However, isn't the actual translation, once the guest starts and accesses the device, supposed to be handled by hardware? I would think performance wouldn't scale very well if the host OS had to maintain mappings and translate addresses every time a guest accessed a mapped device.
KVM (currently) virtualizes/emulates all components in the logical "system" presented to the guest. It doesn't yet support the notion of direct-mapping a physical component. I doubt you will have to wait too long for someone to add this feature, however :) It's just not there today (to my knowledge, anyway).
That's good to hear. :)
But to answer your question: when configured like this, the I/O subsystem in question should perform pretty close to native (at least in theory).
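Continuing the sketch from my earlier mail (again, purely illustrative, not real code): once the table is populated, the per-DMA path is just this lookup, performed inline by the IOMMU hardware, with no hypervisor or host software running on it:

    /* What the IOMMU hardware does on every DMA the device issues.
     * dma_addr is the guest-physical address the guest handed the
     * device; the result is the host-physical address actually
     * accessed.  No host/hypervisor code runs on this path. */
    static unsigned long iommu_translate(struct iommu_table *t,
                                         unsigned long dma_addr)
    {
            unsigned long idx = dma_addr >> PAGE_SHIFT;

            if (idx >= t->npages || !(t->pte[idx] & PTE_PRESENT))
                    return 0;       /* hardware raises a DMA fault instead */

            return (t->pte[idx] & PAGE_MASK) | (dma_addr & ~PAGE_MASK);
    }

That's why the overhead is roughly a page-table walk per transaction (amortized by the IOMMU's own TLB), rather than a trap into the host for every access.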
Hopefully you mean that the hardware is handling the mapping so that the host OS won't have to take the burden of translating a bunch of addresses all the time. Thanks,

- David Brown