Hi All,

I am using the IOMMU to receive packets both in the hypervisor and in a VM; KVM is used for the virtualization. However, after I pass the kernel options (IOMMU on and PCI realloc; see the sketch below), I can no longer receive packets in the hypervisor, although the VF inside the VM works fine.
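To be concrete, by "iommu and pci realloc" I mean boot parameters along these lines (a sketch; intel_iommu=on and pci=realloc are the spellings I believe I used, and the rest of the grub line is omitted):

  # /etc/default/grub, then update-grub (or grub2-mkconfig) and a reboot
  GRUB_CMDLINE_LINUX="... intel_iommu=on pci=realloc"

  # after reboot, confirm the options took effect
  cat /proc/cmdline
  dmesg | grep -i -e dmar -e iommu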
When I try to receive packets in the hypervisor, dmesg shows the following:

  ixgbe 0000:03:00.1: complete
  ixgbe 0000:03:00.1: PCI INT A disabled
  igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
  igb_uio 0000:03:00.1: setting latency timer to 64
  igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
  uio device registered with irq 57
  DRHD: handling fault status reg 2
  DMAR:[DMA Read] Request device [03:00.1] fault addr b9d0f000
  DMAR:[fault reason 02] Present bit in context entry is clear

And here is the lspci output for the device:

  03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)
          Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR Mezz
          Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
          Latency: 0, Cache Line Size: 64 bytes
          Interrupt: pin A routed to IRQ 38
          Region 0: Memory at d9400000 (64-bit, prefetchable) [size=4M]
          Region 2: I/O ports at ece0 [size=32]
          Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
          Expansion ROM at <ignored> [disabled]
          Capabilities: <access denied>
          Kernel driver in use: igb_uio
          Kernel modules: ixgbe

Note that the fault address (b9d0f000) does not fall within any of the device's BARs (e.g. Region 0 at d9400000), and the fault reason ("Present bit in context entry is clear") says the IOMMU has no mapping for the device's DMA request, so the kernel reports a DMAR fault. I am wondering why this happens.

One suspicion is the BIOS: I am currently on BIOS version 3.0, while the latest is 6.3.0. Could that be a factor? Any help appreciated!

Jinho
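P.S. In case it helps with diagnosis, this is how I can pull more detail from my side (a sketch; the iommu_group link only exists on kernels new enough to expose IOMMU groups):

  # which driver currently owns the port, and the available kernel modules
  lspci -k -s 03:00.1

  # devices sharing this port's IOMMU group, if the link is present
  ls /sys/bus/pci/devices/0000:03:00.1/iommu_group/devices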