/datacenter-nvme-ssd-specification-v2-6-1-pdf
[3]: https://nvmexpress.org/specifications/
Signed-off-by: Stephen Bates
Co-developed-by: Joel Granados
---
docs/system/devices/nvme.rst |  7 +
hw/nvme/ctrl.c               | 59 [...]
hw/nvme/nvme.h               |  1 [...]
>Forgot to mention that, despite showing up, the device doesn't quite
>work: for example, it can't seem to acquire an IP address.
Andrea and Alistair
A lot of your issues look very similar to what I saw. The PCIe device can be
accessed via MMIO but interrupts are broken [...]
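A quick way to confirm that from inside the guest (assuming the namespace
shows up as /dev/nvme0n1; adjust to your setup):

# Interrupt counts that stay at zero while the dd completes point at a
# broken interrupt path rather than a broken device.
grep -i nvme /proc/interrupts
dd if=/dev/nvme0n1 of=/dev/null bs=4k count=1024
grep -i nvme /proc/interrupts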
Hi All
> because that's the one that makes PCI_HOST_GENERIC available to
> non-ARM architectures.
You are going to need PCI_HOST_GENERIC for sure to get the driver for the GPEX
host PCIe port. I did my testing on a 4.19-based kernel [1] and the .config is
here [2] (sorry, the link I sent for the [...]
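As a rough sanity check on the guest kernel config (option names from memory,
and the exact set varies by kernel version):

# Run from the kernel build tree:
grep -E 'CONFIG_PCI_HOST_GENERIC|CONFIG_PCI_MSI|CONFIG_BLK_DEV_NVME' .config
# You want to see, roughly:
#   CONFIG_PCI_HOST_GENERIC=y
#   CONFIG_PCI_MSI=y
#   CONFIG_BLK_DEV_NVME=y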
> I added e1000 and e1000e support to my kernel and changed the QEMU command to:
So using -device e1000e rather than -device e1000 seems to work. I am not sure
why -device e1000 causes a kernel panic. The MSI-X message is interesting and
may be related to why NVMe interrupts are not reaching the guest OS.
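If it helps narrow it down, the MSI-X state of the NVMe function can be
inspected from inside the guest (00:01.0 is a placeholder; use whatever
address lspci actually reports for the device):

# "MSI-X: Enable+" means the driver enabled MSI-X; Enable- or a masked
# vector table would be consistent with interrupts never arriving.
lspci -vv -s 00:01.0 | grep -i -A1 'msi-x'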
>Why do you need two networking options?
I don't need the e1000 for networking. The e1000 option is there to test
PCIe, since it implements a PCIe model of the e1000 NIC. Basically it's
another test path for your PCIe patches, and it was used for testing when
PCIe support was added to the arm virt model.
>> I plan to also try with a e1000 network interface model tomorrow and see how
>> that behaves
>
>Please do :)
I added e1000 and e1000e support to my kernel and changed the QEMU command to:
$QEMU -nographic \
-machine virt \
-smp 1 -m 8G \
-append "console=hvc0 [...]"
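For reference, a complete invocation along these lines would look roughly
like the sketch below; the kernel image, drive file, and device options are
illustrative placeholders, not my exact command line:

$QEMU -nographic \
-machine virt \
-smp 1 -m 8G \
-kernel vmlinux \
-append "console=hvc0 root=/dev/vda ro" \
-netdev user,id=net0 \
-device e1000e,netdev=net0 \
-drive file=nvme.img,if=none,format=raw,id=nvme0 \
-device nvme,drive=nvme0,serial=nvme-test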
>So it looks like you at least got to the point where the guest OS
>would find PCIe devices...
Yes, and in fact NVMe IO against those devices does succeed (I can write and
read the NVMe namespaces). It is just slow because the interrupts are not
getting to the OS, and hence NVMe timeouts are occurring.
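For what it's worth, the kernel's NVMe timeout handler polls the completion
queue when the interrupt never shows up, which is why the IO still completes;
the guest log should be full of messages along the lines of
"I/O ... QID ... timeout, completion polled":

# Watch for the timeout-then-polled pattern in the guest kernel log:
dmesg | grep -i 'nvme.*timeout'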
Andrea and Alistair
+Keith since he developed a lot of the NVMe drive model
>> Alistair Francis (5):
>> hw/riscv/virt: Increase the number of interrupts
>> hw/riscv/virt: Connect the gpex PCIe
>> riscv: Enable VGA and PCIE_VGA
>> hw/riscv/sifive_u: Connect the Xilinx PCIe
> This mail seems to have made it to the list, but the patch hasn't.
> Looking at the headers, you don't seem to use the same server for
> your normal mail client and git send-email. Maybe it's related to this?
My patches come through my gmail smtp client (via git send-email) so maybe
that's a problem.
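For comparison, a typical gmail setup for git send-email looks like this
(username is a placeholder):

git config --global sendemail.smtpServer smtp.gmail.com
git config --global sendemail.smtpServerPort 587
git config --global sendemail.smtpEncryption tls
git config --global sendemail.smtpUser someone@gmail.com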
> Awesome, this looks great!
>
> Acked-by: Keith Busch
Thanks Keith!
I still seem to be having issues getting my patches onto the qemu-* mailing
lists. Does anyone have any idea how I go about rectifying that?
Stephen
[...] is less reliable than DRAM...
Cheers
Stephen
> On Sep 12, 2016, at 7:28 PM, Fam Zheng wrote:
>
>> On Mon, 09/12 16:23, Stephen Bates wrote:
>> Hi
>
> Hi Stephen,
>
>>
>> I sent this to qemu-discuss with no success so resending to qemu-devel.
>>
[...] of memory in this case.
Cheers
Stephen Bates