The subject is a bit confusing - it's not the full request parsing but
just some helpers.
> +static int scsi_req_length(SCSIRequest *req, uint8_t *cmd)
> +{
> +    switch (cmd[0] >> 5) {
I know qemu code tends to be very uncommented, and the code this is
lifted from is too, but some comments on how this works would be nice.
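For anyone reading along, here is a commented sketch of what such a
helper typically does; it is reconstructed from the SCSI specification,
not lifted from the patch, and the function name is made up. The top
three bits of the opcode byte form the "group code", which determines
the CDB length:

/* Hypothetical illustration, not the code under review. */
static int scsi_cdb_length_sketch(const uint8_t *cmd)
{
    switch (cmd[0] >> 5) {          /* group code: top 3 bits of opcode */
    case 0:
        return 6;                   /* group 0: 6-byte CDBs */
    case 1:
    case 2:
        return 10;                  /* groups 1 and 2: 10-byte CDBs */
    case 4:
        return 16;                  /* group 4: 16-byte CDBs */
    case 5:
        return 12;                  /* group 5: 12-byte CDBs */
    default:
        return -1;                  /* reserved or vendor specific */
    }
}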
On Tue, Nov 17, 2009 at 11:17:46AM +0100, Gerd Hoffmann wrote:
>
> Signed-off-by: Gerd Hoffmann
Looks good,
Reviewed-by: Christoph Hellwig
On Tue, Nov 17, 2009 at 11:17:47AM +0100, Gerd Hoffmann wrote:
> +static void scsi_req_xfer_mode(SCSIRequest *req)
> +{
> +    switch (req->cmd.buf[0]) {
Having this as a void seems a bit odd to me. I'd make it return the
mode, and maybe just pass the cmd to it to make it clearer.
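A minimal sketch of that suggestion, assuming QEMU's SCSI_XFER_* enum
values and the usual opcode macros (the opcode list here is abbreviated
for illustration, not exhaustive):

static enum SCSIXferMode scsi_xfer_mode_sketch(const uint8_t *cmd)
{
    switch (cmd[0]) {
    case WRITE_6:
    case WRITE_10:
        return SCSI_XFER_TO_DEV;    /* data flows host to device */
    case READ_6:
    case READ_10:
        return SCSI_XFER_FROM_DEV;  /* data flows device to host */
    default:
        return SCSI_XFER_NONE;      /* no data phase */
    }
}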
> +static in
Looks good. Long term we might want to make this a pointer to be able
to deal with block format sense descriptors more nicely.
On Tue, Nov 17, 2009 at 11:17:49AM +0100, Gerd Hoffmann wrote:
>
> Signed-off-by: Gerd Hoffmann
Looks good,
Reviewed-by: Christoph Hellwig
On Tue, Nov 17, 2009 at 11:17:50AM +0100, Gerd Hoffmann wrote:
> Also add and use the scsi_req_complete() helper function for calling the
> completion callback.
>
> Signed-off-by: Gerd Hoffmann
Looks good,
Reviewed-by: Christoph Hellwig
On Tue, Nov 17, 2009 at 11:17:51AM +0100, Gerd Hoffmann wrote:
> Handy for debugging.
Yes, nice one, but what about getting rid of the ad-hoc printfs in
scsi-disk, too?
It seems like this series is now in qemu.git, but I still can't boot
using -kernel.
I'm starting qemu as:
/opt/qemu/bin/qemu-system-x86_64 \
-m 1500 -enable-kvm \
-kernel arch/x86/boot/bzImage \
-drive file=/dev/vg00/qemu-root,if=virtio,media=disk,cache=none,aio=threads
Looks good,
Reviewed-by: Christoph Hellwig
On Wed, Nov 18, 2009 at 12:15:10PM +0100, Kevin Wolf wrote:
> Checking for nbytes < 0 is pointless as long as it's a size_t. If we want to
> use negative numbers for error codes, we should use signed types.
Indeed, patch looks good.
Reviewed-by: Christoph Hellwig
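For anyone puzzled by the point: size_t is unsigned, so a negative
error code stored in it wraps to a huge positive value and the
comparison can never be true. A standalone illustration:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t nbytes = (size_t)-1;     /* "negative" error code in a size_t */
    if (nbytes < 0) {               /* always false; many compilers warn */
        puts("never reached");
    }
    printf("%zu\n", nbytes);        /* prints SIZE_MAX, not -1 */
    return 0;
}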
On Wed, Nov 18, 2009 at 01:55:48PM -0600, Anthony Liguori wrote:
> Did you rebuild qemu and make sure the new BIOS/roms were installed?
>
> >and it simply hangs with a black screen once the SDL window opens
> >
>
> I had this problem because I had not rebuilt qemu.
I did a make clean; ./config
On Wed, Nov 18, 2009 at 04:06:34PM -0600, Anthony Liguori wrote:
> I assume you set prefix with your configure as opposed to make install
> DESTDIR?
Yes. It's configured the following way:
./configure \
--target-list=x86_64-softmmu \
--kerneldir=/home/hch/work/linux-2.6 \
On Fri, Nov 20, 2009 at 11:53:41AM +0100, Alexander Graf wrote:
> Works great here:
>
> ./x86_64-softmmu/qemu-system-x86_64 -nographic -kernel ../bzImage \
>   -append console=ttyS0 -L pc-bios
>
> Are you sure you also have the follow-up linuxboot patch applied? The
> one "fixing BOCHS bios suppo
On Mon, Mar 11, 2019 at 09:11:53AM -0600, Keith Busch wrote:
> The implementation used blocks units rather than the expected bytes.
Thanks,
looks good:
Reviewed-by: Christoph Hellwig
And sorry for causing this mess.
On Fri, Dec 13, 2019 at 02:46:26PM +0000, Stefan Hajnoczi wrote:
> The Linux virtio_blk.ko guest driver is removing legacy SCSI passthrough
> support. Deprecate this feature in QEMU too.
>
> Signed-off-by: Stefan Hajnoczi
Fine with me as the original author:
Reviewed-by: Christoph Hellwig
Hi all,
qemu 7.2.0 fails to boot my usual test setup using -kernel (see
the actual script below). I've bisected this down to:
commit ffe2d2382e5f1aae1abc4081af407905ef380311
Author: Jason A. Donenfeld
Date: Wed Sep 21 11:31:34 2022 +0200
x86: re-enable rng seeding via SetupData
with thi
On Tue, Nov 16, 2021 at 10:58:30AM +0000, Stefan Hajnoczi wrote:
> Question for Jens and Christoph:
>
> Is there a way for userspace to detect whether a Linux block device
> supports SECDISCARD?
I don't know of one.
> If not, then maybe a new sysfs attribute can be added:
This looks correct, bu
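A sketch of how userspace could consume such an attribute if it were
added; the sysfs path below is a placeholder for whatever name the
patch settles on, not an interface the kernel defines:

#include <stdio.h>

/* Returns 1 if supported, 0 if not, -1 if the attribute is absent. */
static int supports_secdiscard(const char *disk)
{
    char path[256], buf[16];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/block/%s/queue/secure_erase", disk); /* placeholder name */
    f = fopen(path, "r");
    if (!f) {
        return -1;
    }
    if (!fgets(buf, sizeof(buf), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return buf[0] == '1';
}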
On Mon, Jun 27, 2022 at 01:47:28PM +0200, Niklas Cassel wrote:
> CRMS.CRWMS bit shall be set to 1 on controllers compliant with versions
> later than NVMe 1.4.
>
> The first version later than NVMe 1.4 is NVMe 2.0.
>
> Let's claim compliance with NVMe 2.0 such that a follow up patch can
> set
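For context, claiming NVMe 2.0 compliance in a controller model boils
down to bumping the Version (VS) register. The macro below follows the
spec's major/minor/tertiary layout; treat it as an illustration rather
than QEMU's actual helper:

#include <stdio.h>
#include <stdint.h>

/* VS register layout per the NVMe spec: major 31:16, minor 15:8,
 * tertiary 7:0. */
#define NVME_VERSION(mjr, mnr, ter) \
    ((uint32_t)(((mjr) << 16) | ((mnr) << 8) | (ter)))

int main(void)
{
    printf("NVMe 1.4 VS = 0x%08x\n", NVME_VERSION(1, 4, 0)); /* 0x00010400 */
    printf("NVMe 2.0 VS = 0x%08x\n", NVME_VERSION(2, 0, 0)); /* 0x00020000 */
    return 0;
}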
; `eui64=UINT64`.
Looks good:
Reviewed-by: Christoph Hellwig
VM launch, it is not spec compliant and is of
> little use since the UUID cannot be used reliably anyway and the
> behavior prior to this patch must be considered buggy.
>
> Reviewed-by: Keith Busch
> Signed-off-by: Klaus Jensen
Looks good:
Reviewed-by: Christoph Hellwig
On Thu, Sep 29, 2022 at 10:37:22AM -0600, Keith Busch wrote:
> I don't think so. Memory alignment and length granularity are two completely
> different concepts. If anything, the kernel's ABI had been that the length
> requirement was also required for the memory alignment, not the other way
> arou
Please don't do this. OCP is acting as a counter-standard to the
proper NVMe standard here and should in absolutely no way be supported
by open source projects that need to stick to the actual standards.
Please work with the NVMe technical working group to add this (very
useful) functionality to
Signed-off-by: Klaus Jensen
Looks good:
Reviewed-by: Christoph Hellwig
On Tue, Apr 19, 2022 at 02:10:36PM +0200, Klaus Jensen wrote:
> From: Klaus Jensen
>
> Unconditionally set an EUI64 for namespaces. The nvme-ns device defaults
> to auto-generating a persistent EUI64 if not specified, but for single
> namespace setups (-device nvme,drive=...), this does not happe
On Tue, Apr 19, 2022 at 02:10:38PM +0200, Klaus Jensen wrote:
> From: Klaus Jensen
>
> Do not default to generating a UUID for namespaces if it is not
> explicitly specified.
>
> This is technically a breaking change in behavior. However, since the
> UUID changes on every VM launch, it is not s
Looks good:
Reviewed-by: Christoph Hellwig
On Wed, Apr 20, 2022 at 07:51:32AM +0200, Klaus Jensen wrote:
> > So unlike the EUI, UUIDs are designed to be autogenerated even if the
> > current algorithm is completely broken. We'd just need to persist them.
> > Note that NVMe at least in theory requires providing at least one of
> > the unique
On Thu, Sep 21, 2023 at 02:41:54PM +0530, Kishon Vijay Abraham I wrote:
> > PCI Endpoint function driver is implemented using the PCIe Endpoint
> > framework, but it requires physical boards for testing, and it is difficult
> > to test sufficiently. In order to find bugs and hardware-dependent
> >
On Wed, Jul 12, 2023 at 10:28:00AM +0200, Stefano Garzarella wrote:
> The problem is that the SCSI stack does not send this command, so we
> should do it in the driver. In fact we do it for
> VIRTIO_SCSI_EVT_RESET_RESCAN (hotplug), but not for
> VIRTIO_SCSI_EVT_RESET_REMOVED (hotunplug).
No, you s
Looks good,
Reviewed-by: Christoph Hellwig
bug fix.
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index a0655a3..cef3bb4 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -954,7 +954,7 @@ static void nvme_class_init
NVMe 1.1 requires devices to implement a Namespace List subcommand of
the identify command. QEMU not only does not implement this feature, but
also misinterprets it as an Identify Controller request. Due to this,
any OS trying to use the Namespace List will fail the probe.
Signed-off-by: Christoph
Third resend of this series after it didn't get picked up the
previous times. The QEMU NVMe implementation mistakes the cns
field in the Identify command for a boolean. This was never
true, and is actively harmful since NVMe 1.1 (which the QEMU
device claims to support) supports more than two Identify data structures.
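To make the bug concrete, here is a sketch of the two interpretations;
the types, function names, and status constants are loosely modeled on
the QEMU NVMe code, not copied from it:

/* Buggy: CNS treated as a boolean, so CNS=0x02 (Namespace List)
 * falls into the Identify Controller path. */
static uint16_t nvme_identify_buggy(NvmeCtrl *n, NvmeIdentify *c)
{
    if (le32_to_cpu(c->cns)) {
        return nvme_identify_ctrl(n, c);
    }
    return nvme_identify_ns(n, c);
}

/* Correct per NVMe 1.1: CNS is an enumerated field. */
static uint16_t nvme_identify_fixed(NvmeCtrl *n, NvmeIdentify *c)
{
    switch (le32_to_cpu(c->cns) & 0xff) {
    case 0x00:
        return nvme_identify_ns(n, c);      /* Identify Namespace */
    case 0x01:
        return nvme_identify_ctrl(n, c);    /* Identify Controller */
    case 0x02:
        return nvme_identify_nslist(n, c);  /* Active Namespace List */
    default:
        return NVME_INVALID_FIELD | NVME_DNR;
    }
}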
From: Keith Busch
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 27 ++-
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/hw/block/nvme.c b/hw
Hi all,
this series implements two more NVMe commands: DSM and Write Zeroes.
Both trace their lineage to Keith's qemu-nvme.git repository, and
while the Write Zeroes one is taken from there almost literally,
the DSM one has seen a major rewrite to avoid blocking the main thread,
as well as various other
infrastructure properly to not block
the main thread on discard requests, and cleaned up a little bit.
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 87 +
hw/block/nvme.h | 1 +
2 files changed, 88 insertions(+)
diff --git a/hw/block/nvme.c b
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 26 ++
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index ae303d44e5
On Fri, May 05, 2017 at 11:30:11AM +0200, Paolo Bonzini wrote:
> could you pass BDRV_REQ_MAY_UNMAP for the flags here if the deallocate
> bit (dword 12 bit 25) is set?
In fact we should do that unconditionally. The deallocate bit is new
in 1.3 (which we don't claim to support) and forces deallocat
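Concretely, the suggestion amounts to something like the following in
the Write Zeroes handler; the field and callback names approximate
QEMU's NVMe request structures and should be read as a sketch:

uint32_t dw12 = le32_to_cpu(cmd->cdw12);
/* Deallocate bit (dword 12 bit 25); the reply above argues for
 * passing BDRV_REQ_MAY_UNMAP unconditionally instead. */
int flags = (dw12 & (1u << 25)) ? BDRV_REQ_MAY_UNMAP : 0;

req->aiocb = blk_aio_pwrite_zeroes(n->conf.blk, offset, count,
                                   flags, nvme_rw_cb, req);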
Signed-off-by: Keith Busch
[hch: ported over from qemu-nvme.git to mainline]
Signed-off-by: Christoph Hellwig
---
hw/block/nvme.c | 26 ++
hw/block/nvme.h | 1 +
2 files changed, 27 insertions(+)
Changes since v1:
- add BDRV_REQ_MAY_UNMAP flag
diff --git a/hw/block
On Fri, May 05, 2017 at 12:03:40PM +0200, Paolo Bonzini wrote:
> While that's allowed and it makes sense indeed on SSDs, for QEMU's
> typical usage it can lead to fragmentation and worse performance. On
> extent-based file systems, write zeroes without deallocate can be
> implemented very efficien
On Sun, Jul 31, 2016 at 04:52:16PM -0700, Ashish Mittal wrote:
> This patch adds support for a new block device type called "vxhs".
> Source code for the library that this code loads can be downloaded from:
> https://github.com/MittalAshish/libqnio.git
Do you also have a pointer to the server impl
On Tue, Jun 06, 2017 at 03:38:05PM +0800, Qu Wenruo wrote:
> Update nvme header to catch up with kernel.
> Most of the newly added members are from 1.2 and 1.3 spec, while the
> status code is only kept the same with kernel (around 1.1 spec).
>
> The major update is to add Scatter Gather List rela
Can you send a patch with just the PSDT flag check? The rest should
only be in an eventual patch to add SGL support.
This didn't seem to make it into mainline, does it need a ping?
On Thu, Nov 23, 2017 at 03:02:05PM +0100, Marc-André Lureau wrote:
> The following patch is going to use the symbol from the fw_cfg module,
> to call the function and write the note location details in the
> vmcoreinfo entry, so qemu can produce dumps with the vmcoreinfo note.
Sounds like fw_cfg s
On Tue, Mar 28, 2017 at 04:39:25PM +0800, Changpeng Liu wrote:
> Currently the virtio-blk driver does not provide a discard feature flag, so
> filesystems built on top of the block device will not send discard
> commands. This is okay for an HDD backend, but it will impact performance
> for SSD
On Tue, Oct 16, 2018 at 11:42:35PM +0530, Kirti Wankhede wrote:
> - Added vfio_device_migration_info structure to interact with the vendor
> driver.
There is no such thing as a 'vendor driver' in Linux - all drivers are
treated equal. And I don't see any single driver supporting this yet,
so yo
I think this driver is at entirely the wrong level.
If you want to expose pmem to a guest with flushing assist, do it
as pmem, not as a block driver.
On Tue, Oct 17, 2017 at 03:40:56AM -0400, Pankaj Gupta wrote:
> Are you saying to do it as the existing, i.e. ACPI pmem-like, interface?
> The reason we have created this new driver is that the existing pmem driver
> does not define proper semantics for guest flushing requests.
At this point I'm caring about the Linu
On Wed, Oct 18, 2017 at 08:51:37AM -0700, Dan Williams wrote:
> This use case is not "Persistent Memory". Persistent Memory is
> something you can map and make persistent with CPU instructions.
> Anything that requires a driver call is device driver managed "Shared
> Memory".
How is this any diffe
On Thu, Oct 19, 2017 at 11:21:26AM -0700, Dan Williams wrote:
> The difference is that nvdimm_flush() is not mandatory, and that the
> platform will automatically perform the same flush at power-fail.
> Applications should be able to assume that if they are using MAP_SYNC
> that no other coordinati
On Fri, Oct 20, 2017 at 08:05:09AM -0700, Dan Williams wrote:
> Right, that's the same recommendation I gave.
>
> https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg08404.html
>
> ...so maybe I'm misunderstanding your concern? It sounds like we're on
> the same page.
Yes, the above is
> The RISC-V QEMU port implements the following specifications:
> - RISC-V Instruction Set Manual Volume I: User-Level ISA Version 2.2
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.9.1
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.10
What is the reas
> +    if (env->priv_ver >= PRIV_VERSION_1_10_0) {
> +        if (get_field(env->satp, SATP_MODE) == VM_1_10_MBARE) {
> +            mode = PRV_M;
> +        }
> +    } else {
> +        if (get_field(env->mstatus, MSTATUS_VM) == VM_1_09_MBARE) {
> +            mode = PRV_M;
> +        }
> +    }
On Wed, Jan 03, 2018 at 01:44:15PM +1300, Michael Clark wrote:
> HTIF (Host Target Interface) provides console emulation for QEMU. HTIF
> allows identical copies of BBL (Berkeley Boot Loader) and linux to run
> on both Spike and QEMU. BBL provides HTIF console access via the
> SBI (Supervisor Binar
On Wed, Jan 10, 2018 at 03:46:19PM -0800, Michael Clark wrote:
> - RISC-V Instruction Set Manual Volume I: User-Level ISA Version 2.2
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.9.1
> - RISC-V Instruction Set Manual Volume II: Privileged ISA Version 1.10
Same question as
#ifdef CONFIG_USER_ONLY
int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
{
    return 0;
}

bool riscv_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
{
    return false;
}

int riscv_cpu_handle_mmu_fault(CPUState *cs, vaddr address,
                               int access_type, int mmu_idx)
{
    cs->e
On Fri, Jan 12, 2018 at 07:24:54AM +1300, Michael Clark wrote:
> I'm going to be restoring branches for bbl and riscv-linux that work against
> priv 1.9.1. There are still other emulators and RTL that support priv 1.9.1.
> Folk will have silicon against different versions of spec going forward.
> Like
On Mon, Feb 05, 2018 at 09:19:46AM +1300, Michael Clark wrote:
> BTW I've created branches in my own personal trees for Privileged ISA
> v1.9.1. These trees are what I use for v1.9.1 backward compatibility
> testing in QEMU:
>
> - https://github.com/michaeljclark/riscv-linux/tree/riscv-linux-4.6.2
s/KABI/UAPI/ in the subject and anywhere else in the series.
Please avoid __packed__ structures and just properly pad them; packed
structures have a major performance impact on some platforms and will
cause compiler warnings when taking addresses of members.
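An illustration of the point, with made-up member names:

#include <stdint.h>

/* Forcing byte packing misaligns the 64-bit member. */
struct info_packed {
    uint32_t flags;
    uint64_t offset;            /* starts at byte 4: unaligned */
} __attribute__((__packed__));

/* Explicit padding gives the same well-defined layout while keeping
 * every member naturally aligned and address-of warnings away. */
struct info_padded {
    uint32_t flags;
    uint32_t reserved;          /* explicit pad, keep zeroed */
    uint64_t offset;            /* starts at byte 8: aligned */
};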
On Wed, Apr 22, 2020 at 01:14:44PM -0400, Jon Derrick wrote:
> The two patches (Linux & QEMU) add support for passthrough VMD devices
> in QEMU/KVM. VMD device 28C0 already supports passthrough natively by
> providing the Host Physical Address in a shadow register to the guest
> for correct bridge
On Tue, May 04, 2021 at 02:59:07PM +0200, Greg Kroah-Hartman wrote:
> > Hi Christoph,
> >
> > FYI, these uapi changes break build of QEMU.
>
> What uapi changes?
>
> What exactly breaks?
>
> Why does QEMU require kernel driver stuff?
Looks like it pulls in the uapi struct definitions unconditionally.
On Fri, Nov 01, 2019 at 04:25:10PM +0100, Max Reitz wrote:
> The XFS kernel driver has a bug that may cause data corruption for qcow2
> images as of qemu commit c8bb23cbdbe32f. We can work around it by
> treating post-EOF fallocates as serializing up until infinity (INT64_MAX
> in practice).
This
On Mon, Jul 12, 2021 at 12:03:27PM +0100, Stefan Hajnoczi wrote:
> Why did you decide to implement -device nvme-mi as a device on
> TYPE_NVME_BUS? If the NVMe spec somehow requires this then I'm surprised
> that there's no NVMe bus interface (callbacks). It seems like this could
> just as easily be
On Thu, Apr 18, 2019 at 09:05:05AM -0700, Dan Williams wrote:
> > > I'd either add a comment about avoiding retpoline overhead here or just
> > > make ->flush == NULL mean generic_nvdimm_flush(). Just so that people
> > > don't
> > > get confused by the code.
> >
> > Isn't this premature optimizat