Re: [Qemu-discuss] [Qemu-devel] Virtio-9p

2016-03-30 Thread Greg Kurz
On Wed, 30 Mar 2016 14:10:38 +0200
Pradeep Kiruvale  wrote:

> Hi All,
> 
> Does the virtio-9p-pci device only support fsdev devices? I am trying to use
> the -drive option to apply QoS to a block device using the virtio-9p-pci
> device, but I am failing to create/add a device other than fsdev. Can you
> please help me with this?
> 
> Regards,
> Pradeep

Hi Pradeep,

I'm not sure I understand what you want to do, but I confirm that
virtio-9p-pci only supports fsdev... if you want a block device, why don't
you use virtio-blk-pci?
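
For reference, a rough sketch of the two approaches (paths and ids below are
made up for illustration):

# block device for the guest, with QoS via the -drive throttling options
qemu-system-x86_64 ... \
  -drive file=/dev/sdb,if=none,id=disk0,format=raw,throttling.bps-write=8388608 \
  -device virtio-blk-pci,drive=disk0

# shared directory for the guest: virtio-9p-pci only takes an fsdev backend
qemu-system-x86_64 ... \
  -fsdev local,id=fs0,path=/srv/share,security_model=none \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=share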

Cheers.

--
Greg




Re: [Qemu-discuss] [Qemu-devel] Virtio-9p

2016-03-31 Thread Greg Kurz
On Wed, 30 Mar 2016 16:27:48 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 

Hi Pradeep,

> Thanks for the reply.
> 
> Let me put it this way, virtio-blk-pci is used for block IO on the devices
> shared between the guest and the host.

I don't really understand the "devices shared between the guest and the
host" wording... virtio-blk-pci exposes a virtio-blk device through PCI
to the guest. The virtio-blk device can be backed by a file or a block
device from the host.

> Here I want to share the file and have QoS between the guests. So I am
> using virtio-9p-pci.
> 

What file ?

> Basically I want to have QoS for virtio-9p-pci.
> 

Can you provide a more detailed scenario of the result you want to reach?

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> On 30 March 2016 at 16:13, Greg Kurz  wrote:
> 
> > On Wed, 30 Mar 2016 14:10:38 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi All,
> > >
> > > Does the virtio-9p-pci device only support fsdev devices? I am trying
> > > to use the -drive option to apply QoS to a block device using the
> > > virtio-9p-pci device, but I am failing to create/add a device other
> > > than fsdev. Can you please help me with this?
> > >
> > > Regards,
> > > Pradeep
> >
> > Hi Pradeep,
> >
> > I'm not sure I understand what you want to do, but I confirm that
> > virtio-9p-pci only supports fsdev... if you want a block device, why
> > don't you use virtio-blk-pci?
> >
> > Cheers.
> >
> > --
> > Greg
> >
> >




Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Thu, 7 Apr 2016 11:48:27 +0200
Pradeep Kiruvale  wrote:

> Hi All,
> 
> I am using virtio-9p for sharing a file between host and guest. To test
> the shared file I do read/write operations in the guest. To have controlled
> io, I am using cgroup blkio.
> 
> While using cgroup I am facing two issues. Please find the issues below.
> 
> 1. When I do IO throttling using the cgroup, the read throttling works fine
> but the write throttling does not work. It still bypasses the throttling
> control and runs at the default rate. Am I missing something here?
> 

Hi,

Can you provide details on your blkio setup ?

> I use the following commands to create VM, share the files and to
> read/write from guest.
> 
> *Create vm*
> qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> -enable-kvm -parallel  -fsdev
> local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate -device
> virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> 
> *Mount file*
> mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> sync
> 
> touch /sdb1_ext4/dddrive
> 
> *Write test*
> dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> dd.log 2>&1 && sync
> 
> *Read test*
> dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> 
> 2. The other issue is that when I run the "dd" command inside the guest it
> creates multiple threads to write/read. I can see those on the host using
> iotop. Is this expected behavior?
> 

Yes. QEMU uses a thread pool to handle 9p requests.
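
For instance, you can list the worker threads of a running QEMU from the
host (assuming a single QEMU instance; the thread count varies with load):

  $ ls /proc/$(pidof qemu-system-x86_64)/task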

> Regards,
> Pradeep

Cheers.

--
Greg




Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 11:51:05 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> Thanks for your reply.
> 
> Below is how I add the limit to blkio
> 
> echo "8:16 8388608" >
> /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> 

Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
tasks in the test cgroup... but what about the tasks themselves ?

> The problem, I guess, is adding these task IDs to the "tasks" file in cgroup
> 

Exactly. :)

> These threads are started randomly, and even when I add the PIDs to the
> tasks file the cgroup still does not do IO control.
> 

How did you get the PIDs ? Are you sure these threads you have added to the
cgroup are the ones that write to /dev/sdb ?

> Is it possible to reduce the number of threads? I see a different number
> of threads doing IO on different runs.
> 

AFAIK, no.

Why don't you simply start QEMU in the cgroup? Unless I'm missing something,
all child threads, including the 9p ones, will be in the cgroup and honor
the throttle settings.

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> On 8 April 2016 at 10:10, Greg Kurz  wrote:
> 
> > On Thu, 7 Apr 2016 11:48:27 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi All,
> > >
> > > I am using virtio-9p for sharing a file between host and guest. To test
> > > the shared file I do read/write operations in the guest. To have
> > > controlled io, I am using cgroup blkio.
> > >
> > > While using cgroup I am facing two issues. Please find the issues below.
> > >
> > > 1. When I do IO throttling using the cgroup, the read throttling works
> > > fine but the write throttling does not work. It still bypasses the
> > > throttling control and runs at the default rate. Am I missing something
> > > here?
> > >
> >
> > Hi,
> >
> > Can you provide details on your blkio setup ?
> >
> > > I use the following commands to create VM, share the files and to
> > > read/write from guest.
> > >
> > > *Create vm*
> > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> > > -enable-kvm -parallel  -fsdev
> > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > -device
> > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > >
> > > *Mount file*
> > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> > > sync
> > >
> > > touch /sdb1_ext4/dddrive
> > >
> > > *Write test*
> > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> > > dd.log 2>&1 && sync
> > >
> > > *Read test*
> > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > >
> > > 2. The other issue is that when I run the "dd" command inside the guest
> > > it creates multiple threads to write/read. I can see those on the host
> > > using iotop. Is this expected behavior?
> > >
> >
> > Yes. QEMU uses a thread pool to handle 9p requests.
> >
> > > Regards,
> > > Pradeep
> >
> > Cheers.
> >
> > --
> > Greg
> >
> >




Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 14:55:29 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> Find my replies inline
> 
> >
> > > Below is how I add the limit to blkio
> > >
> > > echo "8:16 8388608" >
> > > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> > >
> >
> > Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> > tasks in the test cgroup... but what about the tasks themselves ?
> >
> > > The problem, I guess, is adding these task IDs to the "tasks" file in
> > > cgroup
> > >
> >
> > Exactly. :)
> >
> > > These threads are started randomly, and even when I add the PIDs to the
> > > tasks file the cgroup still does not do IO control.
> > >
> >
> > How did you get the PIDs ? Are you sure these threads you have added to the
> > cgroup are the ones that write to /dev/sdb ?
> >
> 
> *Yes, I get PIDs from /proc/Qemu_PID/task*
> 

And then you echoed the PIDs to /sys/fs/cgroup/blkio/test/tasks ?

This is racy... another IO thread may be started to do some work on /dev/sdb
just after you've read PIDs from /proc/Qemu_PID/task, and it won't be part
of the cgroup.

> 
> 
> >
> > > Is it possible to reduce the number of threads? I see a different
> > > number of threads doing IO on different runs.
> > >
> >
> > AFAIK, no.
> >
> > Why don't you simply start QEMU in the cgroup? Unless I'm missing
> > something, all child threads, including the 9p ones, will be in the
> > cgroup and honor the throttle settings.
> >
> 
> 
> *I started the qemu with cgroup as below*
> 
> *cgexec -g blkio:/test qemu...*
> *Is there any other way of starting the qemu in cgroup?*
> 

Maybe you can pass --sticky to cgexec to prevent cgred from moving
child tasks to other cgroups...

There's also the old-fashioned method:

# echo $$ > /sys/fs/cgroup/blkio/test/tasks
# qemu ...

This being said, QEMU is a regular userspace program that is completely cgroup
agnostic. It won't behave differently from 'dd if=/dev/sdb of=/dev/null'.

This really doesn't look like a QEMU related issue to me.

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> >
> > > Regards,
> > > Pradeep
> > >
> >
> > Cheers.
> >
> > --
> > Greg
> >
> > >
> > > On 8 April 2016 at 10:10, Greg Kurz  wrote:
> > >
> > > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I am using virtio-9p for sharing a file between host and guest. To
> > > > > test the shared file I do read/write operations in the guest. To
> > > > > have controlled io, I am using cgroup blkio.
> > > > >
> > > > > While using cgroup I am facing two issues. Please find the issues
> > > > > below.
> > > > >
> > > > > 1. When I do IO throttling using the cgroup, the read throttling
> > > > > works fine but the write throttling does not work. It still bypasses
> > > > > the throttling control and runs at the default rate. Am I missing
> > > > > something here?
> > > > >
> > > >
> > > > Hi,
> > > >
> > > > Can you provide details on your blkio setup ?
> > > >
> > > > > I use the following commands to create VM, share the files and to
> > > > > read/write from guest.
> > > > >
> > > > > *Create vm*
> > > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128
> > -smp 1
> > > > > -enable-kvm -parallel  -fsdev
> > > > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > > -device
> > > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > > >
> > > > > *Mount file*
> > > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> > 2>>dd.log &&
> > > > > sync
> > > > >
> > > > > touch /sdb1_ext4/dddrive
> > > > >
> > > > > *Write test*
> > > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> > oflag=direct >>
> > > > > dd.log 2>&1 && sync
> > > > >
> > > > > *Read test*
> > > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > > >
> > > > > 2. The other issue is that when I run the "dd" command inside the
> > > > > guest it creates multiple threads to write/read. I can see those on
> > > > > the host using iotop. Is this expected behavior?
> > > > >
> > > >
> > > > Yes. QEMU uses a thread pool to handle 9p requests.
> > > >
> > > > > Regards,
> > > > > Pradeep
> > > >
> > > > Cheers.
> > > >
> > > > --
> > > > Greg
> > > >
> > > >
> >
> >




Re: [Qemu-discuss] [Qemu-devel] iolimits for virtio-9p

2016-04-27 Thread Greg Kurz
On Wed, 27 Apr 2016 16:39:58 +0200
Pradeep Kiruvale  wrote:

> On 27 April 2016 at 10:38, Alberto Garcia  wrote:
> 
> > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> >
> > > Thanks for the reply. I am still in the early phase, I will let you
> > > know if any changes are needed for the APIs.
> > >
> > > We might also have to implement throttle-group.c for 9p devices, if
> > > we want to apply throttling to a group of devices.
> >
> > Fair enough, but again please note that:
> >
> > - throttle-group.c is not meant to be generic, but it's tied to
> >   BlockDriverState / BlockBackend.
> > - it is currently being rewritten:
> >   https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> >
> > If you can explain your use case with a bit more detail we can try to
> > see what can be done about it.
> >
> >
> We want to use virtio-9p for block io instead of virtio-blk-pci. But in
> case of

9p is mostly aimed at sharing files... why would you want to use it for
block io instead of a true block device ? And how would you do that ?

> virtio-9p we can just use fsdev devices, so we want to apply throttling
> (QoS) to these devices, and as of now io throttling is only possible with
> the -drive option.
> 

Indeed.

> As a workaround we are doing the throttling using cgroup. It has its own
> costs.

Can you elaborate ?

> So, we want to have throttling for fsdev devices inside qemu itself. I am
> just trying to understand and estimate the time required to implement it
> for the fsdev devices.
> 

I still don't clearly understand what you are trying to do... maybe provide
a more detailed scenario.

> 
> -Pradeep

Cheers.

--
Greg




Re: [Qemu-discuss] [Qemu-devel] iolimits for virtio-9p

2016-05-02 Thread Greg Kurz
On Thu, 28 Apr 2016 11:45:41 +0200
Pradeep Kiruvale  wrote:

> On 27 April 2016 at 19:12, Greg Kurz  wrote:
> 
> > On Wed, 27 Apr 2016 16:39:58 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > On 27 April 2016 at 10:38, Alberto Garcia  wrote:
> > >
> > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> > > >
> > > > > Thanks for the reply. I am still in the early phase, I will let you
> > > > > know if any changes are needed for the APIs.
> > > > >
> > > > > We might also have to implement throttle-group.c for 9p devices, if
> > > > > we want to apply throttling to a group of devices.
> > > >
> > > > Fair enough, but again please note that:
> > > >
> > > > - throttle-group.c is not meant to be generic, but it's tied to
> > > >   BlockDriverState / BlockBackend.
> > > > - it is currently being rewritten:
> > > >   https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > >
> > > > If you can explain your use case with a bit more detail we can try to
> > > > see what can be done about it.
> > > >
> > > >
> > > We want to use virtio-9p for block io instead of virtio-blk-pci. But in
> > > case of
> >
> > 9p is mostly aimed at sharing files... why would you want to use it for
> > block io instead of a true block device ? And how would you do that ?
> >
> 
> *Yes, we want to share the files themselves. So we are using virtio-9p.*

You want to pass a disk image to the guest as a plain file on a 9p mount ?
And then, what do you do in the guest ? Attach it to a loop device ?

> *We want to have QoS on these files access for every VM.*
> 

You won't be able to have QoS on selected files, but it may be possible to
introduce limits at the fsdev level: control all write accesses to all files
and all read accesses to all files for a 9p device.
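
Purely as an illustration, such an fsdev-level knob could look like this on
the command line (hypothetical syntax, this option does not exist today):

  -fsdev local,id=fs0,path=/mnt/sdb1,security_model=none,throttling.bps-write=8388608 \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=fs0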

> 
> >
> > > virtio-9p we can just use fsdev devices, so we want to apply throttling
> > > (QoS) to these devices, and as of now io throttling is only possible
> > > with the -drive option.
> > >
> >
> > Indeed.
> >
> > > As a workaround we are doing the throttling using cgroup. It has its own
> > > costs.
> >
> > Can you elaborate ?
> >
> 
> *We saw that we need to create and set up cgroups, and we also observed a
> lot of iowaits compared to implementing the throttling inside qemu.*
> *We observed this by using virtio-blk-pci devices. (Using cgroups vs
> qemu throttling)*
> 

Just to be sure I get it right.

You tried both:
1) run QEMU with -device virtio-blk-pci and -drive throttling.*
2) run QEMU with -device virtio-blk-pci in its own cgroup

And 1) has better performance and is easier to use than 2) ?

And what do you expect with 9p compared to 1) ?

> 
> Thanks,
> Pradeep




Re: [Qemu-discuss] [Qemu-devel] iolimits for virtio-9p

2016-05-04 Thread Greg Kurz
On Mon, 2 May 2016 17:49:26 +0200
Pradeep Kiruvale  wrote:

> On 2 May 2016 at 14:57, Greg Kurz  wrote:
> 
> > On Thu, 28 Apr 2016 11:45:41 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > On 27 April 2016 at 19:12, Greg Kurz  wrote:
> > >
> > > > On Wed, 27 Apr 2016 16:39:58 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > On 27 April 2016 at 10:38, Alberto Garcia  wrote:
> > > > >
> > > > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> > > > > >
> > > > > > > Thanks for the reply. I am still in the early phase, I will let
> > > > > > > you know if any changes are needed for the APIs.
> > > > > > >
> > > > > > > We might also have to implement throttle-group.c for 9p devices,
> > > > > > > if we want to apply throttling to a group of devices.
> > > > > >
> > > > > > Fair enough, but again please note that:
> > > > > >
> > > > > > - throttle-group.c is not meant to be generic, but it's tied to
> > > > > >   BlockDriverState / BlockBackend.
> > > > > > - it is currently being rewritten:
> > > > > >
> > https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > > > >
> > > > > > If you can explain your use case with a bit more detail we can
> > > > > > try to see what can be done about it.
> > > > > >
> > > > > >
> > > > > We want to use virtio-9p for block io instead of virtio-blk-pci.
> > > > > But in case of
> > > >
> > > > 9p is mostly aimed at sharing files... why would you want to use it for
> > > > block io instead of a true block device ? And how would you do that ?
> > > >
> > >
> > > *Yes, we want to share the files themselves. So we are using virtio-9p.*
> >
> > You want to pass a disk image to the guest as a plain file on a 9p mount ?
> > And then, what do you do in the guest ? Attach it to a loop device ?
> >
> 
> Yes, we would like to mount it as a 9p drive, create a file inside it, and
> read/write.
> This is the experiment we are doing; I have no idea about the actual use
> case. My job is to do a feasibility test to see whether it works or not.
> 
> 
> >
> > > *We want to have QoS on these files access for every VM.*
> > >
> >
> > You won't be able to have QoS on selected files, but it may be possible to
> > introduce limits at the fsdev level: control all write accesses to all
> > files and all read accesses to all files for a 9p device.
> >
> 
> That is right, I do not want to have QoS for individual files but for the
> whole fsdev device.
> 
> 
> > >
> > > >
> > > > > virtio-9p we can just use fsdev devices, so we want to apply
> > > > > throttling (QoS) to these devices, and as of now io throttling is
> > > > > only possible with the -drive option.
> > > > >
> > > >
> > > > Indeed.
> > > >
> > > > > As a workaround we are doing the throttling using cgroup. It has
> > > > > its own costs.
> > > >
> > > > Can you elaborate ?
> > > >
> > >
> > > *We saw that we need to create and set up cgroups, and we also observed
> > > a lot of iowaits compared to implementing the throttling inside qemu.*
> > > *We observed this by using virtio-blk-pci devices. (Using cgroups vs
> > > qemu throttling)*
> > >
> >
> 
> 
> >
> > Just to be sure I get it right.
> >
> > You tried both:
> > 1) run QEMU with -device virtio-blk-pci and -drive throttling.*
> > 2) run QEMU with -device virtio-blk-pci in its own cgroup
> >
> > And 1) has better performance and is easier to use than 2) ?
> >
> > And what do you expect with 9p compared to 1) ?
> >
> >
> That was just to understand the cpu cost of io throttling inside qemu
> vs using cgroup.
> 
> We did the benchmarking to reproduce the numbers and understand the cost
> mentioned in
> 
> http://www.linux-kvm.org/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
> 
> Thanks,
> Pradeep
> 

Ok. So you did compare current QEMU block I/O throttling with cgroup, and
you observed numbers similar to the ones in the link above?

And now you would like to run the same test on a file in a 9p mount with
experimental 9p QoS?

It may then be possible to reuse the throttle.h API and hack v9fs_write()
and v9fs_read() in 9p.c.
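
Very roughly, something along these lines could be called from v9fs_read()
and v9fs_write() before servicing each request. This is only a sketch: the
throttle.h signatures are from memory and vary across QEMU versions, and
the FsThrottle container is made up for illustration:

/* made-up per-fsdev throttle state (uses qemu/throttle.h + coroutines) */
typedef struct FsThrottle {
    ThrottleState ts;
    ThrottleTimers tt;
    ThrottleConfig cfg;
    CoQueue throttled_reqs[2];    /* [0]: read, [1]: write */
} FsThrottle;

/* runs in coroutine context, like the rest of the 9p request handling */
static void coroutine_fn fsdev_co_throttle_request(FsThrottle *fst,
                                                   bool is_write,
                                                   ssize_t bytes)
{
    if (throttle_enabled(&fst->cfg)) {
        /* sleep if the bucket is empty or other requests are queued */
        if (throttle_schedule_timer(&fst->ts, &fst->tt, is_write) ||
            !qemu_co_queue_empty(&fst->throttled_reqs[is_write])) {
            qemu_co_queue_wait(&fst->throttled_reqs[is_write]);
        }
        /* account this request's bytes against the configured limits */
        throttle_account(&fst->ts, is_write, bytes);
        /* wake up the next queued request if there is budget left */
        qemu_co_queue_next(&fst->throttled_reqs[is_write]);
    }
}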

Cheers.

--
Greg

> 
> > >
> > > Thanks,
> > > Pradeep
> >
> >




Re: [Qemu-discuss] [Qemu-devel] iolimits for virtio-9p

2016-05-06 Thread Greg Kurz
On Fri, 6 May 2016 08:01:09 +0200
Pradeep Kiruvale  wrote:

> On 4 May 2016 at 17:40, Greg Kurz  wrote:
> 
> > On Mon, 2 May 2016 17:49:26 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > On 2 May 2016 at 14:57, Greg Kurz  wrote:
> > >
> > > > On Thu, 28 Apr 2016 11:45:41 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > On 27 April 2016 at 19:12, Greg Kurz 
> > wrote:
> > > > >
> > > > > > On Wed, 27 Apr 2016 16:39:58 +0200
> > > > > > Pradeep Kiruvale  wrote:
> > > > > >
> > > > > > > On 27 April 2016 at 10:38, Alberto Garcia 
> > wrote:
> > > > > > >
> > > > > > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale
> > wrote:
> > > > > > > >
> > > > > > > > > Thanks for the reply. I am still in the early phase, I will
> > > > > > > > > let you know if any changes are needed for the APIs.
> > > > > > > > > know if any changes are needed for the APIs.
> > > > > > > > >
> > > > > > > > > We might also have to implement throttle-group.c for 9p
> > > > > > > > > devices, if we want to apply throttling to a group of devices.
> > > > > > > >
> > > > > > > > Fair enough, but again please note that:
> > > > > > > >
> > > > > > > > - throttle-group.c is not meant to be generic, but it's tied to
> > > > > > > >   BlockDriverState / BlockBackend.
> > > > > > > > - it is currently being rewritten:
> > > > > > > >
> > > > https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > > > > > >
> > > > > > > > If you can explain your use case with a bit more detail we
> > > > > > > > can try to see what can be done about it.
> > > > > > > >
> > > > > > > >
> > > > > > > We want to use  virtio-9p for block io instead of virtio-blk-pci.
> > > > But in
> > > > > > > case of
> > > > > >
> > > > > > 9p is mostly aimed at sharing files... why would you want to use
> > > > > > it for block io instead of a true block device ? And how would you
> > > > > > do that ?
> > > > > >
> > > > >
> > > > > *Yes, we want to share the files themselves. So we are using
> > > > > virtio-9p.*
> > > >
> > > > You want to pass a disk image to the guest as a plain file on a 9p
> > > > mount ? And then, what do you do in the guest ? Attach it to a loop
> > > > device ?
> > >
> > > Yes, we would like to mount it as a 9p drive, create a file inside it,
> > > and read/write.
> > > This is the experiment we are doing; I have no idea about the actual
> > > use case. My job is to do a feasibility test to see whether it works
> > > or not.
> > >
> > >
> > > >
> > > > > *We want to have QoS on these files access for every VM.*
> > > > >
> > > >
> > > > You won't be able to have QoS on selected files, but it may be
> > > > possible to introduce limits at the fsdev level: control all write
> > > > accesses to all files and all read accesses to all files for a 9p
> > > > device.
> > > >
> > >
> > > That is right, I do not want to have QoS for individual files but for
> > > the whole fsdev device.
> > >
> > >
> > > > >
> > > > > >
> > > > > > > virtio-9p we can just use fsdev devices, so we want to apply
> > > > > > > throttling (QoS) to these devices, and as of now io throttling
> > > > > > > is only possible with the -drive option.
> > > > > > >
> > > > > >
> > > > > > Indeed.
> > > > > >
> > > > > > > As a workaround we are doing the throttling using cgroup. It
> > > > > > > has its own costs.
> > > > > >
> > > > > > Can you elaborate ?
> > > > > >
> > > > >
> > > > > *We saw tha

Re: [Qemu-discuss] [Qemu-devel] Adding throttling to virtio-9p

2016-06-07 Thread Greg Kurz
On Tue, 7 Jun 2016 10:20:51 +0200
Pradeep Kiruvale  wrote:
> Hi All,
> 
> I am trying to add throttling to the virtio-9p devices using the throttle
> APIs that already exist in qemu.
> 
> I need help to understand the device model and where to add the throttling.
> I have been digging through the code for a week or so but have failed to
> understand how to tell the driver (virtio-9p) about the throttle enablement
> for this specific fs device.
> 
> I am planning to enable and configure the throttle for that specific device
> in int qemu_fsdev_add(QemuOpts *opts) in the qemu-fsdev.c file.
> 
> After that I would like to add the throttle facility just to
> virtio-9p-local.c (i.e. read/write calls).
> Though there are other drivers, I want to add it only to this specific
> driver as of now.
> 
> This is where I am missing the link: how to find out from the device
> configuration whether throttling is enabled for this specific device, and
> carry out the operations accordingly.
> 
> Please help me to understand.
> 

Drivers can register a parse_opts operation to handle specific command line
options. Since you want to make this feature specific to the local driver,
you should handle the options in local_parse_opts(), not in
qemu_fsdev_add().
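
For reference, a rough sketch of what the local driver hook could look like
(the exact signature differs between QEMU versions, and the throttling
option name and the field it is stored in are made up for illustration):

static int local_parse_opts(QemuOpts *opts, struct FsDriverEntry *fse)
{
    /* ... existing parsing of security_model, path, etc ... */

    /* hypothetical option name and FsDriverEntry field */
    fse->throttle_bps_write =
        qemu_opt_get_number(opts, "throttling.bps-write", 0);
    return 0;
}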

> Regards,
> Pradeep

Cheers.

--
Greg



Re: [Qemu-discuss] [Qemu-devel] virtfs / 9p - and "Permission denied" in guests

2016-09-02 Thread Greg Kurz
On Fri, 2 Sep 2016 13:39:05 +0100
lejeczek  wrote:

> hi devel, sorry to bother you but I tried "users" without any
> luck and am hoping to get some help here.
> 
> I'm trying to pass through the host's filesystem, a first time for me but
> a pretty regular setup, and guests mount that mount tag. I can list the
> mountpoint's content in a guest, I see files & dirs,
> but when I try to add/create/remove content in the
> mountpoint it gets denied:
> 
> $ touch DDD
> touch: setting times of `DDD': No such file or directory
> 

This is strange indeed. Was the 'DDD' file already present in the directory ?

> in libvirt:
> 
>  <filesystem ...>
>    <source dir='/__.aLocalStorages/2/__.home.usersSecondHome'/>
>    <address type='pci' ... slot='0x09' function='0x0'/>
>  </filesystem>
> 

I do not use libvirt so often and I'm not sure how this gets translated
to QEMU arguments. Can you try to reproduce with QEMU directly (following
the instructions at http://wiki.qemu-project.org/Documentation/9psetup) ?
Or at least provide the QEMU command line generated by libvirt ?
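
Something along these lines (fsdev id, mount tag and security model chosen
arbitrarily):

qemu-system-x86_64 ... \
  -fsdev local,id=fs0,path=/__.aLocalStorages/2/__.home.usersSecondHome,security_model=passthrough \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=usersSecondHome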

> I believe I fished out all SE denials and the logs are now clear
> of any sealerts.
> It suggests "time" but time seems in sync between host and guest.

The "setting times" part seems to indicate that "touch" could at least open
or create the file, but that utimensat() then failed... and ENOENT indicates
a problem with the file path, not the time.

> Would you have some suggestions how to troubleshoot?

Maybe you can provide the output of 'strace touch foo' where 'foo' does not
already exist?

> many thanks.
> L
> 

Cheers.

--
Greg



Re: [Qemu-discuss] Install multiple forks of QEMU

2016-09-07 Thread Greg Kurz
On Wed, 7 Sep 2016 10:28:26 -0500
"Stephen Bates"  wrote:

> Hi
> 
> Apologies in advance if this information is already available but a search
> of the WWW and the qemu mailing archives did not yield anything:
> 
> I am working on a few different topics (RISC-V, NVMe, PMEM) and need to
> have multiple forks of QEMU on my system at the same time. Is there a way
> of installing qemu in different locations and have them play nicely
> together? The ./configure does not mention an install location directive,
> nor does the Makefile.
> 

Hmm... I have a bunch of QEMUs installed on my system for development purposes:

$ ./configure --help |& grep '\--prefix'
  --prefix=PREFIX  install in PREFIX [/usr/local]
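
For example, each fork can get its own prefix and its own build directory
(paths and target made up):

  mkdir build-nvme && cd build-nvme
  ../configure --prefix=/opt/qemu/nvme --target-list=x86_64-softmmu
  make && make install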

Cheers.

--
Greg

> Ideally I would like to install the upstream version of qemu at the
> default location and install the forks in /opt/qemu/. Another
> option I can see is to use docker containers for each one but I would like
> to avoid that if possible.
> 
> If I work out how to do this and if the documentation in this area *is*
> lacking I'll be happy to submit a patch to update the documents.
> 
> Thanks!
> 
> Stephen Bates
> 




Re: [Qemu-discuss] How to build latest stable version of QEMU?

2016-09-14 Thread Greg Kurz
On Wed, 14 Sep 2016 15:09:52 +0300
Utku Gültopu  wrote:

> Sorry for this very basic question but I wanted to make sure I am doing
> this correctly.
> 

Hi,

> If I want to build QEMU’s latest stable version (instead of the latest 
> development version), am I supposed to follow the following command line 
> sequence?
> 
>   git clone git://git.qemu-project.org/qemu.git

cd qemu

;)

>   git checkout stable-2.6
>   mkdir build
>   cd build
>   ../configure
>   make
> 
> Best regards

This is correct, but it will build all the targets and may take some time.
If you need fewer targets or even one, you can pass the --target-list option
to configure (see configure --help for the full list).
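
For instance, to build only the x86_64 system emulator:

  ../configure --target-list=x86_64-softmmu
  make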

Cheers.

--
Greg



Re: [Qemu-discuss] Accessing a shared folder

2017-08-30 Thread Greg Kurz
On Wed, 30 Aug 2017 09:28:57 + (UTC)
Mahmood  wrote:

> >Could you please try to replace the -virtfs option with these two options:
> >
>  >-fsdev local,id=shared,path=/home/mahmood/Downloads \
>  >-device virtio-9p-pci,fsdev=shared,mount_tag=Downloads  
> 
> 
> 
> 
> Still get the same error!
> 
> mahmood@cluster:qemu-vm$ qemu-system-x86_64 -m 4000 -cpu Opteron_G5 -smp 2 
> -hda centos7server.img -boot c  -usbdevice tablet -enable-kvm -device 
> e1000,netdev=host_files -netdev user,net=10.0.2.0/24,id=host_files -fsdev 
> local,id=shared,path=/home/mahmood/Downloads -device 
> virtio-9p-pci,fsdev=shared,mount_tag=Downloads
> qemu-system-x86_64: -device virtio-9p-pci,fsdev=shared,mount_tag=Downloads: 
> Parameter 'driver' expects device type
> mahmood@cluster:qemu-vm$
> 

Hi,

Both -virtfs and -fsdev/-device syntaxes work for me with the current QEMU
master branch :) Where's your qemu-system-x86_64 binary coming from ?

Cheers,

--
Greg

PS: I'm on vacation. I'll be fully available next week.


> 
> 
> 
> Regards,
> Mahmood
> 





Re: [Qemu-discuss] Accessing a shared folder

2017-08-30 Thread Greg Kurz
On Wed, 30 Aug 2017 12:17:22 +0200
Thomas Huth  wrote:

> On 30.08.2017 12:11, Greg Kurz wrote:
> [...]
> > Hi,
> > 
> > Both -virtfs and -fsdev/-device syntaxes work for me with the current QEMU
> > master branch :) Where's your qemu-system-x86_64 binary coming from ?  
> 
> There is at least one problem with -virtfs if you forget to specify
> the "security_model=xxx" option:
> 
> $ x86_64-softmmu/qemu-system-x86_64 -virtfs 
> local,id=shared,path=/tmp,mount_tag=tag
> qemu-system-x86_64: util/qemu-option.c:547: opt_set: Assertion `opt->str' 
> failed.
> Aborted (core dumped)
> 

Yeah, we should print out that security_model is missing instead of dumping
core... :-\

> According to the qemu-doc, the security_model is optional, so it
> should be possible to run qemu without it, too, shouldn't it?
> 

Hmm... the documentation is a bit misleading. We indeed have:

-virtfs fsdriver[,path=path],mount_tag=mount_tag[,security_model=security_model]
[,writeout=writeout][,readonly][,socket=socket|sock_fd=sock_fd]

but the description of security_model says: 

Security model is mandatory only for local fsdriver. Other fsdrivers (like
handle, proxy) don't take security model as a parameter.

The same goes for the proxy fsdriver which needs socket or sock_fd, and doesn't 
use
path.

Should we have a -virtfs line for each fsdriver ?

>  Thomas





Re: [Qemu-discuss] Accessing a shared folder

2017-08-30 Thread Greg Kurz
On Wed, 30 Aug 2017 14:35:00 + (UTC)
Mahmood  wrote:

> OK. I reconfigured 2.9.0 with --enable-virtfs. Please note:
> 1- If I use -virtfs option, I get 
>  qemu-option.c:547: opt_set: Assertion `opt->str' failed
> 

If you use the local fsdriver, security_model is mandatory with -virtfs just
like it is with -fsdev.
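
For instance (mount tag and security model chosen arbitrarily):

qemu-system-x86_64 ... \
  -virtfs local,id=shared,path=/home/mahmood/Downloads,mount_tag=Downloads,security_model=none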

> 2- If I use -fsdev and -device, then I *must* use security_model
> 

True, as indicated in the QEMU manpage:

"Security model is mandatory only for local fsdriver.
 Other fsdrivers (like handle, proxy) don't take security model
 as a parameter."

> 3- If I use -fsdev and -device and security_model, then the guest boots 
> normally.
> 
> I haven't tried to see whether I am able to access the shared folder or
> not. Do you have any notes on the above items?
> 
> 
> Regards,
> Mahmood
> 

Cheers,

--
Greg




Re: [Qemu-discuss] virtio-scsi really slow init with ArchLinux kernel

2018-07-11 Thread Greg Kurz
On Tue, 10 Jul 2018 08:53:00 -0400
Chris  wrote:

> I'm getting a 15 second delay on every VM boot when using the
> ArchLinux kernel and using the virtio-scsi-pci system.
> 
> QEMU emulator version 2.12.0 running on Arch Linux (4.17.4 kernel),
> booting the same.
> 
> I run qemu like so:
> 
> qemu-system-x86_64 \
>-nodefaults \
>-machine type=pc,accel=kvm -smp cores=2,threads=1 -cpu host -vga
> vmware -m 2G \
>-device virtio-scsi-pci,id=scsi0,num_queues=2 \
>-drive 
> id=hdroot,file=archlinux.qcow2,if=none,media=disk,cache=unsafe,format=qcow2
> \
>-device scsi-hd,drive=hdroot
> 
> See the 15 second hang here:
> 
> [0.577018] scsi host2: Virtio SCSI HBA
> [0.578413] scsi 2:0:0:0: Direct-Access QEMU QEMU HARDDISK
>   2.5+ PQ: 0 ANSI: 5
> [1.333550] tsc: Refined TSC clocksource calibration: 2800.036 MHz
> [1.335351] clocksource: tsc: mask: 0xffffffffffffffff max_cycles:
> 0x285c62b0192, max_idle_ns: 440795270636 ns
> [   17.134876] sd 2:0:0:0: Power-on or device reset occurred
> [   17.137683] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks:
> (53.7 GB/50.0 GiB)
> [   17.139845] sd 2:0:0:0: [sda] Write Protect is off
> [   17.140791] sd 2:0:0:0: [sda] Mode Sense: 63 00 00 08
> [   17.140921] sd 2:0:0:0: [sda] Write cache: enabled, read cache:
> enabled, doesn't support DPO or FUA
> [   17.143968]  sda: sda1 sda2
> 
> This is specific to the Arch Linux kernel and the virtio-scsi system.
> IDE boots fast. If I boot an Ubuntu kernel with the exact same
> virtio-scsi settings then there is no delay for the "Power-on or
> device reset occurred" and it boots instantly.
> 
> Anyone know what is going on or what I can do to debug this?
> 
> Thanks
> 

I've been observing a similar delay on ppc64 with fedora28 guests:

# dmesg | egrep 'scsi| sd '
[1.530946] scsi host0: Virtio SCSI HBA
[1.532452] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK2.5+ 
PQ: 0 ANSI: 5
[   21.928378] sd 0:0:0:0: Power-on or device reset occurred
[   21.930012] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   21.931554] sd 0:0:0:0: [sda] 83886080 512-byte logical blocks: (42.9 
GB/40.0 GiB)
[   21.931929] sd 0:0:0:0: [sda] Write Protect is off
[   21.933110] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
[   21.934084] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
doesn't support DPO or FUA
[   21.943566] sd 0:0:0:0: [sda] Attached SCSI disk

Kernel version is 4.16.16-300.fc28.ppc64. And I cannot reproduce the
issue with other distros that have an older kernel, eg, ubuntu 18.04
with kernel 4.15.0-23-generic.

My first guess is that it might be a kernel-side regression introduced
in 4.16... maybe bisect ?
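
Something like this in the guest kernel tree, rebuilding and rebooting the
guest at each step:

  git bisect start
  git bisect bad v4.16
  git bisect good v4.15
  # boot the kernel, check dmesg for the delay, then mark the revision
  # with `git bisect good` or `git bisect bad` and repeat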

Cheers,

--
Greg



Re: [Qemu-discuss] virtio-scsi really slow init with ArchLinux kernel

2018-07-13 Thread Greg Kurz
On Wed, 11 Jul 2018 13:45:09 -0400
Chris  wrote:

> On Wed, Jul 11, 2018 at 12:43 PM, Greg Kurz  wrote:
> > I've been observing a similar delay on ppc64 with fedora28 guests:
> >
> > # dmesg | egrep 'scsi| sd '
> > [1.530946] scsi host0: Virtio SCSI HBA
> > [1.532452] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK
> > 2.5+ PQ: 0 ANSI: 5
> > [   21.928378] sd 0:0:0:0: Power-on or device reset occurred
> > [   21.930012] sd 0:0:0:0: Attached scsi generic sg0 type 0
> > [   21.931554] sd 0:0:0:0: [sda] 83886080 512-byte logical blocks: (42.9 
> > GB/40.0 GiB)
> > [   21.931929] sd 0:0:0:0: [sda] Write Protect is off
> > [   21.933110] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
> > [   21.934084] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
> > doesn't support DPO or FUA
> > [   21.943566] sd 0:0:0:0: [sda] Attached SCSI disk
> >
> > Kernel version is 4.16.16-300.fc28.ppc64. And I cannot reproduce the
> > issue with other distros that have an older kernel, eg, ubuntu 18.04
> > with kernel 4.15.0-23-generic.
> >
> > My first guess is that it might be a kernel-side regression introduced
> > in 4.16... maybe bisect ?  
> 
> Interesting. I just tried kernel 4.17.5 from the mainline ppa on
> Ubuntu 18.04 and now there is a delay. It's only 7.5 seconds but still
> noticeable. There was previously no delay with the 4.15 kernel.
> 
> Definitely seems like it could be something introduced in kernel 4.16.
> 
> Chris
> 

Bisect led me to this commit, merged in 4.16:

commit b5b6e8c8d3b4cbeb447a0f10c7d5de3caa573299
Author: Ming Lei 
Date:   Tue Mar 13 17:42:42 2018 +0800

scsi: virtio_scsi: fix IO hang caused by automatic irq vector affinity

Since commit 84676c1f21e8ff5 ("genirq/affinity: assign vectors to all
possible CPUs") it is possible to end up in a scenario where only
offline CPUs are mapped to an interrupt vector.

This is only an issue for the legacy I/O path since with blk-mq/scsi-mq
an I/O can't be submitted to a hardware queue if the queue isn't mapped
to an online CPU.

Fix this issue by forcing virtio-scsi to use blk-mq.


Also, I realized I don't see the issue if I start QEMU with -smp 1.

I'll continue digging but any suggestion is welcome :)

Cheers,

--
Greg



Re: [Qemu-discuss] How to convert from ifconfig to ip ?

2018-07-27 Thread Greg Kurz
On Fri, 27 Jul 2018 09:23:36 +0200
Pierre Couderc  wrote:

> Thank you very much, Pascal.
> 
> When I compare what is working for you :
> 
> 4: tap0: <...> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
>     link/ether 7e:89:46:3d:b0:d4 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.1.1/24 scope global tap0
>        valid_lft forever preferred_lft forever
> 
> 
>    what is working for me (XP VM):
> 
> 4: tap0: <...> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
>     link/ether 7e:2f:8b:e5:a3:1e brd ff:ff:ff:ff:ff:ff
>     inet 192.168.164.1/24 brd 192.168.164.255 scope global tap0
>        valid_lft forever preferred_lft forever
> 
> and what is *NOT* working for me:
> 
> 3: tap0: <...> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
>     link/ether 3a:07:06:79:0a:c5 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.164.1/32 scope global tap0
>        valid_lft forever preferred_lft forever
>     inet6 fec0::3807:6ff:fe79:ac5/64 scope site deprecated mngtmpaddr dynamic
>        valid_lft 36345sec preferred_lft 0sec
>     inet6 fe80::3807:6ff:fe79:ac5/64 scope link
>        valid_lft forever preferred_lft forever
> 
> the difference seems to be the presence of IPv6 in the 'bad' case...
> 

And the /24 mask... maybe try to do the same as Pascal:

ip address add 192.168.164.1/24 dev tap0

> 
> On 07/27/2018 09:02 AM, Pascal wrote:
> > hello, this is working for me :
> >
> > # ip link show tap0
> > Device "tap0" does not exist.
> >
> > # ip tuntap add mode tap dev tap0 group kvm
> >
> > # ip link show tap0
> > 4: tap0: <...> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
> >     link/ether 7e:89:46:3d:b0:d4 brd ff:ff:ff:ff:ff:ff
> >
> > # ip link set dev tap0 up
> >
> > # ip link show tap0
> > 4: tap0: <...> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
> >     link/ether 7e:89:46:3d:b0:d4 brd ff:ff:ff:ff:ff:ff
> >
> > # ip addr add 192.168.1.1/24 dev tap0
> >
> > # ip addr show tap0
> > 4: tap0: <...> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
> >     link/ether 7e:89:46:3d:b0:d4 brd ff:ff:ff:ff:ff:ff
> >     inet 192.168.1.1/24 scope global tap0
> >        valid_lft forever preferred_lft forever
> >
> > # ping -c 4 192.168.1.1
> > PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
> > 64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.021 ms
> > 64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.015 ms
> > 64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.019 ms
> > 64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.015 ms
> >
> > --- 192.168.1.1 ping statistics ---
> > 4 packets transmitted, 4 received, 0% packet loss, time 3107ms
> > rtt min/avg/max/mdev = 0.015/0.017/0.021/0.005 ms
> >
> > regards.
> >
> > 2018-07-26 19:19 GMT+02:00 Pierre Couderc:
> >
> > My bridge for qemu is started with :
> >
> > sysctl net.ipv4.ip_forward=1
> > tunctl -t tap0 -u nous
> > ifconfig tap0 192.168.164.1 up
> > iptables...
> >
> > I have replaced the ifconfig line with ip:
> >
> > ip link set tap0 up
> > ip address add 192.168.164.1 dev tap0
> >
> > but it fails (no ping 192.168.164.1) in the vm.
> >
> > What do I miss ?
> >
> > Thanks
> > PC
> >
> >
> >
> >  
> 




Re: [Qemu-discuss] Slow boot in QEMU with virtio-scsi disks

2018-08-22 Thread Greg Kurz
On Sat, 11 Aug 2018 19:39:56 +0200
Oleksandr Natalenko  wrote:

> Hi.
> 
> On 11.08.2018 14:23, Ming Lei wrote:
> > Please test for-4.19/block:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-4.19/block
> > 
> > This slow boot issue should have been fixed by the following commits:
> > 
> > 1311326cf475 blk-mq: avoid to synchronize rcu inside 
> > blk_cleanup_queue()
> > 97889f9ac24f blk-mq: remove synchronize_rcu() from 
> > blk_mq_del_queue_tag_set()
> > 5815839b3ca1 blk-mq: introduce new lock for protecting 
> > hctx->dispatch_wait
> > 2278d69f030f blk-mq: don't pass **hctx to blk_mq_mark_tag_wait()
> > 8ab6bb9ee8d0 blk-mq: cleanup blk_mq_get_driver_tag()  
> 
> Indeed, I can confirm that these commits fix the issue.
> 
> Thanks a lot.
> 

So do I.

Thanks Ming Lei for the fix !

Cheers,

--
Greg



Re: [Qemu-discuss] Some questions about live migration support on virtio-9p

2019-07-31 Thread Greg Kurz
On Tue, 29 May 2018 08:11:32 +
Linzichang  wrote:

> Hi All,
>  I am using virtio 9pfs in the guest OS now, and I'm wondering if live
> migration supports 9pfs in the latest version of QEMU.

No, it is not. There's a migration blocker as long as the 9p file system is
mounted in the guest.

> If not, I would be grateful to know exactly why. Is the device state or guest 
> & host shared memory hard to save?

Yes, sort of. First, I/O requests (or PDUs in 9pfs jargon) are serviced by
a thread pool, and we cannot save their state until they have completed.
This means that we must drain all in-flight PDUs. This isn't too hard. Then,
some enhancements are needed on the VMState side to be able to stream 9pfs
internal structures. Not too hard either. Then we begin to enter the ugly part.
Mostly special cases, but they are common enough that they need to be addressed:
- unlinked open files: since the path was unlinked, we cannot re-open it on
  the destination. This could be worked around by creating a temporary file
  on the destination and using the open fd we have on the source to copy
  the content of the original file to the temporary one, re-open it and then
  unlink it (see the small demo below).
- files opened with O_EXCL: we cannot re-open the file on the target since
  it already exists, and dropping O_EXCL from the file status flags isn't
  really an option. I couldn't come up with a way to address that yet...
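
For reference, a minimal demo of the property the first workaround relies
on: after unlink(), the content is still reachable through any fd that is
open on the file, so it can be copied out to a temporary file.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];
    int fd = open("victim", O_RDWR | O_CREAT | O_TRUNC, 0600);

    write(fd, "hello", 5);
    unlink("victim");                       /* the path is gone...    */
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof(buf)); /* ...but the data is not */
    printf("read %zd bytes after unlink\n", n);
    close(fd);
    return 0;
}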

My latest try, about 2 years ago, is available here if you want to look:

https://github.com/gkurz/qemu/commits/9p-migration

> Best regards,
> Clare chen
> 

Cheers,

--
Greg

> 
> 华为技术有限公司 Huawei Technologies Co., Ltd.
>