Re: [Qemu-discuss] failed to start qemu after system restart

2016-04-08 Thread Narcis Garcia
Which OS distribution are you using?
Do you have any broken package updates?

Now you know the module kvm_amd works, but it may not be loading
automatically on boot.


El 08/04/16 a les 08:37, Mahmood Naderan ha escrit:
>># modprobe kvm_amd
> 
> OK I saw nothing strange in the boot logs. Then I decided to run that
> command and finally I was able to use -enable-kvm!
> 
> For me it's a bit odd because the output of 'lsmod' showed only 'kvm' and
> not 'kvm_amd', and running 'modprobe kvm' had no effect.
> 
> Thanks for your help
>  
> Regards,
> Mahmood
> 
> 
> 



Re: [Qemu-discuss] failed to start qemu after system restart

2016-04-08 Thread Narcis Garcia
If it's CentOS 6, there are updates up to 6.7.
You could apply these pending updates and see if that fixes the boot issue.
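For example (a sketch, assuming the standard yum setup on CentOS 6):

# yum update
# reboot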

The other measure is to analyze the boot logs in detail.


El 08/04/16 a les 09:26, Mahmood Naderan ha escrit:
>> Which OS distribution are you using?
> 
> CentOS-6.5
> 
> 
>>Do you have a broken packages update?
> 
> I doubt...
> 
> 
> Regards,
> Mahmood
> 
> 
> 
> 



Re: [Qemu-discuss] failed to start qemu after system restart

2016-04-08 Thread Mahmood Naderan
OK I will try. Thank you very much.

Regards,
Mahmood

Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Thu, 7 Apr 2016 11:48:27 +0200
Pradeep Kiruvale  wrote:

> Hi All,
> 
> I am using virtio-9p for sharing a file between host and guest. To test
> the shared file I do read/write operations in the guest. To have controlled
> IO, I am using cgroup blkio.
> 
> While using cgroup I am facing two issues; please find them below.
> 
> 1. When I do IO throttling using the cgroup, the read throttling works fine
> but the write throttling does not work. It still bypasses the throttling
> control and runs at the default rate. Am I missing something here?
> 

Hi,

Can you provide details on your blkio setup?

> I use the following commands to create VM, share the files and to
> read/write from guest.
> 
> *Create vm*
> qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> -enable-kvm -parallel  -fsdev
> local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate -device
> virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> 
> *Mount file*
> mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> sync
> 
> touch /sdb1_ext4/dddrive
> 
> *Write test*
> dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> dd.log 2>&1 && sync
> 
> *Read test*
> dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> 
> 2. The other issue is that when I run the "dd" command inside the guest it
> creates multiple threads to write/read. I can see those on the host using
> iotop. Is this expected behavior?
> 

Yes. QEMU uses a thread pool to handle 9p requests.

> Regards,
> Pradeep

Cheers.

--
Greg




Re: [Qemu-discuss] failed to start qemu after system restart

2016-04-08 Thread Jakob Bohm

On 08/04/2016 08:37, Mahmood Naderan wrote:

 ># modprobe kvm_amd

OK I saw nothing strange in the boot logs. Then I decided to run that
command and finally I was able to use -enable-kvm!

For me it's a bit odd because the output of 'lsmod' showed only 'kvm' and
not 'kvm_amd', and running 'modprobe kvm' had no effect.



Looking at cat /proc/modules, it is normally kvm_amd which loads kvm,
not the other way around.

Maybe CentOS 6 has a config file (named /etc/modules or something like
that) listing which modules should be loaded automatically at boot. If
there is such a file, adding kvm_amd there should fix the issue.
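For example, on RHEL/CentOS 6 the boot scripts run any executable
*.modules file under /etc/sysconfig/modules/, so a sketch like the
following should work (the file name kvm.modules is just my choice,
not something the distribution ships):

# cat > /etc/sysconfig/modules/kvm.modules <<'EOF'
#!/bin/sh
/sbin/modprobe kvm_amd >/dev/null 2>&1
EOF
# chmod +x /etc/sysconfig/modules/kvm.modules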



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Pradeep Kiruvale
Hi Greg,

Thanks for your reply.

Below is how I add the limit to blkio:

echo "8:16 8388608" >
/sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device

The problem, I guess, is adding these task IDs to the "tasks" file in the cgroup.

These threads are started randomly, and even when I add the PIDs to the
tasks file the cgroup still does not do IO control.
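Roughly, this is how I push the thread IDs into the cgroup (QEMU_PID
stands for the PID of the qemu-system-x86_64 process, and /test is the
same cgroup as above):

QEMU_PID=$(pidof qemu-system-x86_64)
for tid in $(ls /proc/$QEMU_PID/task); do
    echo $tid > /sys/fs/cgroup/blkio/test/tasks
done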

Is it possible to reduce the number of threads? I see a different number of
threads doing IO on different runs.

Regards,
Pradeep


On 8 April 2016 at 10:10, Greg Kurz  wrote:

> On Thu, 7 Apr 2016 11:48:27 +0200
> Pradeep Kiruvale  wrote:
>
> > Hi All,
> >
> > I am using virtio-9p for sharing the file between host and guest. To test
> > the shared file I do read/write options in the guest.To have controlled
> io,
> > I am using cgroup blkio.
> >
> > While using cgroup I am facing two issues,Please find the issues below.
> >
> > 1. When I do IO throttling using the cgroup the read throttling works
> fine
> > but the write throttling does not wok. It still bypasses these throttling
> > control and does the default, am I missing something here?
> >
>
> Hi,
>
> Can you provide details on your blkio setup ?
>
> > I use the following commands to create VM, share the files and to
> > read/write from guest.
> >
> > *Create vm*
> > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> > -enable-kvm -parallel  -fsdev
> > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> -device
> > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> >
> > *Mount file*
> > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> > sync
> >
> > touch /sdb1_ext4/dddrive
> >
> > *Write test*
> > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> > dd.log 2>&1 && sync
> >
> > *Read test*
> > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> >
> > 2. The other issue is when I run "dd" command inside guest  it creates
> > multiple threads to write/read. I can see those on host using iotop is
> this
> > expected behavior?
> >
>
> Yes. QEMU uses a thread pool to handle 9p requests.
>
> > Regards,
> > Pradeep
>
> Cheers.
>
> --
> Greg
>
>


[Qemu-discuss] Unable to debug 2.5.0 qemu-system-ppc64.exe with gdb

2016-04-08 Thread trasmussen
In order to find out where certain QEMU functions are called from, using 
test output is not practical.
Therefore I set out to use gdb to debug qemu-system-ppc64.exe (the 
unstripped version):

gdb /cygdrive/c/MinGW/msys/1.0/local/qemu/qemu-system-ppc64
b pci_host_config_write_common

gdb responds with:

Breakpoint 1 at 0x6ecd79: file 
C:/MinGW/msys/1.0/gnu_dev/qemu-2.5.0/hw/pci/pci_host.c, line 54.

Next I issue an r-command with the same options as I need to use:

r -m 240 -g 640x400x16 -name ppc -M ppce500 -cpu e5500 -bios u-boot.e500 
-icount 2 -gdb tcp:127.0.0.1:1234,ipv4 -netdev tap,id=vlan0,ifname=tap0 
-device virtio-net-pci,netdev=vlan0,id=virtio,bus=pci.0,addr=1.0, -kernel 
Boot.bin -initrd Boot.qemu -device VGA -vga std 

The gdb response here is:

[New Thread 9180.0x1c94]
Warning:
Cannot insert breakpoint 1.
Error accessing memory address 0x6ecd54: Input/output error.

I then delete breakpoint 1 (del 1), and continue (c), and the program 
starts running

gdb outputs
[New Thread 9180.0x12b8]
   ... (my test output lines) ...
[New Thread 9180.0x2234]
[New Thread 9180.0x233c]
...

and everything happens as usual, except I have not achieved the goal of
hitting any breakpoints.

Upon closer examination it appears that qemu-system-ppc64.exe, which is
linked at 0x400000 and up like any other Windows application, has in fact
been moved higher up in the address space by a number of pages, but not
always the same number. I have seen 0xb60 be the bias, and also
0x4c and others. I found this out by letting one of my test output lines
print the current EIP value, as well as the (linked) address of the
function this happens inside. The two values were close to each other as
expected, and both were biased.
I must assume that the moving of the qemu-system-ppc64.exe sections means
that the executable contains fixup relocations, like what the objdump tool
reports:

/cygdrive/c/MinGW/msys/1.0/local/qemu/qemu-system-ppc64.exe: file 
format pei-i386

Sections:
Idx Name  Size  VMA   LMA   File off  Algn
  0 .text 0086e6cc  00401000  00401000  0600  2**4
  CONTENTS, ALLOC, LOAD, READONLY, CODE, DATA
  1 .data 0007fbf4  00c7  00c7  0086ee00  2**5
  CONTENTS, ALLOC, LOAD, DATA
  2 .rdata0027d17c  00cf  00cf  008eea00  2**5
  CONTENTS, ALLOC, LOAD, READONLY, DATA
  3 .eh_frame 000d8bd4  00f6e000  00f6e000  00b6bc00  2**2
  CONTENTS, ALLOC, LOAD, READONLY, DATA
  4 .bss  004583c4  01047000  01047000    2**5
  ALLOC
  5 .edata1219  014a  014a  00c44800  2**2
  CONTENTS, ALLOC, LOAD, READONLY, DATA
  6 .idata311c  014a2000  014a2000  00c45c00  2**2
  CONTENTS, ALLOC, LOAD, DATA
  7 .CRT  0018  014a6000  014a6000  00c48e00  2**2
  CONTENTS, ALLOC, LOAD, DATA
  8 .tls  0020  014a7000  014a7000  00c49000  2**2
  CONTENTS, ALLOC, LOAD, DATA
  9 .rsrc 176c  014a8000  014a8000  00c49200  2**2
  CONTENTS, ALLOC, LOAD, DATA
 10 .reloc00066440  014aa000  014aa000  00c4aa00  2**2
  CONTENTS, ALLOC, LOAD, READONLY, DATA
 11 .debug_aranges 5198  01511000  01511000  00cb1000  2**0
  CONTENTS, READONLY, DEBUGGING
 12 .debug_info   00bca2a4  01517000  01517000  00cb6200  2**0
  CONTENTS, READONLY, DEBUGGING
 13 .debug_abbrev 00078c18  020e2000  020e2000  01880600  2**0
  CONTENTS, READONLY, DEBUGGING
 14 .debug_line   001a8dce  0215b000  0215b000  018f9400  2**0
  CONTENTS, READONLY, DEBUGGING
 15 .debug_str0005ceb2  02304000  02304000  01aa2200  2**0
  CONTENTS, READONLY, DEBUGGING
 16 .debug_loc00098cb6  02361000  02361000  01aff200  2**0
  CONTENTS, READONLY, DEBUGGING
 17 .debug_ranges d9e0  023fa000  023fa000  01b98000  2**0
  CONTENTS, READONLY, DEBUGGING


I am using GNU gdb:

 (GDB) 7.6.50.20130728-cvs (cygwin-special)

Is there a way to make this work?
Is it important that relocation takes place, or could the executable be fixed
in the virtual address space at its usual 0x400000 location?

I noticed that the QEMU build uses -fPIC for compilation of at least some
files, but also --static.
Though I am only building this particular qemu-system-ppc64 variant, it
still takes me almost an hour to experiment with other compile and link
options.

Any help is highly appreciated!

Thorkil B. Rasmussen


Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 11:51:05 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> Thanks for your reply.
> 
> Below is the way how I add to blkio
> 
> echo "8:16 8388608" >
> /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> 

Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
tasks in the test cgroup... but what about the tasks themselves?
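For reference, 8:16 should indeed match /dev/sdb; the major:minor pair and
the accepted limit can be double-checked with something like:

# ls -l /dev/sdb
# cat /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device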

> The problem I guess is adding these task ids to the "tasks" file in cgroup
> 

Exactly. :)

> These threads are started randomly and even then I add the PIDs to the
> tasks file the cgroup still does not do IO control.
> 

How did you get the PIDs? Are you sure these threads you have added to the
cgroup are the ones that write to /dev/sdb?

> Is it possible to reduce these number of threads? I see different number of
> threads doing IO at different runs.
> 

AFAIK, no.

Why don't you simply start QEMU in the cgroup? Unless I miss something, all
children threads, including the 9p ones, will be in the cgroup and honor the
throttle settings.

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> On 8 April 2016 at 10:10, Greg Kurz  wrote:
> 
> > On Thu, 7 Apr 2016 11:48:27 +0200
> > Pradeep Kiruvale  wrote:
> >
> > > Hi All,
> > >
> > > I am using virtio-9p for sharing the file between host and guest. To test
> > > the shared file I do read/write options in the guest.To have controlled
> > io,
> > > I am using cgroup blkio.
> > >
> > > While using cgroup I am facing two issues,Please find the issues below.
> > >
> > > 1. When I do IO throttling using the cgroup the read throttling works
> > fine
> > > but the write throttling does not wok. It still bypasses these throttling
> > > control and does the default, am I missing something here?
> > >
> >
> > Hi,
> >
> > Can you provide details on your blkio setup ?
> >
> > > I use the following commands to create VM, share the files and to
> > > read/write from guest.
> > >
> > > *Create vm*
> > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128 -smp 1
> > > -enable-kvm -parallel  -fsdev
> > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > -device
> > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > >
> > > *Mount file*
> > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log &&
> > > sync
> > >
> > > touch /sdb1_ext4/dddrive
> > >
> > > *Write test*
> > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80 oflag=direct >>
> > > dd.log 2>&1 && sync
> > >
> > > *Read test*
> > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > >
> > > 2. The other issue is when I run "dd" command inside guest  it creates
> > > multiple threads to write/read. I can see those on host using iotop is
> > this
> > > expected behavior?
> > >
> >
> > Yes. QEMU uses a thread pool to handle 9p requests.
> >
> > > Regards,
> > > Pradeep
> >
> > Cheers.
> >
> > --
> > Greg
> >
> >




Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Pradeep Kiruvale
Hi Greg,

Find my replies inline.

>
> > Below is the way how I add to blkio
> >
> > echo "8:16 8388608" >
> > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> >
>
> Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> tasks in the test cgroup... but what about the tasks themselves ?
>
> > The problem I guess is adding these task ids to the "tasks" file in
> cgroup
> >
>
> Exactly. :)
>
> > These threads are started randomly and even then I add the PIDs to the
> > tasks file the cgroup still does not do IO control.
> >
>
> How did you get the PIDs ? Are you sure these threads you have added to the
> cgroup are the ones that write to /dev/sdb ?
>

*Yes, I get PIDs from /proc/Qemu_PID/task*



>
> > Is it possible to reduce these number of threads? I see different number
> of
> > threads doing IO at different runs.
> >
>
> AFAIK, no.
>
> Why don't you simply start QEMU in the cgroup ? Unless I miss something,
> all
> children threads, including the 9p ones, will be in the cgroup and honor
> the
> throttle setttings.
>


*I started the qemu with cgroup as below*

*cgexec -g blkio:/test qemu...*
*Is there any other way of starting the qemu in cgroup?*

Regards,
Pradeep


>
> > Regards,
> > Pradeep
> >
>
> Cheers.
>
> --
> Greg
>
> >
> > On 8 April 2016 at 10:10, Greg Kurz  wrote:
> >
> > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > Pradeep Kiruvale  wrote:
> > >
> > > > Hi All,
> > > >
> > > > I am using virtio-9p for sharing the file between host and guest. To
> test
> > > > the shared file I do read/write options in the guest.To have
> controlled
> > > io,
> > > > I am using cgroup blkio.
> > > >
> > > > While using cgroup I am facing two issues,Please find the issues
> below.
> > > >
> > > > 1. When I do IO throttling using the cgroup the read throttling works
> > > fine
> > > > but the write throttling does not wok. It still bypasses these
> throttling
> > > > control and does the default, am I missing something here?
> > > >
> > >
> > > Hi,
> > >
> > > Can you provide details on your blkio setup ?
> > >
> > > > I use the following commands to create VM, share the files and to
> > > > read/write from guest.
> > > >
> > > > *Create vm*
> > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128
> -smp 1
> > > > -enable-kvm -parallel  -fsdev
> > > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > -device
> > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > >
> > > > *Mount file*
> > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> 2>>dd.log &&
> > > > sync
> > > >
> > > > touch /sdb1_ext4/dddrive
> > > >
> > > > *Write test*
> > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> oflag=direct >>
> > > > dd.log 2>&1 && sync
> > > >
> > > > *Read test*
> > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > >
> > > > 2. The other issue is when I run "dd" command inside guest  it
> creates
> > > > multiple threads to write/read. I can see those on host using iotop
> is
> > > this
> > > > expected behavior?
> > > >
> > >
> > > Yes. QEMU uses a thread pool to handle 9p requests.
> > >
> > > > Regards,
> > > > Pradeep
> > >
> > > Cheers.
> > >
> > > --
> > > Greg
> > >
> > >
>
>


Re: [Qemu-discuss] Unable to debug 2.5.0 qemu-system-ppc64.exe with gdb

2016-04-08 Thread Jakob Bohm

On 08/04/2016 13:50, trasmus...@ddci.com wrote:
In order to find out where certain QEMU functions are called from, 
using test output is not practical.
Therefore I set out to use gdb to debug qemu-system-ppc64.exe (the 
unstripped version):


gdb /cygdrive/c/MinGW/msys/1.0/local/qemu/qemu-system-ppc64
b pci_host_config_write_common

gdb responds with:

Breakpoint 1 at 0x6ecd79: file 
C:/MinGW/msys/1.0/gnu_dev/qemu-2.5.0/hw/pci/pci_host.c, line 54.


Next I issue an r-command with the same options as I need to use:

r -m 240 -g 640x400x16 -name ppc -M ppce500 -cpu e5500 -bios 
u-boot.e500 -icount 2 -gdb tcp:127.0.0.1:1234,ipv4 -netdev 
tap,id=vlan0,ifname=tap0 -device 
virtio-net-pci,netdev=vlan0,id=virtio,bus=pci.0,addr=1.0, -kernel 
Boot.bin -initrd Boot.qemu -device VGA -vga std


The gdb response here is:

[New Thread 9180.0x1c94]
Warning:
Cannot insert breakpoint 1.
Error accessing memory address 0x6ecd54: Input/output error.

I then delete breakpoint 1 (del 1), and continue (c), and the program 
starts running


gdb outputs
[New Thread 9180.0x12b8]
   ... (my test output lines) ...
[New Thread 9180.0x2234]
[New Thread 9180.0x233c]
...

and everything happens as usually, except I have not achieved the goal 
of hitting any breakpoints.


Upon closer examination it appears that the qemu-system-ppc64.exe that 
is linked to 0x40 and up as any other Windows application, in fact 
has been moved up higher in the address space a number of pages, but 
not always the same number. I have seen 0xb60 be the bias, and 
also 0x4c and others. I found out by letting one of my test output 
lines print the current EIP-value, as well as the (linked) address of 
the function this happens inside. The 2 values were close to each 
other as expected, and both were biased.
I must assume that the move of the qemu-system-ppc64.exe sections 
means that this executable contains fixup relocations like what the 
objdump tool reports:



Actually, you are seeing that the Windows version you
use supports ASLR (Address Space Layout Randomization)
to reduce the risk that buffer overflow vulnerabilities
can be effectively exploited.

If gdb fails to adjust correctly to the resulting
relocation of the program, this would be a serious
gdb bug, given that ASLR is also implemented (though
differently) on Linux based systems.

It is possible (as you suggest yourself) to disable
ASLR for a windows exe by either:

A) Stripping away the relocations so the exe can only
  be loaded at the default address specified in the
  PE header.

B) Clearing/Setting one of the EXE feature flags (the
  one named "/DYNAMICBASE:NO" in Microsoft tools) in
  the PE header.

Of course, doing this is enough of a security
disadvantage that Microsoft releases critical
security patches for any Microsoft program that has
been released with ASLR disabled, so it should only
be done while debugging.
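
For example, with the Microsoft toolchain something along
these lines should clear the flag (a sketch, untested
against this particular exe):

editbin /DYNAMICBASE:NO qemu-system-ppc64.exe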

/cygdrive/c/MinGW/msys/1.0/local/qemu/qemu-system-ppc64.exe:   file 
format pei-i386


Sections:
Idx Name  Size  VMA   LMA   File off  Algn
  0 .text   0086e6cc  00401000  00401000  0600  2**4
CONTENTS, ALLOC, LOAD, READONLY, CODE, DATA
  1 .data   0007fbf4  00c7  00c7  0086ee00  2**5
CONTENTS, ALLOC, LOAD, DATA
  2 .rdata  0027d17c  00cf  00cf  008eea00  2**5
CONTENTS, ALLOC, LOAD, READONLY, DATA
  3 .eh_frame 000d8bd4  00f6e000  00f6e000  00b6bc00  2**2
CONTENTS, ALLOC, LOAD, READONLY, DATA
  4 .bss004583c4  01047000  01047000    2**5
ALLOC
  5 .edata  1219  014a  014a  00c44800  2**2
CONTENTS, ALLOC, LOAD, READONLY, DATA
  6 .idata  311c  014a2000  014a2000  00c45c00  2**2
CONTENTS, ALLOC, LOAD, DATA
  7 .CRT0018  014a6000  014a6000  00c48e00  2**2
CONTENTS, ALLOC, LOAD, DATA
  8 .tls0020  014a7000  014a7000  00c49000  2**2
CONTENTS, ALLOC, LOAD, DATA
  9 .rsrc   176c  014a8000  014a8000  00c49200  2**2
CONTENTS, ALLOC, LOAD, DATA
 10 .reloc  00066440  014aa000  014aa000  00c4aa00  2**2
CONTENTS, ALLOC, LOAD, READONLY, DATA
 11 .debug_aranges 5198  01511000  01511000  00cb1000  2**0
CONTENTS, READONLY, DEBUGGING
 12 .debug_info   00bca2a4  01517000  01517000  00cb6200  2**0
CONTENTS, READONLY, DEBUGGING
 13 .debug_abbrev 00078c18  020e2000  020e2000  01880600  2**0
CONTENTS, READONLY, DEBUGGING
 14 .debug_line   001a8dce  0215b000  0215b000  018f9400  2**0
CONTENTS, READONLY, DEBUGGING
 15 .debug_str0005ceb2  02304000  02304000  01aa2200  2**0
CONTENTS, READONLY, DEBUGGING
 16 .debug_loc00098cb6  02361000  02361000  01aff200  2**0
CONTENTS, READONLY, DEBUGGING
 17 .debug_ranges d9e0  023fa000  023fa000  01b98000  2**0
CONTENTS, READONLY, DEBUGGING


Note that the above output also indicates that the used
version of objdump seems to parse COFF section descriptors
incorrectly.  Each descriptor contains the following
size/offs

Re: [Qemu-discuss] [Qemu-devel] Virtio-9p and cgroup io-throttling

2016-04-08 Thread Greg Kurz
On Fri, 8 Apr 2016 14:55:29 +0200
Pradeep Kiruvale  wrote:

> Hi Greg,
> 
> FInd my replies inline
> 
> >
> > > Below is the way how I add to blkio
> > >
> > > echo "8:16 8388608" >
> > > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
> > >
> >
> > Ok, this just puts a limit of 8MB/s when writing to /dev/sdb for all
> > tasks in the test cgroup... but what about the tasks themselves ?
> >
> > > The problem I guess is adding these task ids to the "tasks" file in
> > cgroup
> > >
> >
> > Exactly. :)
> >
> > > These threads are started randomly and even then I add the PIDs to the
> > > tasks file the cgroup still does not do IO control.
> > >
> >
> > How did you get the PIDs ? Are you sure these threads you have added to the
> > cgroup are the ones that write to /dev/sdb ?
> >
> 
> *Yes, I get PIDs from /proc/Qemu_PID/task*
> 

And then you echoed the PIDs to /sys/fs/cgroup/blkio/test/tasks?

This is racy... another IO thread may be started to do some work on /dev/sdb
just after you've read PIDs from /proc/Qemu_PID/task, and it won't be part
of the cgroup.

> 
> 
> >
> > > Is it possible to reduce these number of threads? I see different number
> > of
> > > threads doing IO at different runs.
> > >
> >
> > AFAIK, no.
> >
> > Why don't you simply start QEMU in the cgroup ? Unless I miss something,
> > all
> > children threads, including the 9p ones, will be in the cgroup and honor
> > the
> > throttle setttings.
> >
> 
> 
> *I started the qemu with cgroup as below*
> 
> *cgexec -g blkio:/test qemu...*
> *Is there any other way of starting the qemu in cgroup?*
> 

Maybe you can pass --sticky to cgexec to prevent cgred from moving
child tasks to other cgroups...
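For example (a sketch; substitute your actual QEMU command line):

cgexec --sticky -g blkio:/test qemu-system-x86_64 -enable-kvm ...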

There's also the old-fashioned method:

# echo $$ > /sys/fs/cgroup/blkio/test/tasks
# qemu.

This being said, QEMU is a regular userspace program that is completely cgroup
agnostic. It won't behave differently than 'dd if=/dev/sdb of=/dev/null'.

This really doesn't look like a QEMU related issue to me.

> Regards,
> Pradeep
> 

Cheers.

--
Greg

> 
> >
> > > Regards,
> > > Pradeep
> > >
> >
> > Cheers.
> >
> > --
> > Greg
> >
> > >
> > > On 8 April 2016 at 10:10, Greg Kurz  wrote:
> > >
> > > > On Thu, 7 Apr 2016 11:48:27 +0200
> > > > Pradeep Kiruvale  wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I am using virtio-9p for sharing the file between host and guest. To
> > test
> > > > > the shared file I do read/write options in the guest.To have
> > controlled
> > > > io,
> > > > > I am using cgroup blkio.
> > > > >
> > > > > While using cgroup I am facing two issues,Please find the issues
> > below.
> > > > >
> > > > > 1. When I do IO throttling using the cgroup the read throttling works
> > > > fine
> > > > > but the write throttling does not wok. It still bypasses these
> > throttling
> > > > > control and does the default, am I missing something here?
> > > > >
> > > >
> > > > Hi,
> > > >
> > > > Can you provide details on your blkio setup ?
> > > >
> > > > > I use the following commands to create VM, share the files and to
> > > > > read/write from guest.
> > > > >
> > > > > *Create vm*
> > > > > qemu-system-x86_64 -balloon none ...-name vm0 -cpu host -m 128
> > -smp 1
> > > > > -enable-kvm -parallel  -fsdev
> > > > > local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate
> > > > -device
> > > > > virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > > > >
> > > > > *Mount file*
> > > > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4
> > 2>>dd.log &&
> > > > > sync
> > > > >
> > > > > touch /sdb1_ext4/dddrive
> > > > >
> > > > > *Write test*
> > > > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=80
> > oflag=direct >>
> > > > > dd.log 2>&1 && sync
> > > > >
> > > > > *Read test*
> > > > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > > > >
> > > > > 2. The other issue is when I run "dd" command inside guest  it
> > creates
> > > > > multiple threads to write/read. I can see those on host using iotop
> > is
> > > > this
> > > > > expected behavior?
> > > > >
> > > >
> > > > Yes. QEMU uses a thread pool to handle 9p requests.
> > > >
> > > > > Regards,
> > > > > Pradeep
> > > >
> > > > Cheers.
> > > >
> > > > --
> > > > Greg
> > > >
> > > >
> >
> >