[Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-11 Thread Li Liu
Hi all,

Can anyone tell me the current status of vhost-net on kvm-arm?

Half a year has passed since Isa Ansharullah asked this question:
http://www.spinics.net/lists/kvm-arm/msg08152.html

I have found two patches which provide kvm-arm support for
eventfd and irqfd:

1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html

2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
https://patches.linaro.org/32261/

And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:

[Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html

But there are no comments on this patch, and I can find nothing about qemu
support for irqfd. Have I lost track?

If nobody is working on it, we plan to complete virtio-mmio support
for irqfd and multiqueue.

Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-12 Thread Li Liu


On 2014/8/12 15:29, Eric Auger wrote:
> On 08/12/2014 04:41 AM, Li Liu wrote:
>> Hi all,
>>
>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>
>> Half a year has passed from Isa Ansharullah asked this question:
>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>
>> I have found two patches which have provided the kvm-arm support of
>> eventfd and irqfd:
>>
>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>
>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>> https://patches.linaro.org/32261/
> 
> Hi Li,
> 
> The patch below uses Paul Mackerras' work and removed usage of GSI
> routing table. It is a simpler alternative to 2)
> http://www.spinics.net/lists/kvm/msg106535.html
> 

Thanks for your tips. This looks clearer.

Best Regards

Li

>>
>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:
>>
>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>
>> But there no any comments of this patch. And I can found nothing about qemu
>> to support irqfd. Do I lost the track?
> 
> Actually I am using irqfd in QEMU VFIO Platform device
> https://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg01455.html
> 
> Best Regards
> 
> Eric
> 
>>
>> If nobody try to fix it. We have a plan to complete it about virtio-mmio
>> supporing irqfd and multiqueue.
>>
>>
>>
>>
>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> 
> 
> 




Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-12 Thread Li Liu


On 2014/8/12 23:47, Nikolay Nikolaev wrote:
> Hello,
> 
> 
> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu  wrote:
>>
>> Hi all,
>>
>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>
>> Half a year has passed from Isa Ansharullah asked this question:
>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>
>> I have found two patches which have provided the kvm-arm support of
>> eventfd and irqfd:
>>
>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>
>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>> https://patches.linaro.org/32261/
>>
>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:
>>
>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>
>> But there no any comments of this patch. And I can found nothing about qemu
>> to support irqfd. Do I lost the track?
>>
>> If nobody try to fix it. We have a plan to complete it about virtio-mmio
>> supporing irqfd and multiqueue.
>>
>>
> 
> we at Virtual Open Systems did some work and tested vhost-net on ARM
> back in March.
> The setup was based on:
>  - host kernel with our ioeventfd patches:
> http://www.spinics.net/lists/kvm-arm/msg08413.html
> 
> - qemu with the aforementioned patches from Ying-Shiuan Pan
> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
> 
> The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
> numbers but I remember that with multiple streams the gain was clearly
> seen. Note that it used the minimum required ioventfd implementation
> and not irqfd.
> 

Yeah, we roughly tested vhost-net without irqfd and got the same
result. Now we will try to see what happens with irqfd :).

> I guess it is feasible to think that it all can be put together and
> rebased + the recent irqfd work. One can achiev even better
> performance (because of the irqfd).
> 
>>
>>
>>
>>
>>
>> ___
>> kvmarm mailing list
>> kvm...@lists.cs.columbia.edu
>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> 
> 
> regards,
> Nikolay Nikolaev
> Virtual Open Systems
> 
> .
> 




Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-13 Thread Li Liu


On 2014/8/13 17:10, Nikolay Nikolaev wrote:
> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
>  wrote:
>>
>> Hello,
>>
>>
>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu  wrote:
>>>
>>> Hi all,
>>>
>>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>>
>>> Half a year has passed from Isa Ansharullah asked this question:
>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>
>>> I have found two patches which have provided the kvm-arm support of
>>> eventfd and irqfd:
>>>
>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>
>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>>> https://patches.linaro.org/32261/
>>>
>>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:
>>>
>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>
>>> But there no any comments of this patch. And I can found nothing about qemu
>>> to support irqfd. Do I lost the track?
>>>
>>> If nobody try to fix it. We have a plan to complete it about virtio-mmio
>>> supporing irqfd and multiqueue.
>>>
>>>
>>
>> we at Virtual Open Systems did some work and tested vhost-net on ARM
>> back in March.
>> The setup was based on:
>>  - host kernel with our ioeventfd patches:
>> http://www.spinics.net/lists/kvm-arm/msg08413.html
>>
>> - qemu with the aforementioned patches from Ying-Shiuan Pan
>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>
>> The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
>> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
>> numbers but I remember that with multiple streams the gain was clearly
>> seen. Note that it used the minimum required ioventfd implementation
>> and not irqfd.
>>
>> I guess it is feasible to think that it all can be put together and
>> rebased + the recent irqfd work. One can achiev even better
>> performance (because of the irqfd).
>>
> 
> Managed to replicate the setup with the old versions e used in March:
> 
> Single stream from another machine to chromebook with 1Gbps USB3
> Ethernet adapter.
> iperf -c  -P 1 -i 1 -p 5001 -f k -t 10
> to HOST: 858316 Kbits/sec
> to GUEST: 761563 Kbits/sec
> 
> 10 parallel streams
> iperf -c  -P 10 -i 1 -p 5001 -f k -t 10
> to HOST: 842420 Kbits/sec
> to GUEST: 625144 Kbits/sec
> 

Appreciate your work. Would it be convenient for you to test the same cases
without vhost=on? The results would then clearly show the performance
improvement from ioeventfd alone.

I will try to test it on a Hisilicon board; that work is ongoing.
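
For reference, the vhost comparison is toggled on the netdev backend; a typical invocation shape (options abbreviated, assuming a tap backend and the virtio-mmio transport) would be:

```shell
# vhost=on: virtqueue processing in the host kernel's vhost-net thread
qemu-system-arm ... -netdev tap,id=net0,vhost=on \
                    -device virtio-net-device,netdev=net0

# vhost=off: virtqueues processed by QEMU in userspace
qemu-system-arm ... -netdev tap,id=net0,vhost=off \
                    -device virtio-net-device,netdev=net0
```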

Best regards

Li

>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> regards,
>> Nikolay Nikolaev
>> Virtual Open Systems
> 
> .
> 




Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-13 Thread Li Liu


On 2014/8/13 19:25, Nikolay Nikolaev wrote:
> On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
>  wrote:
>> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
>>  wrote:
>>>
>>> Hello,
>>>
>>>
>>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu  wrote:
>>>>
>>>> Hi all,
>>>>
>>>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>>>
>>>> Half a year has passed from Isa Ansharullah asked this question:
>>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>>
>>>> I have found two patches which have provided the kvm-arm support of
>>>> eventfd and irqfd:
>>>>
>>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>>
>>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>>>> https://patches.linaro.org/32261/
>>>>
>>>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:
>>>>
>>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>
>>>> But there no any comments of this patch. And I can found nothing about qemu
>>>> to support irqfd. Do I lost the track?
>>>>
>>>> If nobody try to fix it. We have a plan to complete it about virtio-mmio
>>>> supporing irqfd and multiqueue.
>>>>
>>>>
>>>
>>> we at Virtual Open Systems did some work and tested vhost-net on ARM
>>> back in March.
>>> The setup was based on:
>>>  - host kernel with our ioeventfd patches:
>>> http://www.spinics.net/lists/kvm-arm/msg08413.html
>>>
>>> - qemu with the aforementioned patches from Ying-Shiuan Pan
>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>
>>> The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
>>> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
>>> numbers but I remember that with multiple streams the gain was clearly
>>> seen. Note that it used the minimum required ioventfd implementation
>>> and not irqfd.
>>>
>>> I guess it is feasible to think that it all can be put together and
>>> rebased + the recent irqfd work. One can achiev even better
>>> performance (because of the irqfd).
>>>
>>
>> Managed to replicate the setup with the old versions e used in March:
>>
>> Single stream from another machine to chromebook with 1Gbps USB3
>> Ethernet adapter.
>> iperf -c  -P 1 -i 1 -p 5001 -f k -t 10
>> to HOST: 858316 Kbits/sec
>> to GUEST: 761563 Kbits/sec
> to GUEST vhost=off: 508150 Kbits/sec
>>
>> 10 parallel streams
>> iperf -c  -P 10 -i 1 -p 5001 -f k -t 10
>> to HOST: 842420 Kbits/sec
>> to GUEST: 625144 Kbits/sec
> to GUEST vhost=off: 425276 Kbits/sec

I have tested the same cases on a Hisilicon board (Cortex-A15@1G)
with an integrated 1Gbps Ethernet adapter.

iperf -c  -P 1 -i 1 -p 5001 -f M -t 10
to HOST: 906 Mbits/sec
to GUEST: 562 Mbits/sec
to GUEST vhost=off: 340 Mbits/sec

With 10 parallel streams, the performance gains are less than 10%:
iperf -c  -P 10 -i 1 -p 5001 -f M -t 10
to HOST: 923 Mbits/sec
to GUEST: 592 Mbits/sec
to GUEST vhost=off: 364 Mbits/sec

It's easy to see that vhost-net brings great performance improvements,
almost 50%+.

Li.

>>
>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> regards,
>>> Nikolay Nikolaev
>>> Virtual Open Systems
> 
> .
> 




Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-08-14 Thread Li Liu
Hi Ying-Shiuan Pan,

I don't know why your mail was missing from my mailbox. Sorry about that.
The results of vhost-net performance have been attached in another mail.

Do you have a plan to renew your patchset to support irqfd? If not,
we will try to finish it based on yours.

On 2014/8/14 11:50, Li Liu wrote:
> 
> 
> On 2014/8/13 19:25, Nikolay Nikolaev wrote:
>> On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
>>  wrote:
>>> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
>>>  wrote:
>>>>
>>>> Hello,
>>>>
>>>>
>>>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu  wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>>>>
>>>>> Half a year has passed from Isa Ansharullah asked this question:
>>>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>>>
>>>>> I have found two patches which have provided the kvm-arm support of
>>>>> eventfd and irqfd:
>>>>>
>>>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>>>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>>>
>>>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>>>>> https://patches.linaro.org/32261/
>>>>>
>>>>> And there's a rough patch for qemu to support eventfd from Ying-Shiuan Pan:
>>>>>
>>>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>>
>>>>> But there no any comments of this patch. And I can found nothing about qemu
>>>>> to support irqfd. Do I lost the track?
>>>>>
>>>>> If nobody try to fix it. We have a plan to complete it about virtio-mmio
>>>>> supporing irqfd and multiqueue.
>>>>>
>>>>>
>>>>
>>>> we at Virtual Open Systems did some work and tested vhost-net on ARM
>>>> back in March.
>>>> The setup was based on:
>>>>  - host kernel with our ioeventfd patches:
>>>> http://www.spinics.net/lists/kvm-arm/msg08413.html
>>>>
>>>> - qemu with the aforementioned patches from Ying-Shiuan Pan
>>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>
>>>> The testbed was ARM Chromebook with Exynos 5250, using a 1Gbps USB3
>>>> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
>>>> numbers but I remember that with multiple streams the gain was clearly
>>>> seen. Note that it used the minimum required ioventfd implementation
>>>> and not irqfd.
>>>>
>>>> I guess it is feasible to think that it all can be put together and
>>>> rebased + the recent irqfd work. One can achiev even better
>>>> performance (because of the irqfd).
>>>>
>>>
>>> Managed to replicate the setup with the old versions e used in March:
>>>
>>> Single stream from another machine to chromebook with 1Gbps USB3
>>> Ethernet adapter.
>>> iperf -c  -P 1 -i 1 -p 5001 -f k -t 10
>>> to HOST: 858316 Kbits/sec
>>> to GUEST: 761563 Kbits/sec
>> to GUEST vhost=off: 508150 Kbits/sec
>>>
>>> 10 parallel streams
>>> iperf -c  -P 10 -i 1 -p 5001 -f k -t 10
>>> to HOST: 842420 Kbits/sec
>>> to GUEST: 625144 Kbits/sec
>> to GUEST vhost=off: 425276 Kbits/sec
> 
> I have tested the same cases on a Hisilicon board (Cortex-A15@1G)
> with Integrated 1Gbps Ethernet adapter.
> 
> iperf -c  -P 1 -i 1 -p 5001 -f M -t 10
> to HOST: 906 Mbits/sec
> to GUEST: 562 Mbits/sec
> to GUEST vhost=off: 340 Mbits/sec
> 
> 10 parallel streams, the performance gets <10% plus:
> iperf -c  -P 10 -i 1 -p 5001 -f M -t 10
> to HOST: 923 Mbits/sec
> to GUEST: 592 Mbits/sec
> to GUEST vhost=off: 364 Mbits/sec
> 
> I't easy to see vhost-net brings great performance improvements,
> almost 50%+.
> 
> Li.
> 
>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> regards,
>>>> Nikolay Nikolaev
>>>> Virtual Open Systems
>>
>> .
>>
> 
> 
> 
> .
> 




Re: [Qemu-devel] [PATCH 0/6] add dumpdts ability to convert dtb to dts

2014-08-25 Thread Li Liu


On 2014/8/25 20:22, Peter Maydell wrote:
> On 25 August 2014 05:00, john.liuli  wrote:
>> From: Li Liu 
>>
>> This patchset let qemu can convert dtb file to dts for two demands:
>>
>> Some archtectures may generate the dtb file dynamically through
>> qemu device tree functions. So this let it's possiable to dump final
>> dtb to dts and save it as a reference.
>>
>> For novices to debugging the issues caused by wrong dtb parameters.
>> It will be easy to check the dts directly without copying the
>> dtb which may be generated by 'dumpdtb' to the PC and dtc or fdtdump
>> it.
>>
>> The outputed dts format is compatile with 'dtc -I dtb -O dts xxx.dtb'.
>> There's a new parameter 'dumpdts' which is similar to 'dumpdtb'. so try
>> it like '-machine dumpdts=/tmp/xxx.dts'.
> 
> Hi. Thanks for this patchset, but I'm afraid this doesn't
> seem to me like something that should be in QEMU.
> As you say, you can easily turn the dtb blob into a source file
> with dtc. That gets you a definitely-correct disassembly of the
> blob, and we don't need to maintain a possibly-buggy
> reimplementation of the dtb disassembler in QEMU.
> 
> thanks
> -- PMM
> 

That makes sense. It's mostly used for debugging.

Best regards
Li.

> 




Re: [Qemu-devel] [PATCH] qemu-char: fix terminal crash when using "-monitor stdio -nographic"

2014-08-27 Thread Li Liu


On 2014/8/27 14:44, Markus Armbruster wrote:
> "john.liuli"  writes:
> 
>> From: Li Liu 
>>
>> Eeay to reproduce, just try "qemu -monitor stdio -nographic"
>> and type "quit", then the terminal will be crashed.
>>
>> There are two pathes try to call tcgetattr of stdio in vl.c:
>>
>> 1) Monitor_parse(optarg, "readline");
>>.
>>qemu_opts_foreach(qemu_find_opts("chardev"),
>>  chardev_init_func, NULL, 1) != 0)
>>
>> 2) if (default_serial)
>>add_device_config(DEV_SERIAL, "stdio");
>>
>>if (foreach_device_config(DEV_SERIAL, serial_parse) < 0)
>>
>> Both of them will trigger qemu_chr_open_stdio which will disable
>> ECHO attributes. First one has updated the attributes of stdio
>> by calling qemu_chr_fe_set_echo(chr, false). And the tty
>> attributes has been saved in oldtty. Then the second path will
>> redo such actions, and the oldtty is overlapped. So till "quit",
>> term_exit can't recove the correct attributes.
>>
>> Signed-off-by: Li Liu 
> 
> Yes, failure to restore tty settings is a bug.
> 
> But is having multiple character devices use the same terminal valid?

I'm not sure. But I found this comment in vl.c:
"According to documentation and historically, -nographic redirects
serial port, parallel port and monitor to stdio"

Best regards
Li.

> If no, can we catch and reject the attempt?
> 
> [...]
> 
> 




Re: [Qemu-devel] [PATCH v2 1/2] device_tree.c: redirect load_device_tree err message to stderr

2014-08-28 Thread Li Liu
Could this patchset be applied via -trivial? Thank you!

Best regards,
Li.

On 2014/8/26 14:38, john.liuli wrote:
> From: Li Liu 
> 
> Reviewed-by: Peter Crosthwaite 
> Signed-off-by: Li Liu 
> ---
> changes v1 -> v2:
> 1) fix indent issue as peter suggested.
> 2) dump all err mesages with error_report. 
> 
> ---
>  device_tree.c |   15 ---
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/device_tree.c b/device_tree.c
> index ca83504..9d47195 100644
> --- a/device_tree.c
> +++ b/device_tree.c
> @@ -20,6 +20,7 @@
>  
>  #include "config.h"
>  #include "qemu-common.h"
> +#include "qemu/error-report.h"
>  #include "sysemu/device_tree.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/loader.h"
> @@ -79,8 +80,8 @@ void *load_device_tree(const char *filename_path, int *sizep)
>  *sizep = 0;
>  dt_size = get_image_size(filename_path);
>  if (dt_size < 0) {
> -printf("Unable to get size of device tree file '%s'\n",
> -filename_path);
> +error_report("Unable to get size of device tree file '%s'",
> + filename_path);
>  goto fail;
>  }
>  
> @@ -92,21 +93,21 @@ void *load_device_tree(const char *filename_path, int *sizep)
>  
>  dt_file_load_size = load_image(filename_path, fdt);
>  if (dt_file_load_size < 0) {
> -printf("Unable to open device tree file '%s'\n",
> -   filename_path);
> +error_report("Unable to open device tree file '%s'",
> + filename_path);
>  goto fail;
>  }
>  
>  ret = fdt_open_into(fdt, fdt, dt_size);
>  if (ret) {
> -printf("Unable to copy device tree in memory\n");
> +error_report("Unable to copy device tree in memory");
>  goto fail;
>  }
>  
>  /* Check sanity of device tree */
>  if (fdt_check_header(fdt)) {
> -printf ("Device tree file loaded into memory is invalid: %s\n",
> -filename_path);
> +error_report("Device tree file loaded into memory is invalid: %s",
> + filename_path);
>  goto fail;
>  }
>  *sizep = dt_size;
> 




Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-10-17 Thread Li Liu


On 2014/10/15 22:39, GAUGUEY Rémy 228890 wrote:
> Hello,
> 
> Using this Qemu patchset as well as recent irqfd work, I’ve tried to make 
> vhost-net working on Cortex-A15.
> Unfortunately, even if I can correctly generate irqs to the guest through 
> irqfd, it seems to me that some pieces are still missing….
> Indeed, virtio mmio interrupt status register (@ offset 0x60) is not updated 
> by vhost thread, and reading it or writing to the peer interrupt ack register 
> (offset 0x64) from the guest causes an VM exit …
> 

Yeah, you are correct. But it's not far from success once irqs are injected
into the guest through irqfd. Do the following to let the guest receive packets
correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in the guest's virtio_mmio.c:

static irqreturn_t vm_interrupt(int irq, void *opaque)
{
        ..

        /* Read and acknowledge interrupts */
        /*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
        writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

        if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
                        && vdrv && vdrv->config_changed) {
                vdrv->config_changed(&vm_dev->vdev);
                ret = IRQ_HANDLED;
        }*/

        //if (likely(status & VIRTIO_MMIO_INT_VRING)) {
        spin_lock_irqsave(&vm_dev->lock, flags);
        list_for_each_entry(info, &vm_dev->virtqueues, node)
                ret |= vring_interrupt(irq, info->vq);
        spin_unlock_irqrestore(&vm_dev->lock, flags);
        //}

        return ret;
}

This is very rough :), and a lot of coding work still needs to be done.

Li.

> After reading older posts, I understand that vhost-net with irqfd support 
> could only work with MSI-X support :
> 
> On 01/20/2011 09:35 AM, Michael S. Tsirkin wrote:
> “When MSI is off, each interrupt needs to be bounced through the io thread 
> when it's set/cleared, so vhost-net causes more context switches and
> higher CPU utilization than userspace virtio which handles networking in the 
> same thread.
> “
> Indeed, in case of MSI-X support, Virtio spec indicates that the ISR Status 
> field is unused…
> 
> I understand that Vhost does not emulate a complete virtio PCI adapter but 
> only manage virtqueue operations.
> However I don’t have a clear view of what is performed by Qemu and what is 
> performed by vhost-thread…
> Could someone highlight me on this point, and maybe give some clues for an 
> implementation of Vhost with irqfd and without MSI support ???
> 
> Thanks a lot in advance.
> Best regards.
> Rémy
> 
> 
> 
> De : kvmarm-boun...@lists.cs.columbia.edu 
> [mailto:kvmarm-boun...@lists.cs.columbia.edu] De la part de Yingshiuan Pan
> Envoyé : vendredi 15 août 2014 09:25
> À : Li Liu
> Cc : kvm...@lists.cs.columbia.edu; k...@vger.kernel.org; qemu-devel
> Objet : Re: [Qemu-devel] The status about vhost-net on kvm-arm?
> 
> Hi, Li,
> 
> It's ok, I did get those mails from mailing list. I guess it was because I 
> did not subscribe some of mailing lists.
> 
> Currently, I think I will not have any plan to renew my patcheset since I 
> have resigned from my previous company, I do not have Cortex-A15 platform to 
> test/verify.
> 
> I'm fine with that, it would be great if you or someone can take it and 
> improve it.
> Thanks.
> 
> 
> Best Regards,
> Yingshiuan Pan
> 
> 2014-08-15 11:04 GMT+08:00 Li Liu 
> mailto:john.li...@huawei.com>>:
> Hi Ying-Shiuan Pan,
> 
> I don't know why for missing your mail in mailbox. Sorry about that.
> The results of vhost-net performance have been attached in another mail.
> 
> Do you have a plan to renew your patchset to support irqfd. If not,
> we will try to finish it based on yours.
> 
> On 2014/8/14 11:50, Li Liu wrote:
>>
>>
>> On 2014/8/13 19:25, Nikolay Nikolaev wrote:
>>> On Wed, Aug 13, 2014 at 12:10 PM, Nikolay Nikolaev
>>> mailto:n.nikol...@virtualopensystems.com>>
>>>  wrote:
>>>> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
>>>> mailto:n.nikol...@virtualopensystems.com>>
>>>>  wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>>
>>>>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu 
>>>>> mailto:john.li...@huawei.com>> wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Is anyone there can tell the current status of vhost-net on kvm-arm?
>>>>>>
>>>>>> Half a year has passed from Isa Ansharullah asked this question:
>>>>>> h

Re: [Qemu-devel] The status about vhost-net on kvm-arm?

2014-10-23 Thread Li Liu


On 2014/10/17 20:49, GAUGUEY Rémy 228890 wrote:
> Thanks for your feedback, 
> 
>> static irqreturn_t vm_interrupt(int irq, void *opaque) {
>>  ..
>>
>>  /* Read and acknowledge interrupts */
>>  /*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
>>  writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
>>
>>  if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
>>  && vdrv && vdrv->config_changed) {
>>  vdrv->config_changed(&vm_dev->vdev);
>>  ret = IRQ_HANDLED;
>>  }*/
>>
>>  //if (likely(status & VIRTIO_MMIO_INT_VRING)) {
>>  spin_lock_irqsave(&vm_dev->lock, flags);
>>  list_for_each_entry(info, &vm_dev->virtqueues, node)
>>  ret |= vring_interrupt(irq, info->vq);
>>  spin_unlock_irqrestore(&vm_dev->lock, flags);
>>  //}
>>
>>  return ret;
>> }
>>
>> This is very roughly :), and a lot of coding things need to be done.
> 
> I agree ;-)
> Anyway, with this "workaround" you disable the control plane interrupt, which 
> is needed to bring up/down the virtio link... unless VIRTIO_NET_F_STATUS 
> feature is off.
> I was thinking about connecting those 2 registers to an ioeventfd in order to 
> emulate them in Vhost and bypass Qemu... but AFAIK ioeventfd can only work 
> with "write" registers.
> Any idea for a long term solution ?
> 

Yes, how to emulate MSI-X is the point, and I am still sleeping on it.
Does anyone have good ideas?

Li.

> best regards.
> Rémy
> 
> -Message d'origine-
> De : Li Liu [mailto:john.li...@huawei.com] 
> Envoyé : vendredi 17 octobre 2014 14:27
> À : GAUGUEY Rémy 228890; Yingshiuan Pan
> Cc : kvm...@lists.cs.columbia.edu; k...@vger.kernel.org; qemu-devel
> Objet : Re: [Qemu-devel] The status about vhost-net on kvm-arm?
> 
> 
> 
> On 2014/10/15 22:39, GAUGUEY Rémy 228890 wrote:
>> Hello,
>>
>> Using this Qemu patchset as well as recent irqfd work, I’ve tried to make 
>> vhost-net working on Cortex-A15.
>> Unfortunately, even if I can correctly generate irqs to the guest through 
>> irqfd, it seems to me that some pieces are still missing….
>> Indeed, virtio mmio interrupt status register (@ offset 0x60) is not 
>> updated by vhost thread, and reading it or writing to the peer 
>> interrupt ack register (offset 0x64) from the guest causes an VM exit 
>> …
>>
> 
> Yeah, you are correct. But it's not far away from success if have injected 
> irqs to the guest through irqfd. Do below things to let guest receive packets 
> correctly without checking VIRTIO_MMIO_INTERRUPT_STATUS in guest 
> virtio_mmio.c:
> 
> static irqreturn_t vm_interrupt(int irq, void *opaque) {
>   ..
> 
>   /* Read and acknowledge interrupts */
>   /*status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
>   writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
> 
>   if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)
>   && vdrv && vdrv->config_changed) {
>   vdrv->config_changed(&vm_dev->vdev);
>   ret = IRQ_HANDLED;
>   }*/
> 
>   //if (likely(status & VIRTIO_MMIO_INT_VRING)) {
>   spin_lock_irqsave(&vm_dev->lock, flags);
>   list_for_each_entry(info, &vm_dev->virtqueues, node)
>   ret |= vring_interrupt(irq, info->vq);
>   spin_unlock_irqrestore(&vm_dev->lock, flags);
>   //}
> 
>   return ret;
> }
> 
> This is very roughly :), and a lot of coding things need to be done.
> 
> Li.
> 
>> After reading older posts, I understand that vhost-net with irqfd support 
>> could only work with MSI-X support :
>>
>> On 01/20/2011 09:35 AM, Michael S. Tsirkin wrote:
>> “When MSI is off, each interrupt needs to be bounced through the io 
>> thread when it's set/cleared, so vhost-net causes more context switches and 
>> higher CPU utilization than userspace virtio which handles networking in the 
>> same thread.
>> “
>> Indeed, in case of MSI-X support, Virtio spec indicates that the ISR 
>> Status field is unused…
>>
>> I understand that Vhost does not emulate a complete virtio PCI adapter but 
>> only manage virtqueue operations.
>> However I don’t have a clear view of what is performed by Qemu and 
>> what is performed by vhost-thread… Could someone highlight me on this point, 
>> and maybe 

Re: [Qemu-devel] [RFC PATCH 0/2] virtio-mmio: add irqfd support for vhost-net based on virtio-mmio

2014-10-27 Thread Li Liu


On 2014/10/26 19:52, Michael S. Tsirkin wrote:
> On Sat, Oct 25, 2014 at 04:24:52PM +0800, john.liuli wrote:
>> From: Li Liu 
>>
>> This set of patches try to implemet irqfd support of vhost-net 
>> based on virtio-mmio.
>>
>> I had posted a mail to talking about the status of vhost-net 
>> on kvm-arm refer to http://www.spinics.net/lists/kvm-arm/msg10804.html.
>> Some dependent patches are listed in the mail too. Basically the 
>> vhost-net brings great performance improvements, almost 50%+.
>>
>> It's easy to implement irqfd support with PCI MSI-X. But till 
>> now arm32 do not provide equivalent mechanism to let a device 
>> allocate multiple interrupts. And even the aarch64 provid LPI
>> but also not available in a short time.
>>
>> As Gauguey Remy said "Vhost does not emulate a complete virtio 
>> adapter but only manage virtqueue operations". Vhost module
>> don't update the ISR register, so if with only one irq then it's 
>> no way to get the interrupt reason even we can inject the 
>> irq correctly.  
> 
> Well guests don't read ISR in MSI-X mode so why does it help
> to set the ISR bit?

Yeah, vhost doesn't need to set the ISR in MSI-X mode. But on ARM,
without an MSI-X-like mechanism, the guest can't get the interrupt
reason through the single irq handler (with one gsi resource).

So I build a shared memory region, filled by qemu, to provide the
interrupt reason instead of the ISR register, without bothering vhost.
Then even with only one irq and one irq handler, the guest can still
distinguish why the irq occurred.
Li.

> 
>> To get the interrupt reason to support such VIRTIO_NET_F_STATUS 
>> features I add a new register offset VIRTIO_MMIO_ISRMEM which 
>> will help to establish a shared memory region between qemu and 
>> virtio-mmio device. Then the interrupt reason can be accessed by
>> guest driver through this region. At the same time, the virtio-mmio 
>> dirver check this region to see irqfd is supported or not during 
>> the irq handler registration, and different handler will be assigned.
>>
>> I want to know it's the right direction? Does it comply with the 
>> virtio-mmio spec.? Or anyone have more good ideas to emulate mis-x 
>> based on virtio-mmio? I hope to get feedback and guidance.
>> Thx for any help.
>>
>> Li Liu (2):
>>   Add a new register offset let interrupt reason available
>>   Assign a new irq handler while irqfd enabled
>>
>>  drivers/virtio/virtio_mmio.c |   55 
>> +++---
>>  include/linux/virtio_mmio.h  |3 +++
>>  2 files changed, 55 insertions(+), 3 deletions(-)
>>
>> -- 
>> 1.7.9.5
>>
> 
> .
> 




Re: [Qemu-devel] [RFC PATCH 2/2] Assign a new irq handler while irqfd enabled

2014-10-27 Thread Li Liu


On 2014/10/26 19:56, Michael S. Tsirkin wrote:
> On Sat, Oct 25, 2014 at 04:24:54PM +0800, john.liuli wrote:
>> From: Li Liu 
>>
>> This irq handler will get the interrupt reason from a
>> shared memory. And will be assigned only while irqfd
>> enabled.
>>
>> Signed-off-by: Li Liu 
>> ---
>>  drivers/virtio/virtio_mmio.c |   34 --
>>  1 file changed, 32 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
>> index 28ddb55..7229605 100644
>> --- a/drivers/virtio/virtio_mmio.c
>> +++ b/drivers/virtio/virtio_mmio.c
>> @@ -259,7 +259,31 @@ static irqreturn_t vm_interrupt(int irq, void *opaque)
>>  return ret;
>>  }
>>  
>> +/* Notify all virtqueues on an interrupt. */
>> +static irqreturn_t vm_interrupt_irqfd(int irq, void *opaque)
>> +{
>> +struct virtio_mmio_device *vm_dev = opaque;
>> +struct virtio_mmio_vq_info *info;
>> +unsigned long status;
>> +unsigned long flags;
>> +irqreturn_t ret = IRQ_NONE;
>>  
>> +/* Read the interrupt reason and reset it */
>> +status = *vm_dev->isr_mem;
>> +*vm_dev->isr_mem = 0x0;
> 
> you are reading and modifying shared memory
> without atomics and any memory barriers.
> Why is this safe?
> 

good catch, a stupid mistake.

>> +
>> +if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)) {
>> +virtio_config_changed(&vm_dev->vdev);
>> +ret = IRQ_HANDLED;
>> +}
>> +
>> +spin_lock_irqsave(&vm_dev->lock, flags);
>> +list_for_each_entry(info, &vm_dev->virtqueues, node)
>> +ret |= vring_interrupt(irq, info->vq);
>> +spin_unlock_irqrestore(&vm_dev->lock, flags);
>> +
>> +return ret;
>> +}
>>  
>>  static void vm_del_vq(struct virtqueue *vq)
>>  {
> 
> So you invoke callbacks for all VQs.
> This won't scale well as the number of VQs grows, will it?
> 
>> @@ -391,6 +415,7 @@ error_available:
>>  return ERR_PTR(err);
>>  }
>>  
>> +#define VIRTIO_MMIO_F_IRQFD (1 << 7)
>>  static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>> struct virtqueue *vqs[],
>> vq_callback_t *callbacks[],
>> @@ -400,8 +425,13 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned nvqs,
>>  unsigned int irq = platform_get_irq(vm_dev->pdev, 0);
>>  int i, err;
>>  
>> -err = request_irq(irq, vm_interrupt, IRQF_SHARED,
>> -dev_name(&vdev->dev), vm_dev);
>> +if (*vm_dev->isr_mem & VIRTIO_MMIO_F_IRQFD) {
>> +err = request_irq(irq, vm_interrupt_irqfd, IRQF_SHARED,
>> +  dev_name(&vdev->dev), vm_dev);
>> +} else {
>> +err = request_irq(irq, vm_interrupt, IRQF_SHARED,
>> +  dev_name(&vdev->dev), vm_dev);
>> +}
>>  if (err)
>>  return err;
> 
> 
> So still a single interrupt for all VQs.
> Again this doesn't scale: a single CPU has to handle
> interrupts for all of them.
> I think you need to find a way to get per-VQ interrupts.

Yeah, AFAIK it's impossible to distribute the work across different CPUs with
only one irq and no MSI-X-like mechanism. Assigning multiple GSIs to one
device is obviously wasteful and not scalable. Any ideas? Thx.

> 
>> -- 
>> 1.7.9.5
>>
> 
> .
> 




Re: [Qemu-devel] [RFC PATCH 0/2] virtio-mmio: add irqfd support for vhost-net based on virtio-mmio

2014-10-27 Thread Li Liu


On 2014/10/27 17:37, Peter Maydell wrote:
> On 25 October 2014 09:24, john.liuli  wrote:
>> To get the interrupt reason to support such VIRTIO_NET_F_STATUS
>> features I add a new register offset VIRTIO_MMIO_ISRMEM which
>> will help to establish a shared memory region between qemu and
>> virtio-mmio device. Then the interrupt reason can be accessed by
>> guest driver through this region. At the same time, the virtio-mmio
>> driver checks this region to see whether irqfd is supported or not during
>> the irq handler registration, and a different handler will be assigned.
> 
> If you want to add a new register you should probably propose
> an update to the virtio spec. However, it seems to me it would
> be better to get generic PCI/PCIe working on the ARM virt
> board instead; then we can let virtio-mmio quietly fade away.
> This has been on the todo list for ages (and there have been
> RFC patches posted for plain PCI), it's just nobody's had time
> to work on it.
> 
> thanks
> -- PMM
> 

So you mean virtio-mmio will be replaced by PCI/PCIe on ARM in the end?
If so, let this patch go with the wind :). Thx.

Li.
> .
> 




Re: [Qemu-devel] [PATCH] numa: fix qerror_report_err not free issue

2014-08-29 Thread Li Liu


On 2014/8/30 13:26, Michael Tokarev wrote:
> 30.08.2014 07:36, john.liuli wrote:
>> From: Li Liu 
>>
>> All non-NULL pointers returned by qerror_report_err need to
>> be freed, otherwise memory will leak.
>>
>> Although this spot does not leak in practice because of the exit,
>> it's obviously not correct to use qerror_report_err
>> without calling error_free on the result.
> 
> I don't thing there's any good reason to free resources like
> this (freeing memory, closing files, etc) right before exit()
> (esp. in error path).  The OS will do that for us in one go
> much faster.
> 

Yes, the OS will do that for us. But if someone refers to this code
when using qerror_report_err, it will give a bad hint. Of course this
patch is just a suggestion.

Best regards
Li.

> /mjt
> 
> 




Re: [Qemu-devel] [PATCH] qemu-char: fix terminal crash when using "-monitor stdio -nographic"

2014-09-04 Thread Li Liu
Ping, any more comments? Thanks.

On 2014/8/27 15:40, Li Liu wrote:
> 
> 
> On 2014/8/27 14:44, Markus Armbruster wrote:
>> "john.liuli"  writes:
>>
>>> From: Li Liu 
>>>
>>> Easy to reproduce, just try "qemu -monitor stdio -nographic"
>>> and type "quit", then the terminal will be crashed.
>>>
>>> There are two paths that try to call tcgetattr of stdio in vl.c:
>>>
>>> 1) Monitor_parse(optarg, "readline");
>>>.
>>>qemu_opts_foreach(qemu_find_opts("chardev"),
>>>  chardev_init_func, NULL, 1) != 0)
>>>
>>> 2) if (default_serial)
>>>add_device_config(DEV_SERIAL, "stdio");
>>>
>>>if (foreach_device_config(DEV_SERIAL, serial_parse) < 0)
>>>
>>> Both of them will trigger qemu_chr_open_stdio, which disables the
>>> ECHO attribute. The first one has already updated the attributes
>>> of stdio by calling qemu_chr_fe_set_echo(chr, false), and the
>>> original tty attributes have been saved in oldtty. Then the second
>>> path redoes these actions, and oldtty is overwritten. So at "quit",
>>> term_exit can't recover the correct attributes.
>>>
>>> Signed-off-by: Li Liu 
>>
>> Yes, failure to restore tty settings is a bug.
>>
>> But is having multiple character devices use the same terminal valid?
> 
> I'm not sure. But I have found such comments in vl.c
> "According to documentation and historically, -nographic redirects
> serial port, parallel port and monitor to stdio"
> 
> Best regards
> Li.
> 
>> If no, can we catch and reject the attempt?
>>
>> [...]
>>
>>
> 
> 
> 
> 




Re: [Qemu-devel] [PATCH] qemu-char: fix terminal crash when using "-monitor stdio -nographic"

2014-09-08 Thread Li Liu


On 2014/9/5 17:31, Gerd Hoffmann wrote:
> On Fr, 2014-09-05 at 11:04 +0200, Markus Armbruster wrote:
>> Li Liu  writes:
>>
>>> Ping, any more comments? Thanks.
>>
>> I'd like to hear Gerd's opinion (cc'ed).
>>
>>>>> But is having multiple character devices use the same terminal valid?
> 
> No (guess we should catch that case in stdio init).
> 
> Beside the tty initialization and cleanup you also have the problem that
> both users are racing for input.  Well, maybe not in the qemu case as it
> is the same process and it very well might be that it polls the two
> chardevs in a well defined order, so one of them gets all input and the
> other gets nothing.  With two processes reading from the terminal (try
> 'cat | less') it is actually random though.
> 
>>>> I'm not sure. But I have found such comments in vl.c
>>>> "According to documentation and historically, -nographic redirects
>>>> serial port, parallel port and monitor to stdio"
> 
> In that case mux chardev is used (that is the piece which handles the
> input switching between serial and monitor via 'Ctrl-A c').  There is
> one stdio instance, and one mux instance, the mux is chained to stdio,
> and mux allows multiple backends to connect.
> 
> You can construct it on the command line this way:
> 
> qemu -nographic -nodefaults \
>-chardev stdio,mux=on,id=terminal \
>-serial chardev:terminal \
>-monitor chardev:terminal
> 
> [ serial is default, so no output here, unless you boot a guest
>   with serial console configured ]
> 
> [ Hit 'Ctrl-A h' now ]
> 
> C-a h    print this help
> C-a x    exit emulator
> C-a s    save disk data back to file (if -snapshot)
> C-a t    toggle console timestamps
> C-a b    send break (magic sysrq)
> C-a c    switch between console and monitor
> C-a C-a  sends C-a
> 
> [ Hit 'Ctrl-A c' now ]
> 
> QEMU 2.1.50 monitor - type 'help' for more information
> (qemu) info chardev
> terminal: filename=mux
> terminal-base: filename=stdio
> (qemu) 
> 
> HTH,
>   Gerd
> 

Appreciate your detailed answer. Thank you very much.

Li.

> 
> 
> .
> 




Re: [Qemu-devel] [PATCH 1/2] vl: Fix the confused logic for '-m' option

2014-09-11 Thread Li Liu


On 2014/9/12 13:58, zhanghailiang wrote:
> It should be valid for the follow configure:
> -m 256,slots=0
> -m 256,maxmem=256M
> -m 256,slots=0,maxmem=256M
> -m 256,slots=x,maxmem=y  where x > 0 and y > 256M
> 
> Fix the confused code logic and use error_report instead of fprintf.
> 
> Printing the maxmem in hex, same with ram_size.
> 
> Signed-off-by: zhanghailiang 
> ---
>  vl.c | 46 +++---
>  1 file changed, 27 insertions(+), 19 deletions(-)
> 
> diff --git a/vl.c b/vl.c
> index 9c9acf5..f547405 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -3306,6 +3306,7 @@ int main(int argc, char **argv, char **envp)
>  break;
>  case QEMU_OPTION_m: {
>  uint64_t sz;
> +uint64_t slots;
>  const char *mem_str;
>  const char *maxmem_str, *slots_str;
>  
> @@ -3353,40 +3354,47 @@ int main(int argc, char **argv, char **envp)
>  
>  maxmem_str = qemu_opt_get(opts, "maxmem");
>  slots_str = qemu_opt_get(opts, "slots");
> -if (maxmem_str && slots_str) {
> -uint64_t slots;
> -
> +if (maxmem_str) {
>  sz = qemu_opt_get_size(opts, "maxmem", 0);
> +}
> +if (slots_str) {
> +slots = qemu_opt_get_number(opts, "slots", 0);
> +}
> +if (maxmem_str && slots_str) {
>  if (sz < ram_size) {
>> -fprintf(stderr, "qemu: invalid -m option value: maxmem "
> -"(%" PRIu64 ") <= initial memory ("
> +error_report("qemu: invalid -m option value: maxmem "
> +"(%" PRIx64 ") < initial memory ("
>  RAM_ADDR_FMT ")\n", sz, ram_size);

error_report adds a '\n' automatically, so the trailing "\n" should be dropped. The lines below have the same issue.

>  exit(EXIT_FAILURE);
>  }
> -
> -slots = qemu_opt_get_number(opts, "slots", 0);
> -if ((sz > ram_size) && !slots) {
> -fprintf(stderr, "qemu: invalid -m option value: maxmem "
> -"(%" PRIu64 ") more than initial memory ("
> +if (!slots && (sz != ram_size)) {
> +error_report("qemu: invalid -m option value: maxmem "
> +"(%" PRIx64 ") more than initial memory ("
>  RAM_ADDR_FMT ") but no hotplug slots where "
>  "specified\n", sz, ram_size);
>  exit(EXIT_FAILURE);
>  }
> -
> -if ((sz <= ram_size) && slots) {
> -fprintf(stderr, "qemu: invalid -m option value:  %"
> +if (slots && (sz == ram_size)) {
> +error_report("qemu: invalid -m option value:  %"
>  PRIu64 " hotplug slots where specified but "
> -"maxmem (%" PRIu64 ") <= initial memory ("
> +"maxmem (%" PRIx64 ") = initial memory ("
>  RAM_ADDR_FMT ")\n", slots, sz, ram_size);
>  exit(EXIT_FAILURE);
>  }
>  maxram_size = sz;
>  ram_slots = slots;
> -} else if ((!maxmem_str && slots_str) ||
> -   (maxmem_str && !slots_str)) {
> -fprintf(stderr, "qemu: invalid -m option value: missing "
> -"'%s' option\n", slots_str ? "maxmem" : "slots");
> -exit(EXIT_FAILURE);
> +} else if (!maxmem_str && slots_str) {
> +if (slots > 0) {
> +error_report("qemu: invalid -m option value: missing "
> +"'maxmem' option\n");
> +exit(EXIT_FAILURE);
> +}
> +} else if (maxmem_str && !slots_str) {
> +if (sz != ram_size) {
> +error_report("qemu: invalid -m option value: missing "
> +"'slot' option\n");
> +exit(EXIT_FAILURE);
> +}
>  }
>  break;
>  }
> 




Re: [Qemu-devel] [Qemu-trivial] [PATCH v2] qemu-char: fix terminal crash when using "-monitor stdio -nographic"

2014-09-15 Thread Li Liu


On 2014/9/15 20:57, Michael Tokarev wrote:
> 15.09.2014 16:50, Michael Tokarev wrote:
>> 09.09.2014 15:19, john.liuli wrote:
>>> From: Li Liu 
>>>
>>> Easy to reproduce, just try "qemu -monitor stdio -nographic"
>>> and type "quit", then the terminal will be crashed.
>>>
>>> There are two paths that try to call tcgetattr of stdio in vl.c:
>>
>> This looks reasonable.  Except of one thing -- how about renaming
>> stdio_is_ready to stdio_in_use?  (I can do that when applying, no
>> need to resend anythnig).  Because, well, stdio_is_ready is not
>> obvious at all, at least to me... :)
> 
> And oh, the commit comment -- it is not 'terminal crash', it is
> 'terminal misbehavior' or something like that.  The terminal does not
> crash, it just does not have proper settings after qemu exits.
> 
> /mjt
> 

Thanks for your comments! Could you make those changes when applying?
Or should I resend the patch?

Best Regards,

Li.

> .
> 




Re: [Qemu-devel] [RFC PATCH 0/2] virtio-mmio: add irqfd support for vhost-net based on virtio-mmio

2014-11-06 Thread Li Liu


On 2014/11/6 9:59, Shannon Zhao wrote:
> 
> 
> On 2014/11/5 16:43, Eric Auger wrote:
>> On 10/27/2014 12:23 PM, Li Liu wrote:
>>>
>>>
>>> On 2014/10/27 17:37, Peter Maydell wrote:
>>>> On 25 October 2014 09:24, john.liuli  wrote:
>>>>> To get the interrupt reason to support such VIRTIO_NET_F_STATUS
>>>>> features I add a new register offset VIRTIO_MMIO_ISRMEM which
>>>>> will help to establish a shared memory region between qemu and
>>>>> virtio-mmio device. Then the interrupt reason can be accessed by
>>>>> guest driver through this region. At the same time, the virtio-mmio
>>>>> driver checks this region to see whether irqfd is supported or not during
>>>>> the irq handler registration, and a different handler will be assigned.
>>>>
>>>> If you want to add a new register you should probably propose
>>>> an update to the virtio spec. However, it seems to me it would
>>>> be better to get generic PCI/PCIe working on the ARM virt
>>>> board instead; then we can let virtio-mmio quietly fade away.
>>>> This has been on the todo list for ages (and there have been
>>>> RFC patches posted for plain PCI), it's just nobody's had time
>>>> to work on it.
>>>>
>>>> thanks
>>>> -- PMM
>>>>
>>>
>>> So you mean virtio-mmio will be replaced by PCI/PCIe on ARM at last?
>>> If so, let this patch go with the wind:). Thx.
>>
>> Hi,
>>
>> As a fix of current situation where ISR is only partially updated when
>> vhost-irqfd handles standard IRQ and waiting for PCI emuluation,
>> wouldn't it make sense to store ISR content on vhost driver side and
>> introduce ioctls to read/write it. When using vhost BE, virtio QEMU
>> device would use those ioctl to read/update the ISR content. On top of
>> that we would update the ISR in vhost before triggering the irqfd. If I
>> do not miss anything this would at least make things functional with irqfd.
>>
>> As a second step, we could try to introduce in-kernel emulation of
>> ISR/ACK to fix the performance issue related to going to user-side each
>> time ISR/ACK accesses are done.
>>
>> Do you think it is worth investigating this direction?
>>
> Hi,
> 
> About this problem I had a talk with Li Liu. As MST said, we could use
> multiple GSIs to support vhost-net with irqfd. And we have figured out a way
> to solve this problem. The method is the same as virtio-pci: assign
> multiple irqs to virtio-mmio. It can also support multiqueue virtio-net on
> arm.
> 
> Would you have a look at this method? Thank you very much.
> 
> - virtio-mmio: support for multiple irqs
> http://www.spinics.net/lists/kernel/msg1858860.html
> 
> Thanks,
> Shannon
> 

Yeah, I think multiple GSIs are more compatible with MSI-X. And even if
virtio-mmio fades away in the end, it still makes sense for ARM32, which
can't support PCI/PCIe.

BTW, this patch has been handed over to Shannon; please refer to the new patch at
http://www.spinics.net/lists/kernel/msg1858860.html.

Li.

>> Thank you in advance
>>
>> Best Regards
>>
>> Eric
>>
>>
>>>
>>> Li.
>>>> .
>>>>
> 
> 
> .
>