Hi, all:
I am trying to improve the performance of netfront/netback, and I found
that there was some discussion about PV network performance improvement on the
devel mailing list ([1]). The proposals mentioned in [1] are helpful, such as
multi-page ring, multiqueue, etc., and some of them have
g "xl debug-keys", "xen-hvmctx", etc., but no help.
Thanks.
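As a footnote for anyone trying the same thing: a minimal sketch for checking whether multi-queue is actually in effect, assuming a dom0 with upstream xen-netback (3.16 or later, when multi-queue went in) and a guest vif named vif1.0 (names are examples, adjust for your setup):

    # number of queues netback is willing to offer per vif (module parameter)
    cat /sys/module/xen_netback/parameters/max_queues
    # per-queue counters on one vif, if the driver exposes them
    ethtool -S vif1.0 | grep -i queue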
---
Best Regards
Openlui
omains will be blocked after sending
several packets from the DomU. However, different from the last mail, there
are not any exception logs in the driver domain.
I think maybe it is a bug in Xen 4.5 + PCI passthrough; could anybody give
me some advice about how to debug and solve it?
NIC as driver domain, can VMDQ still be supported?
Thanks.
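In case it helps to reproduce the debugging, a rough sketch of what can be run from dom0 (assuming the driver domain is domid 1; adjust as needed):

    xl debug-keys q     # ask Xen to dump domain/vcpu state
    xl debug-keys i     # dump interrupt bindings
    xl dmesg            # read what the debug keys printed into Xen's console ring
    xl console 1        # attach to the driver domain's console, if one is set up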
--
openlui
Best Regards
At 2015-03-20 00:48:01, "Zoltan Kiss" wrote:
>
>
>On 19/03/15 03:40, openlui wrote:
>> Hi, all:
>>
>> I am trying to use an HVM with a PCI pass-through NIC as network driver domain.
>> However, when I send packets whose size is larger than 128 bytes from
0 eth14: <--- start TBDC dump --->
...
Comparing the lspci output before and after the network hang shows that the
Status field changed from "MAbort-" to "MAbort+":
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
The network cannot be recovered even after we reload the bnx2 module in the
driver domain.
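For reference, a sketch of the comparison and of a heavier-handed recovery than a plain module reload (the PCI address 0000:04:00.0 is only an example):

    lspci -vv -s 04:00.0 > before.txt   # taken while the NIC still works
    lspci -vv -s 04:00.0 > after.txt    # taken after the hang
    diff before.txt after.txt           # shows the MAbort- -> MAbort+ flip
    # remove the device and rescan the bus instead of only reloading bnx2:
    echo 1 > /sys/bus/pci/devices/0000:04:00.0/remove
    echo 1 > /sys/bus/pci/rescan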
--
openlui
Best Regards
Hi:
I want to enable qemu-dm's tracing in Xen. I have built qemu with the "make
debug=y tools" command, and found that during the build, qemu is configured with
the "trace_backend=stderr" option. However, there are no trace logs in
/var/log/qemu/qemu-dm-{domain_name}.log. I have also tried to cal
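A sketch of what I understand the stderr backend needs, assuming the upstream qemu-xen device model (not qemu-dm-traditional) and a hypothetical event name; trace events are off by default and must be enabled explicitly:

    # list the event names to enable, one per line
    echo "rtl8139_transmit" > /tmp/qemu-events   # hypothetical event name
    # then pass the file to qemu through the xl domain config:
    #   device_model_args = [ "-trace", "events=/tmp/qemu-events" ]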
At 2015-03-05 19:09:41, "Tim Deegan" wrote:
>Hi,
>
>At 10:54 +0800 on 05 Mar (1425549262), openlui wrote:
>> 2. From the trace info and qemu-dm's log, it seems that it is "GPA"
>> (Guest Physical Address) instead of "MFN" in the IOREQ'
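As a quick way to check this, one can look at the public ioreq definition in a Xen source tree (path relative to the tree root); the addr field is commented there:

    grep -n -A 12 "struct ioreq" xen/include/public/hvm/ioreq.h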
Hi, all:
I want to learn how the emulated NICs work in Xen. So I booted a DomU with an
emulated rtl8139 NIC, pinged the host from the DomU, and captured the trace info
using the xentrace tool, and then checked the log of qemu-dm and the trace info
analyzed by the xenalyze tool. I have enabled debug in rtl8139.c and added
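Roughly the capture that can be used here, in case someone wants to reproduce it (options as I read them in the xentrace/xenalyze man pages on 4.5; please double-check against your version):

    xentrace -D -T 10 /tmp/rtl8139.trace    # discard stale buffers, trace for 10s
    xenalyze --summary /tmp/rtl8139.trace   # summarize the captured records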
At 2015-02-27 19:30:20, "David Vrabel" wrote:
>On 27/02/15 10:59, Wei Liu wrote:
>>
>> Persistent grant is not a silver bullet. There is an email thread on the
>> list discussing whether it should be removed from the block driver.
>
>Persistent grants for to-guest network traffic is a flawed idea. It
>eith
At 2015-02-27 18:59:52, "Wei Liu" wrote:
>Cc'ing David (XenServer kernel maintainer)
>
>On Fri, Feb 27, 2015 at 05:21:11PM +0800, openlui wrote:
>> >On Mon, Dec 08, 2014 at 01:08:18PM +, Zhangleiqiang (Trump) wrote:
>> >> > On Mon, Dec 08, 2014
>On Mon, Dec 08, 2014 at 01:08:18PM +, Zhangleiqiang (Trump) wrote:
>> > On Mon, Dec 08, 2014 at 06:44:26AM +, Zhangleiqiang (Trump) wrote:
>> > > > On Fri, Dec 05, 2014 at 01:17:16AM +, Zhangleiqiang (Trump) wrote:
>> > > > [...]
>> > > > > > I think that's expected, because guest RX d
Hi, all:
I have tried PCI passthrough to a DomU in Xen. However, if we send packets to
the DomU for a while, there is a chance that the networking of the DomU will be
disconnected. The corresponding syslog messages are shown at the end of this
mail. And I found the following analysis at [1]; I am wonder
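A sketch of the first things worth checking when it disconnects (domid and device names are examples):

    xl pci-list 1                          # is the device still assigned to the DomU?
    xl dmesg | grep -i -e iommu -e vt-d    # DMA faults Xen may have logged
    # and inside the DomU itself: dmesg | tail; lspci -vv on the NIC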
Hi,
By using the "xl debug-keys" method or the "xentrace" command, we can dump
detailed info about physical irq binding and handling, and many lines in the
dumped dmesg info contain a "pirq" field. After looking at the code, I find
that "pirq" is the value of the "u.pirq.irq" field in struct evtchn.
Ho
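Concretely, this is what can be run from dom0 to get those dumps:

    xl debug-keys i            # dump irq bindings, including the pirq lines
    xl dmesg | grep -i pirq    # filter the dump down to the pirq fields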
Hi, all:
I want to compile a Dom0 kernel based on Linux 3.17.4, and I have made sure
that all the necessary config options for supporting Dom0 listed on the
official wiki ([1]) are enabled. However, when I test the "network forwarding"
performance of Dom0 using a traffic generator, I find that the performance
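For completeness, a quick way to sanity-check the wiki's options against the built kernel (run from the kernel build directory; the option names below are the usual ones, please compare with [1]):

    grep -E "CONFIG_XEN_DOM0|CONFIG_XEN_NETDEV_BACKEND|CONFIG_XEN_BLKDEV_BACKEND" .config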
Thanks for your reply, Konrad.
At 2015-01-23 22:29:48, "Konrad Rzeszutek Wilk" wrote:
>On Fri, Jan 23, 2015 at 03:54:07PM +0800, openlui wrote:
>> Hi, all:
>> From the article [1] and the xen-colors picture from Brendan Gregg's blog
>> [2], I have some under
Hi, all:
From the article [1] and the xen-colors picture from Brendan Gregg's blog [2],
I have some understanding and questions about PVH Dom0 as follows:
1. Even after pvops has been supported in the Linux mainline kernel, the current
Xen dom0 is still a "Full PV" domain (I will call it a "normal Dom
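As far as I understand (please correct me), on Xen 4.4/4.5 a PVH dom0 is experimental and opt-in via the dom0pvh hypervisor boot option; one way to check what a running dom0 actually is:

    xl info | grep xen_commandline   # was dom0pvh=1 passed to the hypervisor?
    xl dmesg | grep -i pvh           # any PVH-related boot messages from Xen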