> From a brief look it looks like this would be doable, but the way
> these flags are being communicated is rather ugly (the values used here
> aren't part of the public interface, and hence it wasn't immediately
> clear whether using one of the unused bits would be an option, but
> it looks like
- Make QEMU set up the vectors when the table entries are unmasked,
  even if MSIX is not enabled.
- Provide a hypercall so QEMU can unmask MSIX vectors on behalf of
  the guest. This would be used to unmask the entries if MSIX is
  enabled with table entries already unmasked.
Neither sounds
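A minimal sketch of the two alternatives above, as an illustrative model only (the class and field names are assumptions, not QEMU or Xen code): option 1 binds a host vector as soon as a table entry is unmasked even while MSIX is still disabled, while option 2 defers until MSIX is enabled and then unmasks the already-unmasked entries on the guest's behalf.

# Illustrative model only -- not QEMU or Xen code. It sketches the two
# alternatives above for guests (e.g. the Windows driver here) that unmask
# MSIX table entries before setting the MSIX enable bit.

class MsixEntry:
    def __init__(self):
        self.masked = True          # per-entry mask bit (set after reset)
        self.vector_bound = False   # whether a host vector has been set up

class MsixModel:
    def __init__(self, num_entries, eager_setup):
        # eager_setup=True models option 1 (set up vectors on unmask even
        # while MSIX is disabled); False models option 2 (rely on a
        # hypothetical "unmask on behalf of the guest" operation at enable time).
        self.enabled = False
        self.eager_setup = eager_setup
        self.entries = [MsixEntry() for _ in range(num_entries)]

    def write_entry_mask(self, idx, masked):
        entry = self.entries[idx]
        entry.masked = masked
        if not masked and (self.enabled or self.eager_setup):
            entry.vector_bound = True    # option 1: bind even though not enabled

    def write_control_enable(self, enabled):
        self.enabled = enabled
        if not enabled:
            return
        for entry in self.entries:
            if not entry.masked and not entry.vector_bound:
                # option 2: at enable time, entries the guest already unmasked
                # still need an explicit unmask/setup on its behalf (the
                # hypercall proposed above).
                entry.vector_bound = True

# The ordering that triggers the problem: unmask first, enable MSIX second.
m = MsixModel(num_entries=4, eager_setup=False)
m.write_entry_mask(0, masked=False)
m.write_control_enable(True)
assert m.entries[0].vector_bound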
On Tue, 15 Aug 2017 11:55:10 +0200, Roger Pau Monné wrote:
Could you please try the patch below and paste the output you get on
the Xen console?
Output is in the attached file. Does it help?
Regards, Andreas
(XEN) MSIX ctrl write. Enabled: 0 Maskall: 0. Configured entries:
(XEN) MSIX ctrl write. E
On Mon, 14 Aug 2017 13:56:58 +0200, Roger Pau Monné wrote:
> I defined XEN_PT_LOGGING_ENABLED in xen_pt.h as requested without the
> "hack" patch. Log is attached. Does it help?
It tells me that there's nothing unexpected on that side. As I think I
had indicated before, we really need to see bo
On Mon, 31 Jul 2017 12:12:45 +0200, Jan Beulich wrote:
"Andreas Kinzler" 07/17/17 6:32 PM >>>
Jan, I still have access to the hardware so perhaps we can finally
solve this problem.
Feel free to go ahead; I'll be on vacation for the next three weeks.
Perhaps we can
On Thu, 27 Jul 2017 18:49:47 +0200, George Dunlap wrote:
Sorry, I think that this patch is just far too complicated. If you
really want to keep the "iptables is working" check (lines 1-7 of
function handle_iptable) then you should just move it inside the
claim_lock "iptables" section and yo
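A minimal sketch of the reordering suggested above, with a claim_lock-style exclusive lock modelled as a plain file lock (the lock path and helper names are assumptions, not the actual vif-common.sh code): the "is iptables working" probe only runs once the lock is held, so it cannot race with another hotplug script that is already driving iptables.

# Illustrative sketch, not the actual vif-common.sh logic: take the
# "iptables" lock first, then run the read-only "is iptables working"
# probe, so the probe cannot race with a concurrent hotplug script that
# is already manipulating iptables. The lock path is an assumption.

import fcntl
import subprocess

LOCK_PATH = "/var/run/xen-hotplug-iptables.lock"   # hypothetical path

def with_iptables_lock(fn):
    with open(LOCK_PATH, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)       # analogous to claim_lock "iptables"
        try:
            return fn()
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)   # analogous to release_lock

def iptables_is_working():
    # Read-only probe; it can still contend with concurrent iptables
    # invocations, which is why it belongs inside the lock section.
    probe = subprocess.run(["iptables", "-L", "-n"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
    return probe.returncode == 0

def handle_iptable(rules):
    def body():
        if not iptables_is_working():
            return False                 # bail out, as the existing check does
        for rule in rules:
            subprocess.run(["iptables"] + rule, check=True)
        return True
    return with_iptables_lock(body)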
On Thu, 27 Jul 2017 12:55:14 +0200, George Dunlap wrote:
For 4.9 we checked in a fix to this problem that would specifically
attempt to use the -w option if it was available; see c/s 3d2010f9ff.
Sorry, I think that this patch is just far too complicated. If you really
want to keep the "iptab
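The 4.9 fix referred to above (c/s 3d2010f9ff) is described as attempting to use iptables' -w option (wait for the xtables lock) when available. A rough sketch of that probe-and-fall-back idea, with all helper names assumed and no claim to match the actual changeset:

# Rough sketch of the probe-and-fall-back idea: use "iptables -w" (wait for
# the xtables lock) when the installed iptables supports it, otherwise fall
# back to plain invocations. Helper names are assumptions for illustration;
# this is not the code from c/s 3d2010f9ff.

import subprocess

def iptables_supports_wait():
    # Probe: a harmless listing with -w succeeds only if the option exists.
    try:
        probe = subprocess.run(["iptables", "-w", "-L", "-n"],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL)
    except FileNotFoundError:
        return False
    return probe.returncode == 0

IPTABLES_BASE = ["iptables", "-w"] if iptables_supports_wait() else ["iptables"]

def run_iptables(args):
    return subprocess.run(IPTABLES_BASE + list(args), check=True)

# Example: append an ACCEPT rule for a vif, serialized via -w when possible.
# run_iptables(["-A", "FORWARD", "-m", "physdev",
#               "--physdev-in", "vif1.0", "-j", "ACCEPT"])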
Hello Jan, Pasi, all
Jan, I still have access to the hardware so perhaps we can finally solve
this problem.
Feel free to go ahead; I'll be on vacation for the next three weeks.
Perhaps we can shortcut debugging a bit because I looked through the
patches of XenServer 7.2 and found the attach
Hello,
I noticed that PCI passthrough for an LSI SAS HBA 9211 no longer worked (at
least under Windows) when using Xen 4.8.1.
I then bisected through various released versions and finally I narrowed it
down to
4.5.5 (with qemu from Xen 4.6.5) -> working
4.6.0-rc1 (with qemu from Xen 4.6.5)
Hello
in /etc/xen/scripts/vif-common.sh there is a function handle_iptable. At its start there
is a check for a working iptables implementation. This check is outside the iptables lock
section (claim_lock "iptables"), and even though it is only a read-only operation,
the underlying iptables operatio
Somehow I cannot find *recent* information on dom0 kernels in the wiki.
Which dom0 kernels do the Xen developers use for test/production? Are
there recommended versions (for example 4.4) for production?
Are there any dependencies/incompatibilities between Xen versions and dom0
kernels (I am only
On 17.05.2016 17:34, Jan Beulich wrote:
We have used xenified kernels based on kernel 3.4 for years, and benchmarks
showed that they are faster than the pvops (vanilla) kernels.
But what is the current state in terms of performance and features?
I'm not sure what you expect here. Up to openSUSE 42.1 an
Hello Jan,
perhaps you can shed some light on the state of the xenified SUSE kernels
(http://kernel.opensuse.org/cgit/kernel-source).
We have used xenified kernels based on kernel 3.4 for years, and benchmarks
showed that they are faster than the pvops (vanilla) kernels.
But what is the current stat
Is this still current?
I made an interesting observation: I had no problems with SPICE and
vanilla Xen 4.5.1 when using it on Gentoo with glibc 2.19/gcc 4.6.4.
Segfaults started when I switched to glibc 2.20/gcc 4.9.3 - I did not
change Xen source code at all.
All this might be related to:
htt
I am currently validating 4.5.1-rc1 as a stable platform for production
environments. I perform a series of tests which stress the IO subsystems
(net+disk) to the max. For block IO I reach more than 1.5 gigabytes/sec.
The tests also hash (SHA1) all IO and verify it against known values so
I can
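A minimal sketch of the verification side of such a test, under the assumption that the data is generated from a deterministic seed (chunk size, file name and seeding scheme are illustrative choices, not the actual test harness): everything read back is hashed with SHA-1 and compared against the digest recorded at write time.

# Minimal sketch of a "hash all IO and verify it" check along the lines of
# the test described above. Chunk size, file name and seeding scheme are
# illustrative assumptions, not the actual test harness.

import hashlib

CHUNK = 1 << 20   # 1 MiB per read/write

def write_pattern(path, size, seed=b"xen-io-test"):
    """Write deterministic data and return its SHA-1 for later verification."""
    digest = hashlib.sha1()
    with open(path, "wb") as f:
        written = 0
        counter = 0
        while written < size:
            block = hashlib.sha1(seed + counter.to_bytes(8, "little")).digest()
            block = (block * (CHUNK // len(block) + 1))[:min(CHUNK, size - written)]
            f.write(block)
            digest.update(block)
            written += len(block)
            counter += 1
    return digest.hexdigest()

def verify(path, expected_sha1):
    """Read the data back (stressing the block path) and compare digests."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest() == expected_sha1

if __name__ == "__main__":
    expected = write_pattern("testfile.bin", 64 * 1024 * 1024)
    print("OK" if verify("testfile.bin", expected) else "CORRUPTION DETECTED")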
On 19.02.2015 12:20, Andrew Cooper wrote:
Is it perhaps
http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f1e0df14412ccc6933a68eda66ac5b7d89186b62
There are a number of correctness fixes in that range which will
adversely affect performance.
I now reverted:
http://xenbits.xen.org/gitwe
Hello Xen developers,
since we use Xen for our production systems, I run many tests on Xen
(stability/performance). One test now uncovered a serious performance
regression when updating from Xen 4.2.3 to 4.2.x (with x>=4). To
reproduce, run a domU (HVM) and compile a kernel, for example ("time m