d setting of them.
>
> XEN_SMAP being wrong post-boot is a problem specifically for live
> patching, as a live patch may need alternative instruction patching
> keyed off of that feature flag.
>
> Reported-by: Sarah Newman
> Signed-off-by: Jan Beulich
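For context, the "alternative instruction patching keyed off of that feature flag" mentioned above refers to Xen's boot-time alternatives mechanism. The fragment below is only a from-memory sketch of what an SMAP-keyed alternative site typically looks like; the stac() wrapper and exact macro arguments are recalled for illustration, not quoted from the tree under discussion:

/* Sketch only: the shape of a feature-keyed alternative.  When alternatives
 * are applied, the "stac" instruction replaces the empty original only if
 * the synthetic X86_FEATURE_XEN_SMAP flag is set, so a live patch containing
 * a site like this relies on that flag still being correct after boot. */
static inline void stac(void)
{
    asm volatile ( ALTERNATIVE("", "stac", X86_FEATURE_XEN_SMAP)
                   : : : "memory" );
}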
Reported-by/Tested-
On 06/20/2017 01:24 AM, Jan Beulich wrote:
On 20.06.17 at 01:39, wrote:
>> I have gotten messages like this sporadically in the qemu-dm log for stub
>> domains, both at domain start and domain reboot:
>>
>> evtchn_open() -> 7
>> ERROR: bind_interdomain failed with rc=-22xenevtchn_bind_interd
I have gotten messages like this sporadically in the qemu-dm log for stub
domains, both at domain start and domain reboot:
evtchn_open() -> 7
ERROR: bind_interdomain failed with rc=-22xenevtchn_bind_interdomain(121, 0) =
-22
bind interdomain ioctl error 22
Unable to find x86 CPU definition
close
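To make the failing call concrete, here is a minimal sketch using the public libxenevtchn API. The helper name and error handling are mine, not qemu-dm's; only xenevtchn_open(), xenevtchn_bind_interdomain(), xenevtchn_unbind() and xenevtchn_close() are real entry points. rc=-22 is -EINVAL, which would fit the remote port of 0 shown in "xenevtchn_bind_interdomain(121, 0)".

/* Minimal sketch of the call the log lines above come from; the wrapper is
 * hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <xenevtchn.h>

static int bind_to_guest(uint32_t domid, uint32_t remote_port)
{
    /* Presumably what "evtchn_open() -> 7" logs: opening the evtchn handle. */
    xenevtchn_handle *xce = xenevtchn_open(NULL, 0);
    int port;

    if (!xce)
        return -1;

    port = xenevtchn_bind_interdomain(xce, domid, remote_port);
    if (port < 0) {
        fprintf(stderr, "bind_interdomain(%u, %u) failed\n", domid, remote_port);
        xenevtchn_close(xce);
        return -1;
    }

    /* ... use 'port' (and xce) for notifications, then clean up ... */
    xenevtchn_unbind(xce, port);
    xenevtchn_close(xce);
    return 0;
}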
On 06/13/2017 10:28 AM, Sarah Newman wrote:
> On 06/13/2017 10:08 AM, Wei Liu wrote:
>> On Tue, Jun 13, 2017 at 05:56:26PM +0100, Wei Liu wrote:
>>> On Tue, Jun 13, 2017 at 09:29:22AM -0700, Sarah Newman wrote:
>>>> Hi,
>>>>
>>>> With xen
On 06/13/2017 10:08 AM, Wei Liu wrote:
> On Tue, Jun 13, 2017 at 05:56:26PM +0100, Wei Liu wrote:
>> On Tue, Jun 13, 2017 at 09:29:22AM -0700, Sarah Newman wrote:
>>> Hi,
>>>
>>> With xen 4.8.1, I got the error message:
>>>
>>> libxl: err
Hi,
With xen 4.8.1, I got the error message:
libxl: error: libxl_dom.c:60:libxl__domain_cpupool: got info for dom2098,
wanted dom2097
: No such file or directory
This was while creating an HVM domain with a stub domain, probably concurrently
with creating a PV domain. The domains were created as 2
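As a guess at why a concurrent domain creation matters here: xc_domain_getinfolist() returns information for the first domain whose domid is greater than or equal to the one requested, so if dom2097's info could not be fetched while a neighbouring domain existed just above it, a check like the hedged sketch below would see dom2098 and log exactly this mismatch. The helper name is mine; only xc_domain_getinfolist() and the xc_domaininfo_t fields are the real libxc interface.

/* Sketch of the kind of check that produces "got info for domN, wanted domM".
 * Not a copy of libxl_dom.c, just the libxc lookup pattern it is built on. */
#include <stdint.h>
#include <xenctrl.h>

static int cpupool_of_domain(xc_interface *xch, uint32_t domid)
{
    xc_domaininfo_t info;

    if (xc_domain_getinfolist(xch, domid, 1, &info) != 1)
        return -1;                    /* no domain at or above domid */

    if (info.domain != domid)         /* raced with creation/teardown */
        return -1;                    /* this is where the mismatch is logged */

    return info.cpupool;
}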
Has anyone tried to generate a live patch for xsa213 against 4.8? When I try to
do so I get errors for common/compat/compat/multicall.o and
xen/common/multicall.o stating that 'changed section .discard not selected for
inclusion'.
I think, but could be mistaken, that the .discard section is not
On 11/26/2016 05:14 PM, Dario Faggioli wrote:
> On Tue, 2016-11-22 at 22:40 +0100, Dario Faggioli wrote:
>> On Tue, 2016-11-22 at 11:37 -0800, Sarah Newman wrote:
>>> If you're saying not specifying "cpus=..." will keep libxl from
>>> interfering with the
On 11/22/2016 10:46 AM, Dario Faggioli wrote:
> On Mon, 2016-11-21 at 13:06 -0800, Sarah Newman wrote:
>> On 11/21/2016 11:37 AM, Sarah Newman wrote:
>>>
>>> If that's the reason not all the higher memory is being used first: is a
>>> potential worka
On 11/21/2016 11:37 AM, Sarah Newman wrote:
> On 11/21/2016 05:21 AM, Andrew Cooper wrote:
>> On 21/11/16 10:05, Jan Beulich wrote:
>
>>>>> Back in the xend days someone here had invented a (crude) mechanism
>>>>> to set aside memory for 32-bit
On 11/21/2016 05:21 AM, Andrew Cooper wrote:
> On 21/11/16 10:05, Jan Beulich wrote:
Back in the xend days someone here had invented a (crude) mechanism
to set aside memory for 32-bit PV domains, but I don't think dealing with
this situation in xl has ever seen any interest.
>>> If
On 11/21/2016 12:20 AM, Jan Beulich wrote:
On 19.11.16 at 22:22, wrote:
>> My current understanding is that on a server with more than 168GiB
>> of memory, I should still be able to run around 128GiB of 32-bit PV
>> domUs, regardless of what order the domUs are started in.
>
> You don't clarify
Last night on a 288GiB server with less than 64GiB of 32-bit
domUs, we used the standard xendomains script which starts VMs
in alphabetical order.
Some 32-bit domUs at the very end were unable to start. The
error message we received is the following:
xc: error: panic: xc_dom_x86.c:944: arch_setup
On 03/25/2016 11:33 AM, Samuel Thibault wrote:
>> On Wed, Mar 23, 2016 at 02:26:51PM -0700, Sarah Newman wrote:
>>> 7c8f3483 introduced a break within a loop in netfront.c such that
>>> cons and nr_consumed were no longer always being incremented. The
>>> offset a
On 03/25/2016 12:32 PM, Samuel Thibault wrote:
> Sarah Newman, on Fri 25 Mar 2016 12:19:23 -0700, wrote:
>> I have no objections to backing out additional changes made in 7c8f3483,
>
> ? My patch doesn't really back out more than what you proposed actually.
It also backs
On 03/25/2016 11:33 AM, Samuel Thibault wrote:
> Wei Liu, on Fri 25 Mar 2016 13:09:07 +, wrote:
>> CC Samuel
>
> Thanks.
>
>> On Wed, Mar 23, 2016 at 02:26:51PM -0700, Sarah Newman wrote:
>>> 7c8f3483 introduced a break within a loop in netfront.c such that
On 03/24/2016 02:55 AM, George Dunlap wrote:
> On Wed, Mar 23, 2016 at 9:46 PM, Sarah Newman wrote:
>> On 03/22/2016 11:03 PM, Sarah Newman wrote:
>>> And nested xen.
>>>
>>> CPU: AMD Opteron 2352
>>> Outer configuration: Xen4CentOS 6 xen 4.6.1-2.el
On 03/23/2016 02:46 PM, Sarah Newman wrote:
> On 03/22/2016 11:03 PM, Sarah Newman wrote:
>> And nested xen.
>>
>> CPU: AMD Opteron 2352
>> Outer configuration: Xen4CentOS 6 xen 4.6.1-2.el6, linux
>> 3.18.25-18.el6.x86_64
>> Inner configuration: Xen4CentOS
On 03/22/2016 11:03 PM, Sarah Newman wrote:
> And nested xen.
>
> CPU: AMD Opteron 2352
> Outer configuration: Xen4CentOS 6 xen 4.6.1-2.el6, linux 3.18.25-18.el6.x86_64
> Inner configuration: Xen4CentOS 6 xen 4.6.1-2.el6, linux 3.18.25-19.el6.x86_64
> Inner xen command line:
7c8f3483 introduced a break within a loop in netfront.c such that
cons and nr_consumed were no longer always being incremented. The
offset at cons will be processed multiple times with the break in
place.
Remove the break and re-add "some !=0" in the loop for HAVE_LIBC.
Signed-off
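As an illustration of the loop shape being described (a standalone toy, not the mini-os netfront code; the ring contents and the meaning of "some" are simplified): keeping the stop condition in the loop header means cons and nr_consumed are still advanced for every entry that was actually handled, whereas a break in the middle of the body exits with cons still pointing at the entry just processed, so that entry gets handled again on the next pass.

/* Self-contained toy of the consumer-loop pattern discussed above.  Not the
 * mini-os code: the ring, 'some', and the fill condition are invented. */
#include <stdio.h>

#define RING_SIZE 8

static const int ring[RING_SIZE] = { 1, 2, 3, 4, 5, 6, 7, 8 };

/* Consume entries from cons up to prod while there is still room (*some). */
static unsigned int consume(unsigned int cons, unsigned int prod, int *some)
{
    unsigned int nr_consumed = 0;

    /* The stop condition lives in the header, so the increments always run
     * for an entry that was processed; a mid-body break would skip them. */
    for (; cons != prod && *some; nr_consumed++, cons++) {
        printf("consumed entry %d at index %u\n", ring[cons % RING_SIZE], cons);
        if (cons == 3)
            *some = 0;               /* pretend the local buffer filled up */
    }

    return nr_consumed;              /* cons has moved past everything handled */
}

int main(void)
{
    int some = 1;

    printf("%u entries consumed\n", consume(0, RING_SIZE, &some));
    return 0;
}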
And nested xen.
CPU: AMD Opteron 2352
Outer configuration: Xen4CentOS 6 xen 4.6.1-2.el6, linux 3.18.25-18.el6.x86_64
Inner configuration: Xen4CentOS 6 xen 4.6.1-2.el6, linux 3.18.25-19.el6.x86_64
Inner xen command line: cpuinfo loglvl=all guest_loglvl=error
dom0_mem=512M,max:512M com1=115200,8n1
On 01/25/2016 02:47 AM, David Vrabel wrote:
> On 23/01/16 22:12, Sarah Newman wrote:
>> Greetings,
>>
>> We are having problems related to mptsas and "swiotlb buffer is full" with
>> the Xen4CentOS kernel (3.18). It looks like the last re
Greetings,
We are having problems related to mptsas and "swiotlb buffer is full" with the
Xen4CentOS kernel (3.18). It looks like the last related work was
the series
http://lists.xenproject.org/archives/html/xen-devel/2014-12/msg00770.html back
at the end of 2014 and I'm wondering if there are
I saw xen hang after
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(
The line that should have come next is
(XEN) Brought up 24 CPUs
After power cycling I went into the BIOS. In the BIOS, C-STATE was disabled. I
changed it to
* Intel(R) C-STATE tech [Enabled]
* C3 State
On 10/05/2015 10:18 PM, Andy Smith wrote:
> But again as I say, that article I posted earlier contains a bunch
> of smart crypto people saying that all of this is unnecessary. So
> should we be enabling it?
Even if only urandom is considered necessary, how is the initial seed for
urandom being g
On 10/05/2015 09:29 PM, Andy Smith wrote:
> I don't find it a problem as:
>
> - Your typical EntropyKey or OneRNG can generate quite a bit of
> entropy. Maybe 32 kilobytes per second for ~$50 each.
>
> - You can access them over the network so no USB passthrough needed.
Yes, I'm implementing
On 10/05/2015 08:35 PM, Andy Smith wrote:
> So, I've been keeping (PV) domUs topped up with entropy by giving
> them access to hardware RNGs (initially Entropy Keys, but since the
> company making them failed I've switched to OneRNGs).
This is not a satisfactory solution for us because even if we
Greetings,
We would like to use something like virtio-rng
http://wiki.qemu-project.org/Features-Done/VirtIORNG with PVM domUs and since
the wiki page on virtio
http://wiki.xen.org/wiki/Virtio_On_Xen says the wiki page is out of date, what
is the current status?
Would a native xen driver be lik