On Thu, Aug 26, 2021 at 05:57:08PM -0700, Sean Christopherson wrote:
> Use a per-CPU pointer to track perf's guest callbacks so that KVM can set
> the callbacks more precisely and avoid a lurking NULL pointer dereference.
I'm completely failing to see how per-cpu helps anything here...
> On x86,
On Thu, Aug 26, 2021 at 05:57:09PM -0700, Sean Christopherson wrote:
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 9bc1375d6ed9..2f28d9d8dc94 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6485,6 +6485,18 @@ static void perf_pending_event(struct irq_work
On Thu, Aug 26, 2021 at 05:57:10PM -0700, Sean Christopherson wrote:
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 5cedc0e8a5d5..4c5ba4128b38 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -395,9 +395,10 @@ static inline void kvm_unregister_perf_callbacks(void)
>
On Thu, Aug 26, 2021 at 05:57:14PM -0700, Sean Christopherson wrote:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 13c4f58a75e5..e0b1c9386926 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -5498,6 +5498,7 @@ void kvm_set_intel_pt_intr_handler(void
> (*handle
Hi Stefano,
On 27/08/2021 00:24, Stefano Stabellini wrote:
On Wed, 11 Aug 2021, Wei Chen wrote:
EFI can get the memory map from the EFI system
table. But the EFI system table doesn't contain memory NUMA
information, so EFI depends on the ACPI SRAT or device tree
memory nodes to parse memory blocks' NUMA mapping.
On Fri, Aug 27, 2021 at 02:52:25PM +0800, Like Xu wrote:
> + STATIC BRANCH/CALL friends.
>
> On 27/8/2021 8:57 am, Sean Christopherson wrote:
> > This started out as a small series[1] to fix a KVM bug related to Intel PT
> > interrupt handling and snowballed horribly.
> >
> > The main problem bei
On 27/8/2021 3:44 pm, Peter Zijlstra wrote:
On Fri, Aug 27, 2021 at 02:52:25PM +0800, Like Xu wrote:
+ STATIC BRANCH/CALL friends.
On 27/8/2021 8:57 am, Sean Christopherson wrote:
This started out as a small series[1] to fix a KVM bug related to Intel PT
interrupt handling and snowballed horri
Relevant quotes from the C11 standard:
"Except where explicitly stated otherwise, for the purposes of this
subclause unnamed members of objects of structure and union type do not
participate in initialization. Unnamed members of structure objects
have indeterminate value even after initializati
On 27/08/2021 09:21, Jan Beulich wrote:
> Relevant quotes from the C11 standard:
>
> "Except where explicitly stated otherwise, for the purposes of this
> subclause unnamed members of objects of structure and union type do not
> participate in initialization. Unnamed members of structure objects
flight 164498 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/164498/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-libvirt 6 libvirt-build fail REGR. vs. 151777
build-amd64-libvirt
Hello,
ballooning down Dom0 by about 16G in one go once in a while causes:
BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 64s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
pwq 12: cpus=6 node=0 flags=0x0 nice=0 active=2/256 refcnt=3
in-flight:
Hi Stefano,
> -----Original Message-----
> From: Stefano Stabellini
> Sent: 27 August 2021 7:25
> To: Wei Chen
> Cc: xen-devel@lists.xenproject.org; sstabell...@kernel.org; jul...@xen.org;
> jbeul...@suse.com; Bertrand Marquis
> Subject: Re: [XEN RFC PATCH 18/40] xen/arm: Keep memory nodes in dtb f
On 26.08.2021 23:00, Julien Grall wrote:
> Digging down, Linux will set smp_num_siblings to 0 (via
> detect_ht_early()) and as a result will skip all the CPUs. The value is
> retrieved from a CPUID leaf. So it sounds like we don't set the leaf
> correctly.
Xen leaves leaf 1 EBX[23:16] untouched
On 27.08.21 11:01, Jan Beulich wrote:
Hello,
ballooning down Dom0 by about 16G in one go once in a while causes:
BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 64s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
pwq 12: cpus=6 node=0 flags=0x0 ni
Hi Stefano,
> -----Original Message-----
> From: Stefano Stabellini
> Sent: 27 August 2021 7:52
> To: Wei Chen
> Cc: xen-devel@lists.xenproject.org; sstabell...@kernel.org; jul...@xen.org;
> jbeul...@suse.com; Bertrand Marquis
> Subject: Re: [XEN RFC PATCH 20/40] xen/arm: implement node distance
>
Hi Stefano,
> -----Original Message-----
> From: Stefano Stabellini
> Sent: 27 August 2021 8:06
> To: Wei Chen
> Cc: xen-devel@lists.xenproject.org; sstabell...@kernel.org; jul...@xen.org;
> jbeul...@suse.com; Bertrand Marquis
> Subject: Re: [XEN RFC PATCH 22/40] xen/arm: introduce a helper to pars
Hi Jan,
> -----Original Message-----
> From: Jan Beulich
> Sent: 27 August 2021 14:18
> To: Stefano Stabellini ; Wei Chen
>
> Cc: xen-devel@lists.xenproject.org; jul...@xen.org; Bertrand Marquis
>
> Subject: Re: [XEN RFC PATCH 16/40] xen/arm: Create a fake NUMA node to use
> common code
>
> On 27.
On 27.08.2021 11:29, Juergen Gross wrote:
> On 27.08.21 11:01, Jan Beulich wrote:
>> ballooning down Dom0 by about 16G in one go once in a while causes:
>>
>> BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 64s!
>> Showing busy workqueues and worker pools:
>> workqueue events:
On 27.08.21 11:44, Jan Beulich wrote:
On 27.08.2021 11:29, Juergen Gross wrote:
On 27.08.21 11:01, Jan Beulich wrote:
ballooning down Dom0 by about 16G in one go once in a while causes:
BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 64s!
Showing busy workqueues and worke
Hi Jan,
On 27/08/2021 07:28, Jan Beulich wrote:
On 27.08.2021 01:42, Andrew Cooper wrote:
On 26/08/2021 22:00, Julien Grall wrote:
Hi Andrew,
While doing more testing today, I noticed that only one vCPU would be
brought up with HVM guest with Xen 4.16 on my setup (QEMU):
[ 1.122180]
=
On Fri, Aug 27, 2021 at 04:01:45PM +0800, Like Xu wrote:
> On 27/8/2021 3:44 pm, Peter Zijlstra wrote:
> > You just have to make sure all static_call() invocations that started
> > before unreg are finished before continuing with the unload.
> > synchronize_rcu() can help with that.
>
> Do you me
On 27.08.2021 12:35, Julien Grall wrote:
> Hi Jan,
>
> On 27/08/2021 07:28, Jan Beulich wrote:
>> On 27.08.2021 01:42, Andrew Cooper wrote:
>>> On 26/08/2021 22:00, Julien Grall wrote:
Hi Andrew,
While doing more testing today, I noticed that only one vCPU would be
brought up w
Hi Jan,
On 27/08/2021 11:52, Jan Beulich wrote:
On 27.08.2021 12:35, Julien Grall wrote:
Hi Jan,
On 27/08/2021 07:28, Jan Beulich wrote:
On 27.08.2021 01:42, Andrew Cooper wrote:
On 26/08/2021 22:00, Julien Grall wrote:
Hi Andrew,
While doing more testing today, I noticed that only one vCP
Hi Jan,
On 27/08/2021 10:26, Jan Beulich wrote:
On 26.08.2021 23:00, Julien Grall wrote:
Digging down, Linux will set smp_num_siblings to 0 (via
detect_ht_early()) and as a result will skip all the CPUs. The value is
retrieved from a CPUID leaf. So it sounds like we don't set the leaf
correctly
Hi Marek,
On 26/08/2021 23:51, Marek Marczykowski-Górecki wrote:
On Thu, Aug 26, 2021 at 10:00:58PM +0100, Julien Grall wrote:
While doing more testing today, I noticed that only one vCPU would be
brought up with HVM guest with Xen 4.16 on my setup (QEMU):
[1.122180]
=
flight 164496 xen-4.13-testing real [real]
flight 164516 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/164496/
http://logs.test-lab.xenproject.org/osstest/logs/164516/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could
On 17.08.21 10:52, Jiamei Xie wrote:
Hi Oleksandr,
I am sorry to resend it because the first one was in the wrong format.
Hi Jiamei
Sorry for the late response.
-----Original Message-----
From: Wei Chen
Sent: 17 August 2021 15:11
To: Oleksandr Tyshchenko ; xen-devel@lists.xenproject.org
Cc: Oleksa
Today the Xen ballooning is done via delayed work in a workqueue. This
might result in workqueue hangups being reported in case large
amounts of memory are being ballooned in one go (here 16GB):
BUG: workqueue lockup - pool cpus=6 node=0 flags=0x0 nice=0 stuck for 64s!
Showing busy workqueues a
On Thu, Aug 19, 2021 at 05:23:26PM +0100, Ian Jackson wrote:
> Anthony PERARD writes ("Re: preparations for 4.15.1 and 4.13.4"):
> > Can we backport support of QEMU 6.0 to Xen 4.15? I'm pretty sure
> > distributions are going to want to use the latest QEMU and latest Xen,
> > without needed to buil
Hi,
[This conversation started on the xen-security-issues-discuss list
as I mistakenly thought it was to do with then-embargoed XSA
patches]
I did "xl save" on 17 domUs that were running under dom0 kernel
4.19.0-16-amd64 (4.19.181-1), hypervisor 4.14.2. I then rebooted
dom0 into kernel 5.10.0-0.b
On 27/08/2021 10:26, Jan Beulich wrote:
> On 26.08.2021 23:00, Julien Grall wrote:
>> Digging down, Linux will set smp_num_siblings to 0 (via
>> detect_ht_early()) and as a result will skip all the CPUs. The value is
>> retrieved from a CPUID leaf. So it sounds like we don't set the leaf
>> corr
flight 164515 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/164515/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 1
On 27.08.2021 08:52, osstest service owner wrote:
> flight 164495 xen-4.15-testing real [real]
> flight 164509 xen-4.15-testing real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/164495/
> http://logs.test-lab.xenproject.org/osstest/logs/164509/
>
> Regressions :-(
>
> Tests wh
On 8/25/21 11:22 AM, Jan Beulich wrote:
> On 05.08.2021 16:06, Daniel P. Smith wrote:
>> On Linux when SELinux is put into permissive mode the discretionary access
>> controls are still in place. Whereas for Xen when the enforcing state of
>> flask
>> is set to permissive, all operations for all d
On 8/25/21 11:16 AM, Jan Beulich wrote:
> On 05.08.2021 16:06, Daniel P. Smith wrote:
>> @@ -747,16 +747,16 @@ extern int xsm_dt_policy_init(void **policy_buffer,
>> size_t *policy_size);
>> extern bool has_xsm_magic(paddr_t);
>> #endif
>>
>> -extern int register_xsm(struct xsm_operations *ops
As explained in the comments, a progress label wants to be before the function
it refers to for the higher level logic to make sense. As it happens, the
effects are benign because gnttab_mappings is immediately adjacent to teardown
in terms of co-routine exit points.
There is and will always be a
On 8/26/21 4:13 AM, Jan Beulich wrote:
> On 05.08.2021 16:06, Daniel P. Smith wrote:
>> --- /dev/null
>> +++ b/xen/include/xsm/xsm-core.h
>> @@ -0,0 +1,273 @@
>> +/*
>> + * This file contains the XSM hook definitions for Xen.
>> + *
>> + * This work is based on the LSM implementation in Linux 2.6
On 27.08.2021 16:01, Andrew Cooper wrote:
> As explained in the comments, a progress label wants to be before the function
> it refers to for the higher level logic to make sense. As it happens, the
> effects are benign because gnttab_mappings is immediately adjacent to teardown
> in terms of co-r
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
The code in acpi_scan_nodes can be reused for device tree based
NUMA. So we rename acpi_scan_nodes to numa_scan_nodes for a neutral
function name. As the acpi_numa variable is available in ACPI based NUMA
systems only, we use CONFIG_ACPI_NUMA to protect it
On 27/08/2021 15:07, Jan Beulich wrote:
> On 27.08.2021 16:01, Andrew Cooper wrote:
>> As explained in the comments, a progress label wants to be before the
>> function
>> it refers to for the higher level logic to make sense. As it happens, the
>> effects are benign because gnttab_mappings is im
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
diff --git a/xen/include/asm-x86/acpi.h b/xen/include/asm-x86/acpi.h
index 33b71dfb3b..2140461ff3 100644
--- a/xen/include/asm-x86/acpi.h
+++ b/xen/include/asm-x86/acpi.h
@@ -101,9 +101,6 @@ extern unsigned long acpi_wakeup_address;
#define ARCH_
On 8/25/21 11:44 AM, Jan Beulich wrote:
> On 05.08.2021 16:06, Daniel P. Smith wrote:
>> The internal define flag is not used by any XSM module, removing the #ifdef
>> leaving the generic event channel labeling as always present.
>
> With this description ...
>
>> --- a/xen/include/xen/sched.h
>>
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
Now, we can use the same function for ACPI and device tree based
NUMA to scan memory nodes.
Signed-off-by: Wei Chen
---
xen/common/numa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/common/numa.c b/xen/common/numa.c
in
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
We have not wanted to make Xen guest be NUMA aware in this patch
series.
The concept of a patch series ceases to exist once we merge the code. So
how about:
"The NUMA information provided in the host Device-Tree are only for Xen.
For dom0, we wan
Hi,
On 11/08/2021 11:24, Wei Chen wrote:
When Xen fails to initialize NUMA, some architectures may need to
do fallback actions. For example, in device tree based NUMA, Arm
needs to reset the distance between any two nodes.
From the description here, I don't understand why we need to reset the
di
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
Everything is ready; we can remove the fake NUMA node and
depend on device tree to create the NUMA system.
So you just added code a few patches before that are now completely
rewritten. Can you please re-order this series so it doesn't happen?
This
Hi Wei,
On 11/08/2021 11:24, Wei Chen wrote:
Xen x86 has created a command line parameter "numa" as a NUMA switch for
users to turn NUMA on/off. As device tree based NUMA has been enabled
for Arm, this parameter can be reused by Arm. So in this patch, we move
this parameter to common code.
Signed-off-b
Hi Bertrand,
On 25/08/2021 14:18, Bertrand Marquis wrote:
Import some ID register definitions from the Linux sysreg header to have
the required shift definitions for all ID register fields.
Those are required to reuse the cpufeature sanitization system from the
Linux kernel.
Signed-off-by: Bertrand Marq
On 27.08.2021 15:29, Jan Beulich wrote:
> On 27.08.2021 08:52, osstest service owner wrote:
>> flight 164495 xen-4.15-testing real [real]
>> flight 164509 xen-4.15-testing real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/164495/
>> http://logs.test-lab.xenproject.org/osstest/l
On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> On Thu, Aug 26, 2021 at 05:57:08PM -0700, Sean Christopherson wrote:
> > Use a per-CPU pointer to track perf's guest callbacks so that KVM can set
> > the callbacks more precisely and avoid a lurking NULL pointer dereference.
>
> I'm completely failing
On Fri, Aug 27, 2021 at 02:49:50PM +, Sean Christopherson wrote:
> On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> > On Thu, Aug 26, 2021 at 05:57:08PM -0700, Sean Christopherson wrote:
> > > Use a per-CPU pointer to track perf's guest callbacks so that KVM can set
> > > the callbacks more precis
On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> On Thu, Aug 26, 2021 at 05:57:10PM -0700, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> > index 5cedc0e8a5d5..4c5ba4128b38 100644
> > --- a/arch/x86/kvm/x86.h
> > +++ b/arch/x86/kvm/x86.h
> > @@ -395,9 +395,10 @@
Hi Bertrand,
On 25/08/2021 14:18, Bertrand Marquis wrote:
Sanitize CTR_EL0 value between cores.
In most cases different values will taint Xen but if different
i-cache policies are found, we choose the one which will be compatible
between all cores in terms of invalidation/data cache flushing st
On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> On Fri, Aug 27, 2021 at 02:49:50PM +, Sean Christopherson wrote:
> > On Fri, Aug 27, 2021, Peter Zijlstra wrote:
> > > On Thu, Aug 26, 2021 at 05:57:08PM -0700, Sean Christopherson wrote:
> > > > Use a per-CPU pointer to track perf's guest callbacks
Thanks everyone for your input and your support. The consensus
seems to be that, despite this release being a little thin, we should
keep to the usual cadence. There were multiple requests to slip by
about a week, which for this release seems a reasonable risk to take.
I'm hoping that we can achi
From: Tianyu Lan
Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-based
security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.
The memory of these VMs is encrypted and the host can't access guest
memory directly
From: Tianyu Lan
Hyperv exposes GHCB page via SEV ES GHCB MSR for SNP guest
to communicate with hypervisor. Map GHCB page for all
cpus to read/write MSR register and submit hvcall request
via ghcb page.
Signed-off-by: Tianyu Lan
---
Change since v3:
* Rename ghcb_base to hv_ghcb_pg and
From: Tianyu Lan
Hyper-V exposes the shared memory boundary via the cpuid leaf
HYPERV_CPUID_ISOLATION_CONFIG and stores it in the
shared_gpa_boundary field of the ms_hyperv struct. This prepares
to share memory with the host for SNP guests.
Signed-off-by: Tianyu Lan
---
Change since v3:
* use BIT_ULL to get shared
From: Tianyu Lan
Add new hvcall guest address host visibility support to mark
memory visible to the host. Call it inside set_memory_decrypted()
/encrypted(). Add a HYPERVISOR feature check in
hv_is_isolation_supported() to optimize in non-virtualization
environments.
Acked-by: Dave Hansen
Signed-off
From: Tianyu Lan
Mark the vmbus ring buffer visible with set_memory_decrypted() when
establishing the gpadl handle.
Signed-off-by: Tianyu Lan
---
Change since v3:
* Change vmbus_teardown_gpadl() parameter and put gpadl handle,
buffer and buffer size in the struct vmbus_gpadl.
---
drivers/hv
From: Tianyu Lan
Hyperv provides a GHCB protocol to write Synthetic Interrupt
Controller MSR registers in Isolation VMs with AMD SEV SNP,
and these registers are emulated by the hypervisor directly.
Hyperv requires writing SINTx MSR registers twice. The first
write is via the GHCB page to communicate with hyp
From: Tianyu Lan
Hyperv provides a GHCB hvcall to handle the VMBus
HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
messages in SNP Isolation VMs. Add such support.
Signed-off-by: Tianyu Lan
---
Change since v3:
* Add hv_ghcb_hypercall() stub function to avoid
compile error for ARM.
---
arch/
From: Tianyu Lan
The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
with the host in Isolation VMs and so it's necessary to use a hvcall to
set them visible to the host. In Isolation VMs with AMD SEV SNP, the
access address should be in the extra space which is above the shared
gpa boundary. So
From: Tianyu Lan
VMBus ring buffers are shared with the host and need to
be accessed via the extra address space of Isolation VMs with
AMD SNP support. This patch maps the ring buffer
address in the extra address space via vmap_pfn(). The Hyperv set
memory host visibility hvcall smears data in the ring b
From: Tianyu Lan
In Hyper-V Isolation VMs with AMD SEV, the swiotlb bounce buffer
needs to be mapped into the address space above vTOM, so
introduce dma_map_decrypted()/dma_unmap_encrypted() to map/unmap
bounce buffer memory. The platform can populate the map/unmap callback
in the dma memory decrypted ops. T
flight 164524 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/164524/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 1
From: Tianyu Lan
Hyperv Isolation VMs require bounce buffer support to copy
data from/to encrypted memory, so enable swiotlb force
mode to use the swiotlb bounce buffer for DMA transactions.
In Isolation VMs with AMD SEV, the bounce buffer needs to be
accessed via the extra address space which is above
On Fri, Aug 06, 2021, Zhu Lingshan wrote:
> @@ -2944,18 +2966,21 @@ static unsigned long code_segment_base(struct pt_regs
> *regs)
>
> unsigned long perf_instruction_pointer(struct pt_regs *regs)
> {
> - if (perf_guest_cbs && perf_guest_cbs->is_in_guest())
> - return perf_guest
From: Tianyu Lan
In Isolation VMs, all memory shared with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() has already done
it for the netvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA API
to map/unmap these memory
From: Tianyu Lan
In Isolation VMs, all memory shared with the host needs to be marked
visible to the host via hvcall. vmbus_establish_gpadl() has already done
it for the storvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_mpb_desc() still needs to be handled. Use the DMA API
(dma_map_sg) to map these m
From: Tianyu Lan
In Isolation VMs with AMD SEV, the bounce buffer needs to be accessed
via the extra address space which is above shared_gpa_boundary
(e.g. the 39-bit address line) reported by the Hyper-V CPUID
ISOLATION_CONFIG leaf. The physical address to access will be the
original physical address + shared_gpa_boundary. T
On Fri, Aug 27, 2021 at 01:21:02PM -0400, Tianyu Lan wrote:
> From: Tianyu Lan
>
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v3:
>* Change vmbus_teardown_gpadl() parameter and put gpadl ha
On Fri, Aug 27, 2021 at 01:21:03PM -0400, Tianyu Lan wrote:
> From: Tianyu Lan
>
> Hyperv provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV SNP
> and these registers are emulated by hypervisor directly.
> Hyperv requires to write SINTx MSR
Hi Greg:
Thanks for your review.
On 8/28/2021 1:41 AM, Greg KH wrote:
On Fri, Aug 27, 2021 at 01:21:02PM -0400, Tianyu Lan wrote:
From: Tianyu Lan
Mark vmbus ring buffer visible with set_memory_decrypted() when
establish gpadl handle.
Signed-off-by: Tianyu Lan
---
Change since v3:
On 8/28/2021 1:41 AM, Greg KH wrote:
On Fri, Aug 27, 2021 at 01:21:03PM -0400, Tianyu Lan wrote:
From: Tianyu Lan
Hyperv provides GHCB protocol to write Synthetic Interrupt
Controller MSR registers in Isolation VM with AMD SEV SNP
and these registers are emulated by hypervisor directly.
Hyperv
flight 164497 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/164497/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 164457
test-armhf-armhf-libvirt 16 sav
This will make it easier to share common error paths.
Signed-off-by: Luis Chamberlain
---
drivers/nvdimm/btt.c | 19 ---
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 3fd1bdb9fc05..275704d80109 100644
--- a/driver
If nd_integrity_init() fails we'd get del_gendisk() called,
but that's not correct as we should only call that if we're
done with device_add_disk(). Fix this by providing unwinding
prior to the devm call being registered and moving the devm
registration to the very end.
This should fix calling del
work [0]
[0]
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux-next.git/log/?h=20210827-for-axboe-add-disk-error-handling-next-2nd
Luis Chamberlain (10):
block/brd: add error handling support for add_disk()
bcache: add error handling support for add_disk()
nvme-multipath: add error
We know we don't need del_gendisk() if we haven't added
the disk, so just skip it. This should fix a bug on older
kernels, as del_gendisk() became able to deal with
disks not added only recently, after the patch titled
"block: add flag for add_disk() completion notation".
Signed-off-by: Luis Chamb
Botched the subject. Sorry. This is the *second* batch :)
Luis
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
Signed-off-by: Luis Chamberlain
---
drivers/block/brd.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/block/brd.c b/drivers
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
This driver doesn't do any unwinding with blk_cleanup_disk()
even on errors after add_disk() and so we follow that
tradition.
Signed-off-by: Luis Chamberlain
---
d
We never checked for errors on device_add_disk() as this function
returned void. Now that this is fixed, use the shiny new error
handling. The function xlvbd_alloc_gendisk() typically does the
unwinding on error on allocating the disk and creating the tag,
but since all that error handling was stuf
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
Signed-off-by: Luis Chamberlain
---
drivers/block/zram/zram_drv.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/block/zram/zram_d
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
Signed-off-by: Luis Chamberlain
---
drivers/nvdimm/btt.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdim
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
Since we now can tell for sure when a disk was added, move
setting the bit NVME_NSHEAD_DISK_LIVE only when we did
add the disk successfully.
Nothing to do here as th
We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.
Since nvdimm/blk uses devm we just need to move the devm
registration towards the end. And in hindsight, that seems
to also provide a fix given del_gendisk() should n
On 17.08.21 20:54, Julien Grall wrote:
Hi Julien
On 17/08/2021 18:53, Julien Grall wrote:
Hi Oleksandr,
On 10/08/2021 18:03, Oleksandr wrote:
On 10.08.21 19:28, Julien Grall wrote:
Hi Julien.
On 09/08/2021 22:18, Oleksandr wrote:
On 09.08.21 23:45, Julien Grall wrote:
Hi Julien
flight 164525 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/164525/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 1
On Fri, 27 Aug 2021, Julien Grall wrote:
> Hi Stefano,
>
> On 27/08/2021 00:24, Stefano Stabellini wrote:
> > On Wed, 11 Aug 2021, Wei Chen wrote:
> > > EFI can get memory map from EFI system table. But EFI system
> > > table doesn't contain memory NUMA information, EFI depends on
> > > ACPI SRAT
This is a combination of ~2 series to fix bugs in the perf+KVM callbacks,
optimize the callbacks by employing static_call, and do a variety of
cleanup in both perf and KVM.
Patch 1 fixes a mostly-theoretical bug where perf can deref a NULL
pointer if KVM unregisters its callbacks while they're bei
Protect perf_guest_cbs with READ_ONCE/WRITE_ONCE to ensure it's not
reloaded between a !NULL check and a dereference, and wait for all
readers via synchronize_rcu() to prevent use-after-free, e.g. if the
callbacks are being unregistered during module unload. Because the
callbacks are global, it's
Wait to register perf callbacks until after doing vendor hardware setup.
VMX's hardware_setup() configures Intel Processor Trace (PT) mode, and a
future fix to register the Intel PT guest interrupt hook if and only if
Intel PT is exposed to the guest will consume the configured PT mode.
Delaying
Override the Processor Trace (PT) interrupt handler for guest mode if and
only if PT is configured for host+guest mode, i.e. is being used
independently by both host and guest. If PT is configured for system
mode, the host fully controls PT and must handle all events.
Fixes: 8479e04e7d6b ("KVM: x
Drop the 'int' return value from the perf (un)register callbacks helpers
and stop pretending perf can support multiple callbacks. The 'int'
returns are not future proofing anything as none of the callers take
action on an error. It's also not obvious that there will ever be
co-tenant hypervisors,
Introduce HAVE_GUEST_PERF_EVENTS and require architectures to select it
to allow registering guest callbacks in perf. Future patches will convert
the callbacks to static_call. Rather than churn a bunch of arch code (that
was presumably copy+pasted from x86), remove it wholesale as it's useless
an
From: Like Xu
To prepare for using static_calls to optimize perf's guest callbacks,
replace ->is_in_guest and ->is_user_mode with a new multiplexed hook
->state, tweak ->handle_intel_pt_intr to play nice with being called when
there is no active guest, and drop "guest" from ->is_in_guest.
Return
From: Like Xu
Use static_call to optimize perf's guest callbacks on arm64 and x86,
which are now the only architectures that define the callbacks. Use
DEFINE_STATIC_CALL_RET0 as the default/NULL for all guest callbacks, as
the callback semantics are that a return value '0' means "not in guest".
Use the generic kvm_running_vcpu plus a new 'handling_intr_from_guest'
variable in kvm_arch_vcpu instead of the semi-redundant current_vcpu.
kvm_before/after_interrupt() must be called while the vCPU is loaded
(which protects against preemption), thus kvm_running_vcpu is guaranteed
to be non-NULL