On Thu, Aug 22, 2024 at 04:39:12PM +0530, Naman Jain wrote:
> Rescind offer handling relies on rescind callbacks for some of the
> resource cleanup, if they are registered. It does not unregister the
> vmbus device on primary channel closure when a callback is
> registered.
> Add logic to unregist
On Thu, Aug 22, 2024 at 04:39:11PM +0530, Naman Jain wrote:
> From: Saurabh Sengar
>
> For primary VMBus channels, the primary_channel pointer is always NULL.
> This pointer is valid only for secondary channels.
>
> Fix NULL pointer dereference by retrieving the device_obj from the parent
> in the
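A minimal sketch of the pattern this fix describes, assuming the struct vmbus_channel fields from include/linux/hyperv.h (primary_channel and device_obj); illustrative only, not the actual patch:

#include <linux/hyperv.h>

/*
 * For a sub-channel, the owning hv_device hangs off the parent
 * (primary) channel; for a primary channel, primary_channel is NULL
 * and the device object is on the channel itself.
 */
static struct hv_device *channel_to_hv_device(struct vmbus_channel *channel)
{
        struct vmbus_channel *primary =
                channel->primary_channel ? channel->primary_channel : channel;

        return primary->device_obj;
}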
On Fri, Aug 23, 2024 at 02:44:29AM -0700, Souradeep Chakrabarti wrote:
> Currently napi_disable() gets called during rxq and txq cleanup,
> even before napi is enabled and the hrtimer is initialized. This
> causes a kernel panic.
>
> ? page_fault_oops+0x136/0x2b0
> ? page_counter_cancel+0x2e/0x80
> ?
On Wed, Aug 07, 2024 at 07:33:26PM +0200, Thomas Gleixner wrote:
> On Tue, Aug 06 2024 at 15:12, Yunhong Jiang wrote:
> > +static void __init hv_reserve_real_mode(void)
> > +{
> > + phys_addr_t mem;
> > + size_t size = real_mode_size_needed();
> > +
> > + /*
> > + * We only need the memory
For a VTL2 Hyper-V guest with the wakeup mailbox in the device tree, don't
overwrite wakeup_secondary_cpu_64, so that acpi_wakeup_cpu will be
used to bring up the APs.
Signed-off-by: Yunhong Jiang
---
arch/x86/hyperv/hv_vtl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86
The VTL2 TDX guest may have no sub-1M memory available, but it needs to
invoke trampoline_start64 to wake up the APs through the wakeup mailbox
mechanism. Set realmode_limit to 4G for the VTL2 TDX guest, so that
reserve_real_mode allocates memory under 4G.
Signed-off-by: Yunhong Jiang
---
arch/x86
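A hedged sketch of the idea in this patch: reserve_real_mode(), real_mode_size_needed(), memblock_phys_alloc_range() and set_real_mode_mem() are existing kernel interfaces, while realmode_limit is the knob the series proposes; the body below only illustrates how such a limit could be consumed, not the actual diff.

#include <linux/memblock.h>
#include <linux/sizes.h>
#include <asm/realmode.h>

phys_addr_t realmode_limit = SZ_1M;     /* default: keep the trampoline below 1M */

void __init reserve_real_mode(void)
{
        size_t size = real_mode_size_needed();
        phys_addr_t mem;

        if (!size)
                return;

        /* Allocate anywhere below the platform-selected limit (4G for VTL2 TDX). */
        mem = memblock_phys_alloc_range(size, PAGE_SIZE, 0, realmode_limit);
        if (!mem)
                pr_info("No available memory below %pa for the trampoline\n",
                        &realmode_limit);
        else
                set_real_mode_mem(mem);
}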
For the VTL2 Hyper-V guest, hv_vtl_init_platform() currently clears
x86_platform.realmode_reserve/init while hv_vtl_early_init() sets the
real_mode_header.
Set the real_mode_header together with x86_platform.realmode_reserve/init
in hv_vtl_init_platform(). This is ok because x86_platform.re
Currently reserve_real_mode() always allocates memory from the below-1M
range, although some real mode blob code can execute above 1M.
The VTL2 TDX hyperv guest may have no memory available below 1M, but it
needs to invoke some real mode blob code that can execute above 1M memory.
Instead of prov
Current code maps MMIO devices as shared (decrypted) by default in a
confidential computing VM. However, the wakeup mailbox must be accessed
as private (encrypted) because it is accessed by both the OS and the firmware,
which run in the guest's context and are encrypted. Set the wakeup mailbox
range as privat
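One existing hook that can express "this MMIO range stays private" is x86_platform.hyper.is_private_mmio(), which x86 ioremap() consults; whether this series uses that hook or another mechanism is not visible in the excerpt, so the sketch below is only an illustration and the mailbox address/size symbols are hypothetical.

#include <asm/x86_init.h>

static u64 wakeup_mailbox_pa;                   /* hypothetical, set at parse time */
#define WAKEUP_MAILBOX_SIZE     PAGE_SIZE       /* hypothetical */

static bool hv_vtl_is_private_mmio(u64 addr)
{
        return wakeup_mailbox_pa &&
               addr >= wakeup_mailbox_pa &&
               addr < wakeup_mailbox_pa + WAKEUP_MAILBOX_SIZE;
}

static void __init hv_vtl_setup_private_mmio(void)
{
        /* ioremap() will then map the mailbox encrypted rather than shared. */
        x86_platform.hyper.is_private_mmio = hv_vtl_is_private_mmio;
}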
Parse the wakeup mailbox for the VTL2 TDX guest. Put it in guest_late_init, so
that it is invoked before hyperv_init(), where the mailbox address is
checked.
Signed-off-by: Yunhong Jiang
---
arch/x86/include/asm/mshyperv.h | 3 +++
arch/x86/kernel/cpu/mshyperv.c | 2 ++
drivers/hv/hv_common.c
When a TDX guest boots with the device tree instead of ACPI, it can
reuse the ACPI multiprocessor wakeup mechanism to wake up application
processors (APs), without introducing a new mechanism from scratch.
In the ACPI spec, two structures are defined to wake up the APs: the
multiprocessor wakeup stru
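For reference, the mailbox referred to here is the ACPI multiprocessor wakeup mailbox. Its layout in recent kernels (include/acpi/actbl2.h) is roughly the following: the OS fills in wakeup_vector and apic_id, then writes the wakeup command, and the target AP spins on the firmware-owned mailbox until it sees the command and jumps to the vector.

/* One 4 KiB mailbox shared between the OS and the firmware/APs. */
struct acpi_madt_multiproc_wakeup_mailbox {
        u16 command;                    /* 1 == wakeup */
        u16 reserved;
        u32 apic_id;                    /* target AP */
        u64 wakeup_vector;              /* physical address the AP jumps to */
        u8  reserved_os[2032];          /* OS-owned scratch area */
        u8  reserved_firmware[2048];    /* firmware-owned area */
};

#define ACPI_MP_WAKE_COMMAND_WAKEUP     1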
Add the binding to use the mailbox wakeup mechanism to bring up APs.
Signed-off-by: Yunhong Jiang
---
.../devicetree/bindings/x86/wakeup.yaml | 64 +++
1 file changed, 64 insertions(+)
create mode 100644 Documentation/devicetree/bindings/x86/wakeup.yaml
diff --git a/Documentat
In order to support the ACPI mailbox wakeup in device tree, move the MADT
wakeup code out of the acpi directory, so that both ACPI and device tree
can use it.
Signed-off-by: Yunhong Jiang
---
MAINTAINERS | 2 ++
arch/x86/kernel/Makefile | 1 +
arc
This set of patches adds ACPI multiprocessor wakeup support to VTL2 TDX VMs
booting with device tree instead of ACPI.
Historically, x86 platforms have booted secondary processors (APs) using
INIT followed by the startup IPI (SIPI) messages. However, TDX VMs
can't use this protocol because this pro
On Wed, Aug 14, 2024 at 04:54:32PM -0700, Yosry Ahmed wrote:
> On Mon, Aug 5, 2024 at 1:12 PM Yosry Ahmed wrote:
> >
> > Use native_read_cr*() helpers to read control registers into vmsa->cr*
> > instead of open-coded assembly.
> >
> > No functional change intended, unless there was a purpose to s
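A hedged sketch of the change under review: reading the control registers via the helpers in asm/special_insns.h instead of open-coded assembly. The container struct below is a hypothetical stand-in; the actual code fills the SEV-ES VMSA's cr* fields.

#include <asm/special_insns.h>

/* Hypothetical stand-in for the cr* fields of the SEV-ES VMSA. */
struct cr_snapshot {
        unsigned long cr0, cr3, cr4;
};

static void snapshot_control_regs(struct cr_snapshot *vmsa)
{
        vmsa->cr0 = native_read_cr0();
        vmsa->cr3 = __native_read_cr3();
        vmsa->cr4 = native_read_cr4();
}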
On Wed, Jul 31, 2024 at 09:55:36PM -0700, Saurabh Sengar wrote:
> Currently on a very large system with 1780 CPUs, hv_acpi_init() takes
> around 3 seconds to complete. This is because of sequential synic
> initialization for each CPU performed by hv_synic_init().
>
> Schedule these tasks parallell
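A hedged sketch of one way to run per-CPU initialization in parallel rather than sequentially; the function and variable names are hypothetical and this shows only the general pattern, not the patch's actual implementation:

#include <linux/workqueue.h>
#include <linux/percpu.h>
#include <linux/cpu.h>

static DEFINE_PER_CPU(struct work_struct, synic_init_work);

static void synic_init_fn(struct work_struct *work)
{
        /* Per-CPU SynIC setup runs here, on the CPU the work was queued to. */
}

static void synic_init_all_cpus(void)
{
        int cpu;

        /* Queue one work item per online CPU so they initialize concurrently. */
        for_each_online_cpu(cpu) {
                struct work_struct *w = per_cpu_ptr(&synic_init_work, cpu);

                INIT_WORK(w, synic_init_fn);
                schedule_work_on(cpu, w);
        }

        /* Wait for all of them before continuing. */
        for_each_online_cpu(cpu)
                flush_work(per_cpu_ptr(&synic_init_work, cpu));
}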
On Tue, Aug 06, 2024 at 02:01:55AM +, Michael Kelley wrote:
> From: Wei Liu Sent: Friday, August 2, 2024 4:50 PM
> >
> > On Tue, Jun 11, 2024 at 07:51:48AM -0700, Roman Kisel wrote:
> > >
> > >
> > > On 6/5/2024 7:55 PM, mhkelle...@gmail.com wrote:
> > > > From: Michael Kelley
> > > >
> > >
> -Original Message-
> From: Souradeep Chakrabarti
> Sent: Friday, August 23, 2024 5:44 AM
> To: KY Srinivasan ; Haiyang Zhang
> ; wei@kernel.org; Dexuan Cui
> ; da...@davemloft.net; eduma...@google.com;
> k...@kernel.org; pab...@redhat.com; Long Li ;
> yury.no...@gmail.com; l...@ke
From: Petr Tesařík Sent: Friday, August 23, 2024 1:20 AM
>
> On Thu, 22 Aug 2024 11:37:16 -0700
> mhkelle...@gmail.com wrote:
>
> > From: Michael Kelley
> >
> > In a CoCo VM, all DMA-based I/O must use swiotlb bounce buffers
> > because DMA cannot be done to private (encrypted) portions of VM
>
From: Petr Tesařík Sent: Friday, August 23, 2024 1:03 AM
>
> On Thu, 22 Aug 2024 11:37:13 -0700
> mhkelle...@gmail.com wrote:
>
> > From: Michael Kelley
> >
> > When a DMA map request is for a SGL, each SGL entry results in an
> > independent mapping operation. If the mapping requires a bounce
From: Petr Tesařík Sent: Friday, August 23, 2024 12:41 AM
>
> On Thu, 22 Aug 2024 11:37:12 -0700
> mhkelle...@gmail.com wrote:
>
> > From: Michael Kelley
> >
> > Implement throttling of swiotlb map requests. Because throttling requires
> > temporarily pending some requests, throttling can only
From: Petr Tesařík Sent: Thursday, August 22, 2024 11:45 PM
>
> Hi all,
>
> upfront, I've had more time to consider this idea, because Michael
> kindly shared it with me back in February.
>
> On Thu, 22 Aug 2024 11:37:11 -0700
> mhkelle...@gmail.com wrote:
>
> > From: Michael Kelley
> >
> > B
From: Erni Sri Satya Vennela Sent: Thursday, August 22, 2024 8:06 AM
>
> Change VMBus channels macro (VRSS_CHANNEL_DEFAULT) in
> Linux netvsc from 8 to 16 to align with Azure Windows VM
> and improve networking throughput.
>
> For VMs having fewer than 16 vCPUs, the channels depend
> on number o
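A small worked example of the sizing rule described above (the macro value is the one the patch proposes; the helper name is illustrative): with 16 or more vCPUs the VM gets 16 channels, with e.g. 8 vCPUs it gets 8.

#include <linux/cpumask.h>
#include <linux/minmax.h>

#define VRSS_CHANNEL_DEFAULT    16      /* raised from 8 by this patch */

static unsigned int netvsc_default_channels(void)
{
        /* Never ask for more channels than there are online vCPUs. */
        return min_t(unsigned int, VRSS_CHANNEL_DEFAULT, num_online_cpus());
}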
Hello:
This patch was applied to netdev/net.git (main)
by David S. Miller :
On Wed, 21 Aug 2024 13:42:29 -0700 you wrote:
> The mana_hwc_rx_event_handler() / mana_hwc_handle_resp() calls
> complete(&ctx->comp_event) before posting the wqe back. It's
> possible that other callers, like mana_create
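A hedged sketch of the ordering fix this notification refers to; the types and helper below are simplified stand-ins for the driver's own, and the repost-before-complete ordering is the only point being illustrated:

#include <linux/completion.h>

struct hwc_ctx { struct completion comp_event; };       /* stand-in */
struct hwc_rxq;                                          /* stand-in */
struct hwc_wr;                                           /* stand-in */

int post_rx_wqe(struct hwc_rxq *rxq, struct hwc_wr *wr); /* stand-in */

static void handle_resp(struct hwc_ctx *ctx, struct hwc_rxq *rxq,
                        struct hwc_wr *wr)
{
        /*
         * Repost the receive WQE before signalling completion: once
         * complete() runs, the waiter may tear the queue down, so
         * posting afterwards can race with the teardown.
         */
        post_rx_wqe(rxq, wr);
        complete(&ctx->comp_event);
}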
On Thu, Aug 22, 2024 at 08:53:09AM -0700, Shradha Gupta wrote:
> Currently the values of WQs for RX and TX queues for MANA devices
> are hardcoded to default sizes.
> Allow configuring these values for MANA devices as ringparam
> configuration (get/set) through ethtool_ops.
> Pre-allocate buffers at
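A hedged sketch of wiring ring sizes into ethtool_ops as the mail describes; the numbers and function names are illustrative, not the MANA driver's actual values:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static void sketch_get_ringparam(struct net_device *ndev,
                                 struct ethtool_ringparam *ring,
                                 struct kernel_ethtool_ringparam *kring,
                                 struct netlink_ext_ack *extack)
{
        ring->rx_max_pending = 8192;    /* illustrative hardware limits */
        ring->tx_max_pending = 8192;
        ring->rx_pending = 512;         /* current sizes come from the device */
        ring->tx_pending = 256;
}

static const struct ethtool_ops sketch_ethtool_ops = {
        .get_ringparam = sketch_get_ringparam,
        /* .set_ringparam would validate the request and recreate the queues. */
};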
Currently napi_disable() gets called during rxq and txq cleanup,
even before napi is enabled and the hrtimer is initialized. This
causes a kernel panic.
? page_fault_oops+0x136/0x2b0
? page_counter_cancel+0x2e/0x80
? do_user_addr_fault+0x2f2/0x640
? refill_obj_stock+0xc4/0x110
? exc_page_fault+0x7
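A hedged sketch of the fix pattern described above: only disable NAPI in the cleanup path if it was actually enabled. The struct and field names are hypothetical, not the MANA driver's own:

#include <linux/netdevice.h>

struct my_rxq {                 /* hypothetical per-queue state */
        struct napi_struct napi;
        bool napi_enabled;
};

static void my_rxq_cleanup(struct my_rxq *rxq)
{
        /* Skip napi_disable() when NAPI was never enabled for this queue,
         * e.g. when cleanup runs on an early error path. */
        if (rxq->napi_enabled) {
                napi_synchronize(&rxq->napi);
                napi_disable(&rxq->napi);
                rxq->napi_enabled = false;
        }
        netif_napi_del(&rxq->napi);
}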
On Thu, 22 Aug 2024 11:37:18 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> In a CoCo VM, all DMA-based I/O must use swiotlb bounce buffers
> because DMA cannot be done to private (encrypted) portions of VM
> memory. The bounce buffer memory is marked shared (decrypted) at
> boot t
On Thu, 22 Aug 2024 11:37:17 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> The NVMe setting that controls the BLK_MQ_F_BLOCKING flag on the
> request queue is currently a flag in struct nvme_ctrl_ops, where
> it is not writable. A new use case needs this flag to be writable
> base
On Thu, 22 Aug 2024 11:37:16 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> In a CoCo VM, all DMA-based I/O must use swiotlb bounce buffers
> because DMA cannot be done to private (encrypted) portions of VM
> memory. The bounce buffer memory is marked shared (decrypted) at
> boot t
On Thu, 22 Aug 2024 11:37:15 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> Extend the SCSI DMA mapping interfaces by adding the "_attrs" variant
> of scsi_dma_map(). This variant allows passing DMA_ATTR_* values, such
> as is needed to support swiotlb throttling. The existing scsi
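A hedged sketch of what an "_attrs" variant of scsi_dma_map() could look like, mirroring how the existing scsi_dma_map() wraps dma_map_sg(); this is not necessarily the series' exact code:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <linux/dma-mapping.h>

static int scsi_dma_map_attrs_sketch(struct scsi_cmnd *cmd, unsigned long attrs)
{
        int nseg = 0;

        if (scsi_sg_count(cmd)) {
                struct device *dev = cmd->device->host->dma_dev;

                /* Same as scsi_dma_map(), but with caller-supplied DMA attrs. */
                nseg = dma_map_sg_attrs(dev, scsi_sglist(cmd),
                                        scsi_sg_count(cmd),
                                        cmd->sc_data_direction, attrs);
                if (unlikely(!nseg))
                        return -ENOMEM;
        }
        return nseg;
}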
On Thu, 22 Aug 2024 11:37:14 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> With the addition of swiotlb throttling functionality, storage
> device drivers may want to know whether using the DMA_ATTR_MAY_BLOCK
> attribute is useful. In a CoCo VM or environment where swiotlb=force
>
On Thu, 22 Aug 2024 11:37:13 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> When a DMA map request is for a SGL, each SGL entry results in an
> independent mapping operation. If the mapping requires a bounce buffer
> due to running in a CoCo VM or due to swiotlb=force on the boot l
On Thu, 22 Aug 2024 11:37:12 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> Implement throttling of swiotlb map requests. Because throttling requires
> temporarily pending some requests, throttling can only be used by map
> requests made in contexts that can block. Detecting such c
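A hedged sketch of the throttling idea being reviewed in this thread: bound the number of in-flight bounce-buffer mappings and only block callers that have declared they may block. DMA_ATTR_MAY_BLOCK is the attribute this series proposes (not a mainline flag); the semaphore-based body below only illustrates the concept, not the actual implementation.

#include <linux/semaphore.h>
#include <linux/dma-mapping.h>

#ifndef DMA_ATTR_MAY_BLOCK
#define DMA_ATTR_MAY_BLOCK (1UL << 10)  /* proposed by this series; value illustrative */
#endif

/* Sized at init, e.g. sema_init(&swiotlb_throttle_sem, high_water_mark). */
static struct semaphore swiotlb_throttle_sem;

static void swiotlb_throttle_enter(unsigned long attrs)
{
        /* Only requests made from blockable context may be throttled. */
        if (attrs & DMA_ATTR_MAY_BLOCK)
                down(&swiotlb_throttle_sem);
}

static void swiotlb_throttle_exit(unsigned long attrs)
{
        if (attrs & DMA_ATTR_MAY_BLOCK)
                up(&swiotlb_throttle_sem);
}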