Hi all,
upfront, I've had more time to consider this idea, because Michael
kindly shared it with me back in February.
On Thu, 22 Aug 2024 11:37:11 -0700
mhkelle...@gmail.com wrote:
> From: Michael Kelley
>
> Background
> ==
Linux device drivers may make DMA map/unmap calls in contexts …
On Fri, 23 Aug 2024 02:20:41 +
Michael Kelley wrote:
> From: Bart Van Assche
> Sent: Thursday, August 22, 2024 12:29 PM
> >
> > On 8/22/24 11:37 AM, mhkelle...@gmail.com wrote:
> > > Linux device drivers may make DMA map/unmap calls in contexts that
> > > cannot block, such as in an interrupt handler. …
From: Bart Van Assche
Sent: Thursday, August 22, 2024 12:29 PM
>
> On 8/22/24 11:37 AM, mhkelle...@gmail.com wrote:
> > Linux device drivers may make DMA map/unmap calls in contexts that
> > cannot block, such as in an interrupt handler.
>
> Although I really appreciate your work, what alternatives have been considered? …
On 8/22/24 11:37 AM, mhkelle...@gmail.com wrote:
Linux device drivers may make DMA map/unmap calls in contexts that
cannot block, such as in an interrupt handler.
Although I really appreciate your work, what alternatives have been
considered? How many drivers perform DMA mapping from atomic context? …
From: Michael Kelley
In a CoCo VM, all DMA-based I/O must use swiotlb bounce buffers
because DMA cannot be done to private (encrypted) portions of VM
memory. The bounce buffer memory is marked shared (decrypted) at
boot time, so I/O is done to/from the bounce buffer memory and then
copied by the …
From: Michael Kelley
The NVMe setting that controls the BLK_MQ_F_BLOCKING flag on the
request queue is currently a flag in struct nvme_ctrl_ops, where
it is not writable. A new use case needs this flag to be writable
based on a determination made during the NVMe device probe function.
Move this …
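The snippet above describes making a per-driver capability flag writable per controller. The following is a hypothetical, simplified mock-up of that idea (the struct and function names are invented for illustration, not the real NVMe code): a flag that used to live in a shared, read-only ops table moves into per-controller state, so the probe path can decide it at runtime.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock stand-ins for the real structures (names are hypothetical). */
struct mock_ctrl_ops {
    /* old location: fixed for every controller sharing these ops */
    bool blocking;
};

struct mock_ctrl {
    const struct mock_ctrl_ops *ops;
    /* new location: writable per controller */
    bool blocking;
};

/* Probe-time decision, e.g. "does this device need swiotlb throttling?" */
static void mock_probe(struct mock_ctrl *ctrl, bool needs_throttling)
{
    ctrl->blocking = needs_throttling;   /* runtime determination */
}

static bool mock_queue_is_blocking(const struct mock_ctrl *ctrl)
{
    return ctrl->blocking;               /* read from ctrl, not ops */
}
```

The point of the move is that two controllers sharing the same ops table can now differ in this setting.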
From: Michael Kelley
In a CoCo VM, all DMA-based I/O must use swiotlb bounce buffers
because DMA cannot be done to private (encrypted) portions of VM
memory. The bounce buffer memory is marked shared (decrypted) at
boot time, so I/O is done to/from the bounce buffer memory and then
copied by the …
From: Michael Kelley
Extend the SCSI DMA mapping interfaces by adding the "_attrs" variant
of scsi_dma_map(). This variant allows passing DMA_ATTR_* values, such
as is needed to support swiotlb throttling. The existing scsi_dma_map()
interface is unchanged, so no incompatibilities are introduced.
From: Michael Kelley
With the addition of swiotlb throttling functionality, storage
device drivers may want to know whether using the DMA_ATTR_MAY_BLOCK
attribute is useful. In a CoCo VM or environment where swiotlb=force
is used, the MAY_BLOCK attribute enables swiotlb throttling. But if
throttling …
From: Michael Kelley
When a DMA map request is for a SGL, each SGL entry results in an
independent mapping operation. If the mapping requires a bounce buffer
due to running in a CoCo VM or due to swiotlb=force on the boot line,
swiotlb is invoked. If swiotlb throttling is enabled for the request, …
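The snippet above notes that each SGL entry is mapped by an independent operation. A toy sketch of that loop structure, with invented names (not the real DMA API), might look like this:

```c
#include <assert.h>

struct mock_sg { int len; };  /* stand-in for a scatterlist entry */

/* Mock per-entry mapping: pretend zero-length entries fail to map. */
static int mock_map_one(const struct mock_sg *sg)
{
    return sg->len > 0 ? 0 : -1;
}

/* Map each entry independently; return how many entries were mapped,
 * stopping at the first failure. */
static int mock_map_sgl(const struct mock_sg *sgl, int nents)
{
    for (int i = 0; i < nents; i++)
        if (mock_map_one(&sgl[i]) < 0)
            return i;
    return nents;
}
```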
From: Michael Kelley
Implement throttling of swiotlb map requests. Because throttling requires
temporarily pending some requests, throttling can only be used by map
requests made in contexts that can block. Detecting such contexts at
runtime is infeasible, so device driver code must be updated to …
From: Michael Kelley
Background
==
Linux device drivers may make DMA map/unmap calls in contexts that
cannot block, such as in an interrupt handler. Consequently, when a
DMA map call must use a bounce buffer, the allocation of swiotlb
memory must always succeed immediately. If swiotlb memory …
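The throttling policy sketched in the cover letter and patch snippets above can be modeled in a few lines. This is a toy model with invented names and a made-up high-water mark, not the kernel implementation: usage is tracked against a limit, a request marked MAY_BLOCK over the limit is throttled, and a request that cannot block always proceeds so atomic contexts never stall.

```c
#include <assert.h>
#include <stdbool.h>

#define MOCK_HIGH_WATER 4   /* hypothetical throttling threshold */

static int in_flight;       /* current bounce-buffer usage */

enum mock_outcome { MAPPED_NOW, WOULD_BLOCK };

static enum mock_outcome mock_swiotlb_map(bool may_block)
{
    if (in_flight >= MOCK_HIGH_WATER && may_block)
        return WOULD_BLOCK;   /* blockable request gets throttled */
    in_flight++;              /* non-blockable requests always proceed */
    return MAPPED_NOW;
}

static void mock_swiotlb_unmap(void)
{
    in_flight--;
}
```

This also illustrates why drivers must opt in: only a caller that declared MAY_BLOCK can safely be made to wait.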
Currently the WQ sizes for the RX and TX queues of MANA devices
are hardcoded to default sizes.
Allow configuring these values for MANA devices as ringparam
configuration (get/set) through ethtool_ops.
Pre-allocate buffers at the beginning of this operation, to
prevent complete network loss in low …
On Thu, 22 Aug 2024 05:51:54 + Mina Almasry wrote:
> When net devices propagate xdp configurations to slave devices,
> we will need to perform a memory provider check to ensure we're
> not binding xdp to a device using unreadable netmem.
>
> Currently the ->ndo_bpf calls in a few places. Adding …
Change the VMBus channels macro (VRSS_CHANNEL_DEFAULT) in
Linux netvsc from 8 to 16 to align with Azure Windows VMs
and improve networking throughput.
For VMs with fewer than 16 vCPUs, the channel count depends
on the number of vCPUs. Between 16 and 64 vCPUs, the channel
count defaults to VRSS_CHANNEL_DEFAULT. For gre …
> -----Original Message-----
> From: Christophe JAILLET
> Sent: Thursday, August 22, 2024 1:34 AM
> To: Haiyang Zhang
> Cc: a...@kernel.org; b...@vger.kernel.org; dan...@iogearbox.net;
> da...@davemloft.net; Dexuan Cui ;
> eduma...@google.com; h...@kernel.org; jesse.brandeb...@intel.com;
> john …
From: Saurabh Sengar
For primary VMBus channels, the primary_channel pointer is always NULL.
This pointer is valid only for the secondary channels.
Fix the NULL pointer dereference by retrieving the device_obj from the
parent in the absence of a valid primary_channel pointer.
Fixes: ca3cda6fcf1e ("uio_hv …
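The fix described above amounts to a NULL-guarded fallback. A simplified mock of the idea (invented struct names, not the real VMBus code): for a primary channel, primary_channel is NULL, so the device object must come from the channel's parent instead of being blindly dereferenced.

```c
#include <assert.h>
#include <stddef.h>

struct mock_device { int id; };

struct mock_channel {
    struct mock_channel *primary_channel; /* NULL for primary channels */
    struct mock_device *parent_device;
    struct mock_device *device_obj;
};

/* Resolve the device object safely: secondary channels reach it through
 * their primary channel; primary channels fall back to the parent. */
static struct mock_device *mock_get_device(struct mock_channel *ch)
{
    if (ch->primary_channel)
        return ch->primary_channel->device_obj;
    return ch->parent_device;
}
```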
Rescind offer handling relies on rescind callbacks for some of the
resource cleanup, if they are registered. It does not unregister the
vmbus device on primary channel closure when a callback is registered.
Add logic to unregister the vmbus device for the primary channel in the
rescind callback to ensure channel …
Fix a few issues in rescind handling in uio_hv_generic driver.
Patches are based on linux-next-rc4 tip.
Steps to reproduce issue:
* Probe the uio_hv_generic driver and create channels to use fcopy
* Disable the guest service on the host and then enable it.
or
* repeatedly do cat "/dev/uioX" on the device