On Tue, Sep 19, 2023 at 09:23:37AM +0530, Anup Patel wrote:
> The Veyron-V1 CPU supports custom conditional arithmetic and
> conditional-select/move operations, referred to as the XVentanaCondOps
> extension. In fact, QEMU RISC-V also has support for emulating the
> XVentanaCondOps extension.
>
> Let us de
On Mon, Oct 02, 2023 at 09:06:08PM +0530, Anup Patel wrote:
> > extensions?
> >
>
> We already have a few T-Head specific extensions, so Linux RISC-V
> does allow vendor extensions.
Only for kernel internal operation and to actually boot the
chip. IMHO still the wrong tradeoff, but very different f
Please don't spread scatterlists further. They are a bad data structure
that mixes input data (page, offset, len) and output data (phys_addr,
dma_offset, dma_len), and does so in a horrible way for iommu mappings that
can coalesce. Jason and coworkers have been looking into the long
overdue API to bett
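For reference, the structure under criticism looks roughly like this
(simplified from include/linux/scatterlist.h): the first three fields are the
caller's input, the last two are output filled in by dma_map_sg(), and an
IOMMU that coalesces segments can leave fewer valid output entries than
input entries:

    /* Simplified sketch of struct scatterlist: DMA-mapping input and
     * output live side by side in the same entry. */
    struct scatterlist {
            unsigned long   page_link;      /* input: page (plus chaining bits) */
            unsigned int    offset;         /* input: offset into the page      */
            unsigned int    length;         /* input: segment length            */
            dma_addr_t      dma_address;    /* output: device-visible address   */
    #ifdef CONFIG_NEED_SG_DMA_LENGTH
            unsigned int    dma_length;     /* output: may span several
                                             * coalesced input segments         */
    #endif
    };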
On Tue, Dec 12, 2023 at 08:25:35AM -0400, Jason Gunthorpe wrote:
> > +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> > +{
> > + if (page_is_page_pool_iov(page))
> > + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> > +
> > + DEBUG
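For context, the helper above relies on low-bit pointer tagging. A minimal
sketch of the scheme, assuming PP_IOV is the low bit and genuine struct page
pointers are always word-aligned so that bit is free:

    #define PP_IOV  0x01UL  /* tag bit stolen from aligned pointers */

    static inline bool page_is_page_pool_iov(const struct page *page)
    {
            /* a set tag bit means this is really a page_pool_iov */
            return (unsigned long)page & PP_IOV;
    }

    static inline struct page *page_pool_iov_to_page(struct page_pool_iov *iov)
    {
            /* inverse of the cast above: re-apply the tag to recreate the
             * tagged (fake) page pointer */
            return (struct page *)((unsigned long)iov | PP_IOV);
    }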
On Thu, Dec 14, 2023 at 06:20:27AM +0000, patchwork-bot+netdev...@kernel.org wrote:
> Hello:
>
> This series was applied to netdev/net-next.git (main)
> by Jakub Kicinski:
Umm, this is still very broken in interaction with other subsystems.
Please don't push ahead so quickly.
On Wed, Dec 13, 2023 at 10:51:25PM -0800, Mina Almasry wrote:
> On Wed, Dec 13, 2023 at 10:49 PM Christoph Hellwig wrote:
> >
> > On Thu, Dec 14, 2023 at 06:20:27AM +0000,
> > patchwork-bot+netdev...@kernel.org wrote:
> > > Hello:
> > >
> > > This
On Mon, Mar 04, 2024 at 06:01:37PM -0800, Mina Almasry wrote:
> From: Jakub Kicinski
>
> The page providers which try to reuse the same pages will
> need to hold onto the ref, even if page gets released from
> the pool - as in releasing the page from the pp just transfers
> the "ownership" refere
On Sun, Mar 17, 2024 at 07:49:43PM -0700, David Wei wrote:
> I'm working on a similar proposal for zero copy Rx but to host memory
> and depend on this memory provider API.
Why do you need a different provider for that vs just udmabuf?
> Jakub also designed this API for hugepages too IIRC. Basica
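For the host-memory case, udmabuf already wraps anonymous memory in a
dma-buf. A minimal userspace sketch (error handling omitted; BUF_SIZE is an
arbitrary choice, and the udmabuf module must be loaded):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/udmabuf.h>

    #define BUF_SIZE (2UL << 20)

    int create_host_mem_dmabuf(void)
    {
            int memfd = memfd_create("rx-buf", MFD_ALLOW_SEALING);

            ftruncate(memfd, BUF_SIZE);
            /* udmabuf requires the backing memfd to be sealed against shrinking */
            fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

            struct udmabuf_create create = {
                    .memfd  = memfd,
                    .offset = 0,
                    .size   = BUF_SIZE,
            };
            int dev = open("/dev/udmabuf", O_RDWR);

            /* returns a dma-buf fd backed by the memfd's pages */
            return ioctl(dev, UDMABUF_CREATE, &create);
    }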
On Fri, Mar 22, 2024 at 04:19:44PM -0700, Jakub Kicinski wrote:
> On Fri, 22 Mar 2024 10:40:26 -0700 Mina Almasry wrote:
> > Other designs for this hugepage use case are possible, I'm just
> > describing Jakub's idea for it as a potential use-case for these
> > hooks.
>
> I made it ops because I
On Fri, Mar 22, 2024 at 10:54:54AM -0700, Mina Almasry wrote:
> Sorry I don't mean to argue but as David mentioned, there are some
> plans in the works and ones not in the works to extend this to other
> memory types. David mentioned io_uring & Jakub's huge page use cases
> which may want to re-use
On Fri, Mar 22, 2024 at 10:40:26AM -0700, Mina Almasry wrote:
> Hi Christoph,
>
> Sorry for the late reply, I've been out for a few days.
>
> On Mon, Mar 18, 2024 at 4:22 PM Christoph Hellwig wrote:
> >
> > On Sun, Mar 17, 2024 at 07:49:43PM -0700, David W
On Tue, Mar 26, 2024 at 01:19:20PM -0700, Mina Almasry wrote:
>
> Are you envisioning that dmabuf support would be added to the block
> layer
Yes.
> (which I understand is part of the VFS and not driver specific),
The block layer isn't really the VFS, it's just another core stack
like the netwo
On Mon, Jun 10, 2024 at 02:38:18PM +0200, Christian König wrote:
> Well there is the fundamental problem that you can't use io_uring to
> implement the semantics necessary for a dma_fence.
What is the exact problem there?
On Mon, Jun 10, 2024 at 09:16:43AM -0600, David Ahern wrote:
>
> exactly. io_uring, page_pool, dmabuf - all kernel building blocks for
> solutions. This why I was pushing for Mina's set not to be using the
> name `devmem` - it is but one type of memory and with dmabuf it should
> not matter if it
On Fri, Jun 07, 2024 at 02:45:55PM +0100, Pavel Begunkov wrote:
> On 6/5/24 09:24, Christoph Hellwig wrote:
> > On Mon, Jun 03, 2024 at 03:52:32PM +0100, Pavel Begunkov wrote:
> > > The question for Christoph is what exactly is the objection here? Why we
> > > would no
On Thu, Jun 13, 2024 at 10:26:11AM +0530, Nilay Shroff wrote:
> I am wondering whether we really need the _rcu version of list_cut here?
> I think that @head could point to an _rcu protected list and that's true
> for this patch. So there might be concurrent readers accessing @head using
> _rcu li
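A sketch of the pattern behind the question (generic names, not the patch's
actual code): readers traverse @head under RCU only, while plain
list_cut_position() on the writer side gives those readers no consistency
guarantee:

    struct item {
            struct list_head entry;
    };

    /* reader side: no lock held, relies on RCU list primitives */
    rcu_read_lock();
    list_for_each_entry_rcu(it, head, entry)
            inspect(it);
    rcu_read_unlock();

    /* writer side: moves everything up to @pos onto a private list;
     * plain list_cut_position() is not RCU-safe with respect to the
     * readers above, hence the question about an _rcu version */
    list_cut_position(&private, head, pos);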
On Mon, Jun 17, 2024 at 07:04:43PM +0100, Pavel Begunkov wrote:
> > There should be no other memory source other than the page allocator
> > and dmabuf. If you need different lifetime control for your
> > zero copy proposal, don't mix that up with the control of the memory
> > source.
>
> No idea
On Wed, Jun 19, 2024 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> If you can't agree with the guest_memfd people on how to get there
> then maybe you need a guest_memfd2 for this slightly different special
> stuff instead of intruding on the core mm so much. (though that would
> be sad)
Or we're
Still NAK to creating arbitrary hooks here. This should be a page or
dmabuf pool and not an indirect call abstraction allowing random
crap to hook into it.
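The "indirect call abstraction" refers to an open-ended ops table along
these lines (field names illustrative, not the exact patch):

    /* Any module can implement these hooks and feed the page pool its
     * own memory, which is exactly what is being NAKed here. */
    struct memory_provider_ops {
            int          (*init)(struct page_pool *pool);
            void         (*destroy)(struct page_pool *pool);
            struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
            bool         (*release_page)(struct page_pool *pool,
                                         struct page *page);
    };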
On Fri, Apr 26, 2024 at 05:17:52PM -0700, David Wei wrote:
> On 2024-04-02 5:20 pm, Mina Almasry wrote:
> > @@ -69,20 +106,26 @@ net_iov_binding(const struct net_iov *niov)
> > */
> > typedef unsigned long __bitwise netmem_ref;
> >
> > +static inline bool netmem_is_net_iov(const netmem_ref net
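For context: netmem_ref carries the same low-bit tagging idea as the earlier
struct page revision, but behind a __bitwise typedef so sparse flags any
accidental mixing with plain pointers. Roughly as it later landed upstream:

    #define NET_IOV 0x01UL

    static inline bool netmem_is_net_iov(const netmem_ref netmem)
    {
            /* __force: deliberately peeking through the __bitwise type */
            return (__force unsigned long)netmem & NET_IOV;
    }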
On Fri, May 03, 2024 at 01:10:44PM -0700, Mina Almasry wrote:
> Is the concern still that folks may be able to hook proprietary stuff
> into this like you mentioned before[1]?
That is one concern. The other is that people will do stupid stuff
even in tree if you give them enough rope, and they sho
On Tue, May 07, 2024 at 01:18:57PM -0300, Jason Gunthorpe wrote:
> On Tue, May 07, 2024 at 05:05:12PM +0100, Pavel Begunkov wrote:
> > > even in tree if you give them enough rope, and they should not have
> > > that rope when the only sensible options are page/folio based kernel
> > > memory (incud
On Wed, May 08, 2024 at 12:35:52PM +0100, Pavel Begunkov wrote:
> > all these, because e.g. ttm internally does have a page pool because
> > depending upon allocator, that's indeed beneficial. Other drm drivers have
> > more buffer-based concepts for opportunistically keeping memory around, usually
> > by
On Wed, May 08, 2024 at 06:02:14PM +0100, Pavel Begunkov wrote:
> Well, the example fell flat, but you don't use dmabuf when there are
> no upsides from using it. For instance, when you already have pinned
> pages, you're going to use pages, and there are no other refcounting
> concerns.
Sure.
>
On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> I'm unsure if the discussion has been resolved yet. Sending the series
> anyway to get reviews/feedback on the (unrelated) rest of the series.
As far as I'm concerned it is not. I've not seen any convincing
argument for more than pag
On Mon, Jun 03, 2024 at 07:17:05AM -0700, Mina Almasry wrote:
> On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig wrote:
> >
> > On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> > > I'm unsure if the discussion has been resolved yet. Sending the serie
On Mon, Jun 03, 2024 at 03:52:32PM +0100, Pavel Begunkov wrote:
> The question for Christoph is what exactly is the objection here? Why
> would we not be using well-defined ops when we know there will be more
> users?
The point is that there should be no more users. If you need another
case you a