Hi,
You may have already found out that there's a problem using
pci_alloc_consistent and friends in the USB layer which will
only be obvious on CPUs where they need to do page table remapping
- that is, pci_alloc_consistent/pci_free_consistent aren't
guaranteed to be interrupt-safe.
I'm not [...]
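Because an allocator that remaps page tables may sleep, the usual workaround is to allocate all the consistent buffers up front at driver-init time and recycle them through a free list, so the interrupt path never calls the allocator at all. A minimal userspace sketch of that pattern (the `td_pool_*` names are hypothetical, and malloc stands in for pci_alloc_consistent; a real driver would also hold a spinlock around the list operations):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Sketch: pre-allocate TD-sized buffers where sleeping is allowed, then
 * hand them out from a free list so the interrupt path never calls the
 * allocator.  malloc() stands in for pci_alloc_consistent(), which is
 * not guaranteed to be interrupt-safe. */

struct td_buf {
    struct td_buf *next;   /* free-list link */
    /* ... hardware TD fields would follow ... */
};

static struct td_buf *free_list;

/* Called once at driver init (process context): build the pool. */
static int td_pool_init(size_t count)
{
    while (count--) {
        struct td_buf *b = malloc(sizeof *b); /* really pci_alloc_consistent */
        if (!b)
            return -1;
        b->next = free_list;
        free_list = b;
    }
    return 0;
}

/* Usable from interrupt context: no allocator call, just list ops. */
static struct td_buf *td_pool_get(void)
{
    struct td_buf *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

static void td_pool_put(struct td_buf *b)
{
    b->next = free_list;
    free_list = b;
}
```

When the pool runs dry, td_pool_get() returns NULL and the caller must defer the request rather than allocate; that is the price of keeping the hot path allocation-free.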
On Sat, Jan 20, 2001, Russell King <[EMAIL PROTECTED]> wrote:
> Johannes Erdfelt writes:
> > They need to be visible via DMA. They need to be 16 byte aligned. We
> > also have QH's which have similar requirements, but we don't use as many
> > of them.
>
> Can we get away from the "16 byte aligned" and make it "n byte aligned"?
> I believe that slab already has support fo [...]
Russell King wrote:
Manfred Spraul writes:
> Not yet, but that would be a 2 line patch (currently it's hardcoded to
> BYTES_PER_WORD align or L1_CACHE_BYTES, depending on the HWCACHE_ALIGN
> flag).

I don't think there's a problem then. However, if slab can be told "I want
1024 bytes aligned to 1024 bytes" then I ca [...]
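The "n byte aligned" request the thread is circling around reduces to rounding sizes and offsets up to a power of two, so that objects carved from an aligned page at that spacing stay aligned. A sketch of the arithmetic (align_up is an illustrative helper name, not the kernel's own macro):

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of align, where align is a power of
 * two - 16 for UHCI TDs, or 1024 for "1024 bytes aligned to 1024 bytes".
 * Objects placed at multiples of align from an aligned base address all
 * inherit that alignment, which is what a parameterized slab would rely
 * on.  Illustrative helper, not the kernel's implementation. */
static size_t align_up(size_t x, size_t align)
{
    return (x + align - 1) & ~(align - 1);
}
```

For example, a 33-byte object padded for 16-byte alignment occupies 48 bytes, and a "1024 bytes aligned to 1024 bytes" request needs no padding at all since the size already equals the alignment.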
Russell King wrote:
Johannes Erdfelt writes:
> They need to be visible via DMA. They need to be 16 byte aligned. We
> also have QH's which have similar requirements, but we don't use as many
> of them.

Can we get away from the "16 byte aligned" and make it "n byte aligned"?
I believe that slab already has support fo [...]
On Sat, Jan 20, 2001, Manfred Spraul <[EMAIL PROTECTED]> wrote:
> TD's are around 32 bytes big (actually, they may be 48 or even 64 now, I
> haven't checked recently). That's a waste of space for an entire page.
>
> However, having every driver implement its own slab cache seems a
> complete waste of time when we already have the code to do so in
> mm [...]
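The alternative being weighed here - many small TDs carved from one page instead of a page per TD - can be sketched as a toy slab. This is a userspace stand-in, not the kernel's mm code: posix_memalign replaces the page allocator, and TD_SIZE uses the thread's larger 64-byte estimate so it divides the page evenly.

```c
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define TD_SIZE   64   /* thread's high estimate; must divide PAGE_SIZE */

/* Toy slab: one page-aligned page carved into TD_SIZE chunks threaded
 * onto a free list (the first word of each free chunk links to the
 * next).  Because the page is page-aligned and TD_SIZE is a multiple of
 * 16, every chunk is 16-byte aligned as UHCI requires. */
struct td_slab {
    void *page;
    void *free;   /* head of the free list */
};

static int td_slab_init(struct td_slab *s)
{
    if (posix_memalign(&s->page, PAGE_SIZE, PAGE_SIZE))
        return -1;
    s->free = NULL;
    for (size_t off = 0; off < PAGE_SIZE; off += TD_SIZE) {
        void *chunk = (char *)s->page + off;
        *(void **)chunk = s->free;
        s->free = chunk;
    }
    return 0;
}

static void *td_slab_alloc(struct td_slab *s)
{
    void *chunk = s->free;
    if (chunk)
        s->free = *(void **)chunk;
    return chunk;
}

static void td_slab_free(struct td_slab *s, void *chunk)
{
    *(void **)chunk = s->free;
    s->free = chunk;
}
```

One page now serves 64 TDs instead of one, which is Manfred's point; his objection is only that each driver should not reimplement this when mm already provides it.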
On Sat, Jan 20, 2001, Russell King <[EMAIL PROTECTED]> wrote:
> Johannes Erdfelt writes:
> > On Fri, Jan 19, 2001, Miles Lane <[EMAIL PROTECTED]> wrote:
> > > Johannes Erdfelt wrote:
> > >
> > > > TODO
> > > >
> > > > - The PCI DMA architecture is horribly inefficient on x86 and ia64. The
> > > > result is a page is allocated for each TD. This is evil. Perhaps a slab
> > > > cache internally? Or modify the [...]
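The "horribly inefficient" claim in the TODO is easy to quantify. A back-of-envelope check, using the thread's own figures (4096-byte page, 32-byte low estimate for a TD):

```c
#include <assert.h>

/* How many TDs a sub-page (slab-style) allocator could pack into one
 * page, versus the bytes left unused when one TD owns a whole page.
 * Sizes are the thread's estimates, not measured values. */
static int tds_per_page(int page_size, int td_size)
{
    return page_size / td_size;
}

static int bytes_wasted(int page_size, int td_size)
{
    return page_size - td_size;
}
```

With a page per 32-byte TD, 4064 of 4096 bytes - over 99% - go unused, while a slab-style carve would fit 128 TDs in the same page; even at the 64-byte estimate it fits 64.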