On Wednesday 13 June 2007 20:40:11 Matt Mackall wrote:
> On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> > Andrew Morton wrote:
> > >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven
> > ><[EMAIL PROTECTED]> wrote:
> > >
> > >>Andrew Morton wrote:
> > Whereas resource pool is exactly the opposite of mempool…
On Mon, Jun 11, 2007 at 06:55:46PM -0700, Arjan van de Ven wrote:
> Andrew Morton wrote:
> >On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven
> ><[EMAIL PROTECTED]> wrote:
> >
> >>Andrew Morton wrote:
> Whereas resource pool is exactly the opposite of mempool, where each
> time it looks for an object in the pool…
Andrew Morton wrote:
On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven <[EMAIL PROTECTED]> wrote:
Andrew Morton wrote:
Whereas resource pool is exactly the opposite of mempool, where each
time it looks for an object in the pool and if it exists then we
return that object, else we try to get the memory from the OS…
On Mon, 11 Jun 2007 18:10:40 -0700 Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> Andrew Morton wrote:
> >> Whereas resource pool is exactly the opposite of mempool, where each
> >> time it looks for an object in the pool and if it exists then we
> >> return that object, else we try to get the memory from the OS…
On Mon, 11 Jun 2007, Arjan van de Ven wrote:
> the problem with that is that if anything downstream from the iommu layer ALSO
> needs memory, we've now eaten up the last free page and things go splat.
Hmmm... We need something like a reservation system, right? Higher levels
in an atomic context could…
Andrew Morton wrote:
Whereas resource pool is exactly the opposite of mempool, where each
time it looks for an object in the pool and if it exists then we
return that object, else we try to get the memory from the OS while
scheduling the work to grow the pool objects. In fact, the work
is scheduled to grow the pool…
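For reference, a minimal C sketch of the pool-first behaviour described above; all names here (struct respool, respool_alloc, grow_work) are illustrative assumptions rather than the actual respool.c interface:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Hypothetical pool: field names are assumptions, not the patch's API. */
struct respool {
	spinlock_t lock;
	struct list_head free_list;   /* preallocated objects, linked through their first bytes */
	size_t obj_size;              /* must be >= sizeof(struct list_head) for this scheme */
	unsigned int count;           /* objects currently in the pool */
	unsigned int min_count;       /* schedule growth when count drops below this */
	unsigned int grow_count;      /* how many extra objects growth adds */
	struct work_struct grow_work; /* refills the pool from process context */
};

void *respool_alloc(struct respool *pool)
{
	struct list_head *e = NULL;
	unsigned long flags;

	/* First look for an object in the pool... */
	spin_lock_irqsave(&pool->lock, flags);
	if (!list_empty(&pool->free_list)) {
		e = pool->free_list.next;
		list_del(e);
		pool->count--;
	}
	/* ...and schedule background growth if it is running low. */
	if (pool->count < pool->min_count)
		schedule_work(&pool->grow_work);
	spin_unlock_irqrestore(&pool->lock, flags);

	/* Pool empty: go to the OS as the worst case. */
	return e ? (void *)e : kmalloc(pool->obj_size, GFP_ATOMIC);
}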
On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:
> slab allocators don't reserve the memory; in other words this memory
> can be consumed by the VM under memory pressure, which we don't want in
> the IOMMU case.
So mempools…
> Nope, they both are exactly opposite.
> mempool with GFP_ATOMIC first tries…
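For contrast, the stock mempool API really does behave the way Anil describes: mempool_alloc() calls the underlying allocator first and dips into the preallocated reserve only when that fails, and with GFP_ATOMIC it cannot sleep waiting for elements to be freed back. A minimal sketch against the current kernel API (the cache name, object size, and reserve of 64 are arbitrary assumptions):

#include <linux/mempool.h>
#include <linux/slab.h>
#include <linux/types.h>

static struct kmem_cache *iova_cache;
static mempool_t *iova_pool;

static int __init iova_pool_init(void)
{
	iova_cache = kmem_cache_create("iova", 4 * sizeof(u64), 0, 0, NULL);
	if (!iova_cache)
		return -ENOMEM;
	/* reserve 64 objects that only mempool_alloc() may hand out */
	iova_pool = mempool_create_slab_pool(64, iova_cache);
	return iova_pool ? 0 : -ENOMEM;
}

/* Tries the slab allocator first; falls back to the reserve on failure. */
static void *iova_get(void)
{
	return mempool_alloc(iova_pool, GFP_ATOMIC);
}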
On Mon, 11 Jun 2007 16:52:08 -0700 "Keshavamurthy, Anil S" <[EMAIL PROTECTED]>
wrote:
> On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> > On Mon, 11 Jun 2007 13:44:42 -0700
> > "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> >
> > > In the first implementation of ours, we had used mempool APIs…
On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> On Mon, 11 Jun 2007 13:44:42 -0700
> "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
>
> > In the first implementation of ours, we had used mempool APIs to
> > allocate memory and we were told that mempools with GFP_ATOMIC are
> > useless…
On Tue, 12 Jun 2007, Andi Kleen wrote:
> > If the only option is to panic then something's busted. If it's network IO
> > then there should be a way of dropping the frame. If it's disk IO then we
> > should report the failure and cause an IO error.
>
> A block IO error is basically catastrophic for the system too…
On Tue, Jun 12, 2007 at 12:25:57AM +0200, Andi Kleen wrote:
>
> > Please advise.
>
> I think the only safe short-term option would be to fully preallocate an
> aperture.
> If it is too small you can try GFP_ATOMIC, but it would be just
> an unreliable fallback. For safety you could perhaps have some kernel thread
> that tries to enlarge it in the background…
>
> If the only option is to panic then something's busted. If it's network IO
> then there should be a way of dropping the frame. If it's disk IO then we
> should report the failure and cause an IO error.
A block IO error is basically catastrophic for the system too. There isn't
really a co…
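For what Andrew's "drop the frame" option looks like in practice, here is a sketch against the current DMA API in a hypothetical network driver's transmit path (foo_xmit and the driver layout are assumptions; dma_map_single() and dma_mapping_error() are the real calls):

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct device *dma_dev = dev->dev.parent;
	dma_addr_t mapping;

	mapping = dma_map_single(dma_dev, skb->data, skb->len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, mapping)) {
		/* Mapping failed: drop the frame rather than panic. */
		dev->stats.tx_dropped++;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;	/* frame consumed; the stack will not retry */
	}

	/* ... otherwise hand the descriptor to the hardware ... */
	return NETDEV_TX_OK;
}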
> Please advise.
I think the only safe short-term option would be to fully preallocate an
aperture.
If it is too small you can try GFP_ATOMIC, but it would be just
an unreliable fallback. For safety you could perhaps have some kernel thread
that tries to enlarge it in the background depending on c…
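A sketch of that suggestion: a kernel thread that watches utilization and enlarges the preallocated aperture from process context, where sleeping allocations are legal. Both helpers and the 75% threshold are hypothetical placeholders:

#include <linux/delay.h>
#include <linux/kthread.h>

extern unsigned int aperture_usage_percent(void);	/* hypothetical */
extern void aperture_grow(void);			/* hypothetical, may sleep */

static int aperture_grower(void *unused)
{
	while (!kthread_should_stop()) {
		if (aperture_usage_percent() > 75)
			aperture_grow();	/* GFP_KERNEL territory: we may block here */
		msleep(100);
	}
	return 0;
}

/* Started once at init time:
 *	kthread_run(aperture_grower, NULL, "aperture-grow");
 */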
On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> >
> > Again, if the dma_map_{single|sg} APIs fail due to
> > failure to allocate memory, the only thing that can
> > be done is to panic, as this is what a few of the other
> > IOMMU implementations are doing today.
>
> If the only option is to panic then something's busted…
On Mon, Jun 11, 2007 at 02:29:56PM -0700, Christoph Lameter wrote:
> On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:
>
> > Hence, can I assume that the conclusion of this
> > discussion is to use the kmem_cache_alloc() functions
> > to allocate memory in the dma_map_{single|sg} APIs?
>
> Use the page allocator for page size allocations…
On Mon, 11 Jun 2007, Keshavamurthy, Anil S wrote:
> Hence, can I assume that the conclusion of this
> discussion is to use the kmem_cache_alloc() functions
> to allocate memory in the dma_map_{single|sg} APIs?
Use the page allocator for page size allocations. If you need to have
specially aligned memory…
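Both of Christoph's options, sketched with the standard allocator calls; the cache name and the 4 * sizeof(u64) object size (borrowed from later in the thread) are assumptions:

#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/types.h>

static struct kmem_cache *pte_entry_cache;

/* Page-sized allocations go straight to the page allocator. */
static void *alloc_table_page(void)
{
	return (void *)__get_free_page(GFP_ATOMIC);
}

/* Smaller objects with a specific alignment come from a slab cache. */
static int __init pte_entry_cache_init(void)
{
	pte_entry_cache = kmem_cache_create("pte_entry",
					    4 * sizeof(u64),	/* object size */
					    4 * sizeof(u64),	/* alignment  */
					    0, NULL);
	return pte_entry_cache ? 0 : -ENOMEM;
}

static void *alloc_pte_entry(void)
{
	return kmem_cache_alloc(pte_entry_cache, GFP_ATOMIC);
}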
On Mon, 11 Jun 2007 13:44:42 -0700
"Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> In the first implementation of ours, we had used mempool APIs to
> allocate memory and we were told that mempools with GFP_ATOMIC are
> useless, and hence in the second implementation we came up with the
> resource pool…
On Sat, Jun 09, 2007 at 11:47:23AM +0200, Andi Kleen wrote:
>
> > > Now there is an anon dirty limit since a few releases, but I'm not
> > > fully convinced it solves the problem completely.
> >
> > A gut feeling or is there more?
>
> Lots of other subsystems can allocate a lot of memory
> and they usually don't cooperate and have similar dirty limit concepts…
On Sun, 10 Jun 2007, Arjan van de Ven wrote:
> Christoph Lameter wrote:
> > On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> >
> > > > What functionality are you missing in the page allocator? It seems
> > > > that it does what you want?
> > > Hmm... I basically want to allocate memory during interrupt context…
Christoph Lameter wrote:
On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
What functionality are you missing in the page allocator? It seems that it
does what you want?
Hmm... I basically want to allocate memory during interrupt context and
expect it not to fail. I know this is a hard requirement :)
> > Now there is an anon dirty limit since a few releases, but I'm not
> > fully convinced it solves the problem completely.
>
> A gut feeling or is there more?
Lots of other subsystems can allocate a lot of memory
and they usually don't cooperate and have similar dirty limit concepts.
So you could…
On Sat, 9 Jun 2007, Andi Kleen wrote:
> > Why was it not allowed? Because interrupts are disabled?
>
> Allocating memory during pageout under low memory could
> lead to deadlocks. That is because Linux used to make no attempt
> to limit dirty pages for anonymous mappings, and then you could
> end up…
On Saturday 09 June 2007 00:36, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Andreas Kleen wrote:
> > > That's what kmem_cache_alloc() is for?!?!
> >
> > Traditionally that was not allowed in the block layer path. Not sure
> > it is fully obsolete with the recent dirty tracking work, probably not.
>
On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> > What functionality are you missing in the page allocator? It seems that
> > it does what you want?
> Hmm... I basically want to allocate memory during interrupt context and
> expect it not to fail. I know this is a hard requirement :)
The page allocator…
On Fri, Jun 08, 2007 at 03:33:39PM -0700, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
>
> > > You _seem_ to be saying that the resource pools are there purely for
> > > alloc/free performance reasons. If so, I'd be skeptical: slab is pretty
> > > darned fast.
> > We need several objects of size say (4 * sizeof(u64)) and reuse…
On Fri, Jun 08, 2007 at 03:32:08PM -0700, Christoph Lameter wrote:
> On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
>
> > In the IOMMU case, we need exactly the opposite of what mempool provides,
> > i.e. we always want to look for the element in the pool, and if the pool
> > has no element then go to the OS as a worst case…
On Fri, 8 Jun 2007, Andreas Kleen wrote:
> > That's what kmem_cache_alloc() is for?!?!
>
> Traditionally that was not allowed in the block layer path. Not sure
> it is fully obsolete with the recent dirty tracking work, probably not.
Why was it not allowed? Because interrupts are disabled?
> Besides it would need…
On Fri, 8 Jun 2007, Siddha, Suresh B wrote:
> > If for some reason you really can't do that (and a requirement for
> > allocation-in-interrupt is the only valid reason, really) and if you indeed
> > can demonstrate memory allocation failures with certain workloads then
> > let's take a look at that…
On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> > You _seem_ to be saying that the resource pools are there purely for
> > alloc/free performance reasons. If so, I'd be skeptical: slab is pretty
> > darned fast.
> We need several objects of size say (4 * sizeof(u64)) and reuse
> them in dma map/unmap…
On Fri, 8 Jun 2007, Keshavamurthy, Anil S wrote:
> In the IOMMU case, we need exactly the opposite of what mempool provides,
> i.e. we always want to look for the element in the pool, and if the pool
> has no element then go to the OS as a worst case. These resource pool
> library routines do the same. Again…
On Friday 08 June 2007 22:55, Andrew Morton wrote:
> On Fri, 8 Jun 2007 22:43:10 +0200 (CEST)
>
> Andreas Kleen <[EMAIL PROTECTED]> wrote:
> > > That's what kmem_cache_alloc() is for?!?!
> >
> > Traditionally that was not allowed in the block layer path. Not sure
> > it is fully obsolete with the recent dirty tracking work, probably not.
On Fri, Jun 08, 2007 at 02:42:07PM -0700, Andrew Morton wrote:
> I'd say just remove the whole thing and use kmem_cache_alloc().
We will try that.
> Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
> there's your problem right there.
As these are called from interrupt handlers…
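The constraint Anil is pointing at: GFP_NOIO may block, so it is only usable in process context; anything that can run from an interrupt handler has to use GFP_ATOMIC and cope with failure. A minimal illustration (alloc_iommu_entry is a hypothetical helper):

#include <linux/hardirq.h>
#include <linux/slab.h>

static void *alloc_iommu_entry(size_t size)
{
	/*
	 * GFP_ATOMIC never sleeps but may fail; GFP_NOIO may block
	 * (without recursing into I/O), so it is illegal in irq context.
	 */
	if (in_interrupt())
		return kmalloc(size, GFP_ATOMIC);
	return kmalloc(size, GFP_NOIO);
}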
Andrew Morton wrote:
Put much effort into removing the GFP_ATOMIC and using GFP_NOIO instead:
there's your problem right there.
If for some reason you really can't do that (and a requirement for
allocation-in-interrupt is the only valid reason, really)
and that's the case here; IO gets submitted…
On Fri, 8 Jun 2007 14:20:54 -0700
"Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> > This means mempools don't work for those (the previous version had
> > nonsensical constructs like GFP_ATOMIC mempool calls)
> >
> > I haven't looked at Anil's code, but I suspect the only really robust…
On Fri, Jun 08, 2007 at 10:43:10PM +0200, Andreas Kleen wrote:
> Am Fr 08.06.2007 21:01 schrieb Andrew Morton
> <[EMAIL PROTECTED]>:
>
> > On Fri, 8 Jun 2007 11:21:57 -0700
> > "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> >
> > > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:…
On Fri, 8 Jun 2007 22:43:10 +0200 (CEST)
Andreas Kleen <[EMAIL PROTECTED]> wrote:
> > That's what kmem_cache_alloc() is for?!?!
>
> Traditionally that was not allowed in the block layer path. Not sure
> it is fully obsolete with the recent dirty tracking work, probably not.
>
> Besides it would need…
On Fri, Jun 08, 2007 at 01:12:00PM -0700, Keshavamurthy, Anil S wrote:
> The resource pool indeed provides extra robustness; the initial pool size
> will be equal to min_count + grow_count. If the pool object count goes below
> min_count, then the pool grows in the background while serving as an emergency…
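A sketch of that sizing rule, reusing the hypothetical struct respool from the earlier sketch: the pool starts at min_count + grow_count objects, and a worker refills it whenever the count falls below min_count (locking around the count is elided for brevity):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Runs in process context, so sleeping GFP_KERNEL allocations are fine. */
static void respool_grow(struct work_struct *work)
{
	struct respool *pool = container_of(work, struct respool, grow_work);

	while (pool->count < pool->min_count + pool->grow_count) {
		void *obj = kmalloc(pool->obj_size, GFP_KERNEL);

		if (!obj)
			break;			/* retry on the next trigger */
		respool_free(pool, obj);	/* hypothetical: relinks obj, bumps count */
	}
}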
On Fri, 8 Jun 2007 13:12:00 -0700
"Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> On Fri, Jun 08, 2007 at 12:01:07PM -0700, Andrew Morton wrote:
> > On Fri, 8 Jun 2007 11:21:57 -0700
> > "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> >
> > > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:…
On Fri 08.06.2007 21:01, Andrew Morton
<[EMAIL PROTECTED]> wrote:
> On Fri, 8 Jun 2007 11:21:57 -0700
> "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
>
> > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > [EMAIL PROTECTED] wrote:
>
On Fri, Jun 08, 2007 at 12:01:07PM -0700, Andrew Morton wrote:
> On Fri, 8 Jun 2007 11:21:57 -0700
> "Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
>
> > On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > > On Wed, 06 Jun 2007 11:57:00 -0700
> > > [EMAIL PROTECTED] wrote:
> > >
On Fri, 8 Jun 2007 11:21:57 -0700
"Keshavamurthy, Anil S" <[EMAIL PROTECTED]> wrote:
> On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> > On Wed, 06 Jun 2007 11:57:00 -0700
> > [EMAIL PROTECTED] wrote:
> >
> > > Signed-off-by: Anil S Keshavamurthy <[EMAIL PROTECTED]>
> >
> > That was a terse changelog…
On Thu, Jun 07, 2007 at 04:27:26PM -0700, Andrew Morton wrote:
> On Wed, 06 Jun 2007 11:57:00 -0700
> [EMAIL PROTECTED] wrote:
>
> > Signed-off-by: Anil S Keshavamurthy <[EMAIL PROTECTED]>
>
> That was a terse changelog.
>
> Obvious question: how does this differ from mempools, and would it be
> better to fill in any gaps in mempool functionality instead of
> implementing some…
On Wed, 06 Jun 2007 11:57:00 -0700
[EMAIL PROTECTED] wrote:
> Signed-off-by: Anil S Keshavamurthy <[EMAIL PROTECTED]>
That was a terse changelog.
Obvious question: how does this differ from mempools, and would it be
better to fill in any gaps in mempool functionality instead of
implementing some…
Signed-off-by: Anil S Keshavamurthy <[EMAIL PROTECTED]>
---
 include/linux/respool.h |   43 +
 lib/Makefile            |    1
 lib/respool.c           |  222
 3 files changed, 266 insertions(+)
Index: linux-2.6.22-rc3/include/linux/respool.h