Herbert Xu <herb...@gondor.apana.org.au> wrote:
> On Tue, Feb 07, 2017 at 02:17:28PM +0100, Florian Westphal wrote:
> >
> > Ok, but why?
> 
> Because people expect the hash table insertion to succeed, even
> on softirq paths where you cannot vmalloc.

I can't really say anything here because *I* don't expect
it to succeed.

> > It seems to add a whole lot of complexity...
> > 
> > What users can't handle the insert failure case until resize
> > has completed?
> 
> Users that need to insert on softirq that cannot throttle the
> rate.

Even with this proposed patch things will eventually fail
on OOM conditions.

Also, such a period should be very short; it only lasts until the rht
has reached its peak size for the workload.

> > Would relaxing the max chain length (until rehash is done) be an
> > alternative?
> 
> Considering that this is intended for users that cannot throttle
> the rate of insertion, I think we'd be much better off just failing
> them than sticking them on what will essentially be a linked list.

I think that would depend on the user and the requirement, but
I don't know of any such users.

I get the impression that an (r)hashtable might be the wrong data
structure for this in the first place.

Also, given that we could easily oversubscribe a table by a factor
of 10 or more while still keeping sane chain lengths, I don't
see why that's a problem (also, an 'rht_insert_force' or similar
interface that doesn't do chain length checks would make it
easy to spot the places that need/want this behaviour; see the
sketch below).
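
A minimal sketch of how such an opt-in interface could look.
rht_insert_force() is hypothetical and not part of the current
rhashtable API; struct my_obj and example_insert() are made up for
illustration. The point is only that forced insertions become a
distinct, greppable call site next to the normal
rhashtable_insert_fast() path:

#include <linux/rhashtable.h>

struct my_obj {
	struct rhash_head node;
	u32 key;
};

/* hypothetical: insert without enforcing the max chain length */
int rht_insert_force(struct rhashtable *ht, struct rhash_head *obj,
		     const struct rhashtable_params params);

static int example_insert(struct rhashtable *ht, struct my_obj *e,
			  const struct rhashtable_params params)
{
	int err;

	/* normal path: may fail while the table is oversubscribed */
	err = rhashtable_insert_fast(ht, &e->node, params);
	if (!err)
		return 0;

	/* explicit opt-in: accept a longer chain rather than failing */
	return rht_insert_force(ht, &e->node, params);
}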

> As people don't like insertion failure, I think this level of
> complexity is justified.

I am not sure.

I think rhashtable is already bloated; I can't say I understand
all of the checks and knobs it has without looking at the git history.

(insecure_elasticity and/or insecure_max_entries come to mind; it seems
 some of that might not even be needed anymore, but I don't have time
 right now to investigate).
