On Wed, Nov 28, 2012 at 07:46:31PM -0500, Benjamin LaHaise wrote:
> Hi Kent,
>
> On Wed, Nov 28, 2012 at 08:43:36AM -0800, Kent Overstreet wrote:
> > + * now it's safe to cancel any that need to be.
> > + */
> > +static void free_ioctx(struct kioctx *ctx)
> ...
> > +	aio_nr -= ctx->max_reqs;
> > +	spin_unlock(&aio_nr_lock);
> > +
> > +	synchronize_rcu();
> > +
> > +	pr_de
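The quoted free_ioctx() fragment returns the context's request slots to a global counter under a spinlock before waiting out an RCU grace period. As a rough userspace sketch of just the accounting step (a pthread mutex standing in for aio_nr_lock; the function names and the aio_max_nr limit check are my assumptions, not the patch's code):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical analogue of the global aio_nr accounting: a shared count
 * of in-use request slots, protected by a lock. */
static unsigned long aio_nr;
static pthread_mutex_t aio_nr_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reserve nr slots at ioctx-creation time, failing past the global limit. */
static int reserve_reqs(unsigned long nr, unsigned long aio_max_nr)
{
	int ok = 0;

	pthread_mutex_lock(&aio_nr_lock);
	if (aio_nr + nr <= aio_max_nr) {
		aio_nr += nr;
		ok = 1;
	}
	pthread_mutex_unlock(&aio_nr_lock);
	return ok;
}

/* The step quoted above: give the slots back under the same lock. */
static void release_reqs(unsigned long nr)
{
	pthread_mutex_lock(&aio_nr_lock);
	aio_nr -= nr;	/* the aio_nr -= ctx->max_reqs step */
	pthread_mutex_unlock(&aio_nr_lock);
}
```

The synchronize_rcu() that follows in the real function has no equivalent here; it exists to let concurrent RCU readers drain before the context is torn down.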
On Wed, Nov 28, 2012 at 04:17:59PM -0800, Zach Brown wrote:
>
> >  struct kioctx {
> >  	atomic_t		users;
> > -	int			dead;
> > +	atomic_t		dead;
>
> Do we want to be paranoid and atomic_set() that to 0 when the ioctx is
> allocated?

I suppose

> > +	while (!list_empty(&ctx->active_reqs)) {
> > +		struct list_hea
The usage of ctx->dead was fubar - it makes no sense to explicitly
check it all over the place, especially when we're already using RCU.
Now, ctx->dead only indicates whether we've dropped the initial
refcount. The new teardown sequence is:
set ctx->dead
hlist_del_rcu();
synchronize_rcu();
Now we
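The refcount-plus-dead-flag shape Kent describes can be sketched as a userspace analogue (all names are hypothetical; stdatomic stands in for atomic_t, a compare-and-swap stands in for the kill path, and hlist_del_rcu()/synchronize_rcu() have no direct equivalent here):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical userspace analogue of the kioctx under discussion. */
struct ioctx {
	atomic_int users;	/* refcount, like ctx->users */
	atomic_int dead;	/* 0 = live, 1 = initial ref dropped */
};

static struct ioctx *ioctx_alloc(void)
{
	struct ioctx *ctx = malloc(sizeof(*ctx));

	/* The "paranoid" initialization Zach asks about: set both
	 * atomics explicitly rather than trusting zeroed memory. */
	atomic_init(&ctx->users, 1);
	atomic_init(&ctx->dead, 0);
	return ctx;
}

static void ioctx_put(struct ioctx *ctx)
{
	/* Free only when the last reference is dropped. */
	if (atomic_fetch_sub(&ctx->users, 1) == 1)
		free(ctx);
}

/* Teardown: flip dead exactly once, then drop the initial reference.
 * The real sequence also unlinks the ctx (hlist_del_rcu) and waits a
 * grace period (synchronize_rcu) between these two steps. */
static void ioctx_kill(struct ioctx *ctx)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&ctx->dead, &expected, 1))
		ioctx_put(ctx);	/* only the first kill drops the ref */
}
```

The point of making dead mean "initial refcount dropped" is visible in ioctx_kill(): a second caller loses the compare-and-swap and does nothing, so nothing needs to re-check dead scattered through the I/O paths.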