On Mon, Jan 11, 2021 at 06:05:13PM +0100, Christoph Hellwig wrote:
> @@ -486,14 +491,22 @@ int cdev_add(struct cdev *p, dev_t dev, unsigned count)
>       if (WARN_ON(dev == WHITEOUT_DEV))
>               return -EBUSY;
>  
> -     error = kobj_map(cdev_map, dev, count, NULL,
> -                      exact_match, exact_lock, p);
> -     if (error)
> -             return error;
> +     mutex_lock(&chrdevs_lock);
> +     for (i = 0; i < count; i++) {
> +             error = xa_insert(&cdev_map, dev + i, p, GFP_KERNEL);
> +             if (error)
> +                     goto out_unwind;
> +     }
> +     mutex_unlock(&chrdevs_lock);

Looking at some of the users ...

#define BSG_MAX_DEVS            32768
...
        ret = cdev_add(&bsg_cdev, MKDEV(bsg_major, 0), BSG_MAX_DEVS);

So this is going to allocate 32768 entries; at 8 bytes each, that's 256kB.
With XArray overhead, it works out to 73 pages or 292kB.  While I don't
have bsg loaded on my laptop, I imagine a lot of machines do.

drivers/net/tap.c:#define TAP_NUM_DEVS (1U << MINORBITS)
include/linux/kdev_t.h:#define MINORBITS        20
drivers/net/tap.c:      err = cdev_add(tap_cdev, *tap_major, TAP_NUM_DEVS);

That's going to be even worse -- 8MB of entries, and with the overhead
it's closer to 9MB.
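
For the curious, here's roughly where those numbers come from.  A quick
userspace back-of-the-envelope, assuming 64-slot xa_nodes of about 576
bytes each (the usual radix_tree_node slab size on 64-bit); both of
those constants are my assumptions, not measured:

#include <stdio.h>

#define XA_CHUNK_SIZE	64	/* slots per xa_node (assumed) */
#define XA_NODE_BYTES	576	/* approx. size of an xa_node on 64-bit (assumed) */

/* Estimate the XArray footprint of storing one pointer per index. */
static void estimate(const char *name, unsigned long entries)
{
	unsigned long nodes = 0, level = entries;

	/* Walk up the tree, counting the nodes needed at each level. */
	do {
		level = (level + XA_CHUNK_SIZE - 1) / XA_CHUNK_SIZE;
		nodes += level;
	} while (level > 1);

	printf("%s: %lu entries -> %lu nodes, ~%lu kB, ~%lu pages\n",
	       name, entries, nodes, nodes * XA_NODE_BYTES / 1024,
	       nodes * XA_NODE_BYTES / 4096);
}

int main(void)
{
	estimate("bsg", 32768);		/* BSG_MAX_DEVS */
	estimate("tap", 1UL << 20);	/* TAP_NUM_DEVS = 1 << MINORBITS */
	return 0;
}

That prints ~73 pages for the bsg case and a bit over 9MB for tap, which
is where the figures above come from.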

I think we do need to implement the 'store a range' option ;-(
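
Something like this, as a completely untested sketch on top of this
patch (xa_store_range() needs CONFIG_XARRAY_MULTI, and unlike
xa_insert() it silently overwrites existing entries, so any busy check
would have to be done separately):

	mutex_lock(&chrdevs_lock);
	/*
	 * One multi-index entry covering [dev, dev + count - 1] instead of
	 * count individual entries, so bsg/tap-sized registrations don't
	 * allocate hundreds (or thousands) of xa_nodes.
	 */
	error = xa_err(xa_store_range(&cdev_map, dev, dev + count - 1,
				      p, GFP_KERNEL));
	mutex_unlock(&chrdevs_lock);
	if (error)
		return error;

The lookup side shouldn't need to change, since xa_load() on any index
inside the range returns the same entry.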
