+1 for this. I messed up the caps on a cluster I was configuring by doing
this same thing. Luckily it wasn't production and I could fix it quickly.

On Thu, Feb 22, 2018, 8:09 PM Gregory Farnum <gfar...@redhat.com> wrote:

> On Wed, Feb 21, 2018 at 10:54 AM, Enrico Kern
> <enrico.k...@glispamedia.com> wrote:
> > Hey all,
> >
> > I would suggest some changes to the ceph auth caps command.
> >
> > Today I almost fucked up half of one of our OpenStack regions with I/O
> > errors because of user error.
> >
> > I tried to add osd blacklist caps to a Cinder keyring after the Luminous
> > upgrade.
> >
> > I did so by issuing ceph auth caps client.cinder mon 'bla'
> >
> > Doing this, I forgot that it also wipes all other caps instead of just
> > updating the caps for mon, because you have to specify everything in one
> > line. The result was all of our VMs ending up with read-only filesystems
> > after a while because the osd caps were gone.
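> >
> > For reference, the safe way to do this is to dump the existing caps
> > first and then respecify all of them in the single ceph auth caps
> > call. The pool name and profiles below are only examples; check what
> > your own keyring actually has:
> >
> >     # show the current caps so nothing gets lost
> >     ceph auth get client.cinder
> >
> >     # respecify *every* cap, not just the one you are changing
> >     ceph auth caps client.cinder \
> >         mon 'profile rbd' \
> >         osd 'profile rbd pool=volumes'
> >
> > As far as I understand, on Luminous the rbd mon profile already
> > includes the osd blacklist permission, which is what I was trying to
> > add in the first place.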
> >
> > I suggest that if you pass only
> >
> > ceph auth caps mon
> >
> > it should update only the caps for mon (or osd, etc.) and leave the
> > others untouched, or at least print some huge error message.
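> >
> > As a sketch of what I mean (hypothetical syntax, nothing like this
> > exists today):
> >
> >     # update only the mon cap, leaving osd/mds caps untouched
> >     ceph auth caps client.cinder mon 'profile rbd' --only
> >
> > or refuse to drop existing caps unless something like the usual
> > --yes-i-really-mean-it flag is given.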
> >
> > I know it is more of a PEBKAC problem, but Ceph is doing great at being
> > idiot-proof, and this would make it even more idiot-proof ;)
>
> This sounds like a good idea to me! I created a ticket at
> http://tracker.ceph.com/issues/23096
> -Greg