You’re the second report I’ve seen of this, and while it’s confusing, you
should be able to resolve it by restarting your active manager daemon.
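For reference, a minimal sketch of that restart, assuming a systemd-based
deployment where the mgr unit is named ceph-mgr@<id> (the unit name and mgr
id may differ on your cluster):

```shell
# Show which mgr daemon is currently active (and any standbys)
ceph mgr stat

# On the host running the active mgr, restart its daemon.
# Replace "myhost" with the mgr id reported by "ceph mgr stat".
sudo systemctl restart ceph-mgr@myhost.service
```

A standby mgr (if configured) will take over while the restarted daemon
comes back, so this is typically safe to do on a running cluster.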

On Sun, May 26, 2019 at 11:52 PM Lars Täuber <taeu...@bbaw.de> wrote:

> Fri, 24 May 2019 21:41:33 +0200
> Michel Raabe <rmic...@devnu11.net> ==> Lars Täuber <taeu...@bbaw.de>,
> ceph-users@lists.ceph.com :
> >
> > You can also try
> >
> > $ rados lspools
> > $ ceph osd pool ls
> >
> > and verify that with the pgs
> >
> > $ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d.
> > -f1 | uniq
> >
>
> Yes, now I know but I still get this:
> $ sudo ceph -s
> […]
>   data:
>     pools:   5 pools, 1153 pgs
> […]
>
>
> and with all other means I get:
> $ sudo ceph osd lspools | wc -l
> 3
>
> Which is what I expect, because all other pools are removed.
> But since this has no bad side effects I can live with it.
>
> Cheers,
> Lars
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
