On Tue, May 28, 2019 at 11:50:01AM -0700, Gregory Farnum wrote:
You’re the second report I’ve seen of this, and while it’s confusing,
you should be able to resolve it by restarting your active manager
daemon.
Maybe this is related? http://tracker.ceph.com/issues/40011
Yes, thanks. This helped.
Regards,
Lars
You’re the second report I’ve seen of this, and while it’s confusing, you
should be able to resolve it by restarting your active manager daemon.
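(A sketch of one way to do that, assuming a systemd-managed deployment; <active-name> is a placeholder for whatever the dump reports:)
$ ceph mgr dump | jq -r '.active_name'      # which mgr daemon is currently active
$ ceph mgr fail <active-name>               # hand over to a standby mgr
$ systemctl restart ceph-mgr@<active-name>  # or restart it in place on its host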
On Sun, May 26, 2019 at 11:52 PM Lars Täuber wrote:
> Fri, 24 May 2019 21:41:33 +0200
> Michel Raabe ==> Lars Täuber ,
> ceph-users@lists.ceph.com :
>
Fri, 24 May 2019 21:41:33 +0200
Michel Raabe ==> Lars Täuber ,
ceph-users@lists.ceph.com :
>
> You can also try
>
> $ rados lspools
> $ ceph osd pool ls
>
> and verify that with the pgs
>
> $ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | uniq
>
Yes, now I kno
On 20.05.19 13:04, Lars Täuber wrote:
> Mon, 20 May 2019 10:52:14 +
> Eugen Block ==> ceph-users@lists.ceph.com :
> > Hi, have you tried 'ceph health detail'?
> No, I hadn't. Thanks for the hint.
You can also try
$ rados lspools
$ ceph osd pool ls
and verify that with the pgs
$ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | uniq
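(A small sketch of how the two listings can be cross-checked; the sort -n is only added so uniq also catches non-adjacent pool IDs:)
$ ceph osd pool ls detail    # one line per existing pool, starting with: pool <id> '<name>'
$ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | sort -n | uniq    # pool IDs that still own PGs
Any ID from the second command without a matching pool in the first would point at PGs left over from a pool that no longer exists.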
Mon, 20 May 2019 10:52:14 +
Eugen Block ==> ceph-users@lists.ceph.com :
> Hi, have you tried 'ceph health detail'?
>
No, I hadn't. Thanks for the hint.
Hi, have you tried 'ceph health detail'?
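(ceph health detail expands each summary line into its health-check code and the pools concerned; for the warning discussed below that code should be MANY_OBJECTS_PER_PG, so the pool in question gets named explicitly.)
$ ceph health detail    # e.g. MANY_OBJECTS_PER_PG plus the affected pool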
Quoting Lars Täuber:
> Hi everybody,
> with the status report I get a HEALTH_WARN I don't know how to get rid of.
> It may be connected to recently removed pools.
>
> # ceph -s
>   cluster:
>     id:     6cba13d1-b814-489c-9aac-9c04aaf78720
>     health: HEALTH_WARN
Hi everybody,
with the status report I get a HEALTH_WARN I don't know how to get rid of.
It may be connected to recently removed pools.

# ceph -s
  cluster:
    id:     6cba13d1-b814-489c-9aac-9c04aaf78720
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
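(Background, as far as I understand it: this warning fires when a pool's objects-per-PG count exceeds the cluster average by more than mon_pg_warn_max_object_skew, default 10; since Nautilus that ratio is evaluated by the active mgr, which is presumably also why restarting the mgr cleared the stale message above. A sketch for inspecting or, if the skew is genuinely expected, relaxing the threshold:)
$ ceph config get mgr mon_pg_warn_max_object_skew       # current threshold (default 10)
$ ceph config set mgr mon_pg_warn_max_object_skew 20    # raise only if the skew is expected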