Yes, it includes all the available pools on the cluster:
*# ceph df*
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
53650G 42928G 10722G 19.99
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
volumes 13 2979G 33.
On Mon, Dec 11, 2017 at 3:13 PM, German Anders wrote:
> Hi John,
>
> how are you? no problem :) . Unfortunately the error on the 'ceph fs status'
> command is still happening:
OK, can you check:
- does the "ceph df" output include all the pools?
- does restarting ceph-mgr clear the issue?
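For the restart check, on a systemd-based Luminous deployment the mgr daemon can be bounced like this (a sketch; the instance name after the `@` is deployment-specific, often the short hostname of the node running the active mgr):

```shell
# Restart the manager daemon on its host (adjust the instance
# name to match your deployment, e.g. ceph-mgr@mon1).
sudo systemctl restart ceph-mgr@$(hostname -s)

# Confirm a mgr is active again before re-running 'ceph fs status':
ceph -s | grep mgr
```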
We p
Hi John,
how are you? no problem :) . Unfortunately the error on the 'ceph fs
status' command is still happening:
*# ceph fs status*
Error EINVAL: Traceback (most recent call last):
File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command
return self.handle_fs_status(cmd)
Fi
On Mon, Dec 4, 2017 at 6:37 PM, German Anders wrote:
> Hi,
>
> I just upgraded a ceph cluster from version 12.2.0 (rc) to 12.2.2 (stable),
> and I'm getting a traceback while trying to run:
>
> # ceph fs status
>
> Error EINVAL: Traceback (most recent call last):
> File "/usr/lib/ceph/mgr/status/
Hi,
I just upgraded a ceph cluster from version 12.2.0 (rc) to 12.2.2 (stable),
and I'm getting a traceback while trying to run:
*# ceph fs status*
Error EINVAL: Traceback (most recent call last):
File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command
return self.handle_fs_s