Hi,
I'm running Debian 10 with btrfs-progs=5.2.1.
Creating snapshots with snapper=0.8.2 works w/o errors.
However, I ran into an issue and need to restore various files.
I thought that I could simply take the files from a snapshot created before.
However, the files required don't exist in any
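For what it's worth, a minimal sketch for searching through existing snapshots, assuming snapper's default "root" config and the standard /.snapshots layout (snapshot number and paths are placeholders):
snapper -c root list
ls /.snapshots/<number>/snapshot/<path/to/file>
cp -a /.snapshots/<number>/snapshot/<path/to/file> <restore/target>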
Hi,
my Ceph cluster is in unhealthy state and busy with recovery.
I'm watching the MGR log, and it is showing this error message regularly:
2019-11-20 09:51:45.211 7f7205581700 0 auth: could not find secret_id=4193
2019-11-20 09:51:45.211 7f7205581700 0 cephx: verify_authorizer could
not get
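For reference, two generic checks that might help narrow down cephx/authorizer errors like this (standard Ceph CLI; nothing specific to this cluster is assumed):
ceph health detail
ceph time-sync-status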
Hi,
I'm trying to enable pg_autoscale_mode on a specific pool of my cluster;
however, this returns an error.
root@ld3955:~# ceph osd pool set ssd pg_autoscale_mode on
Error EINVAL: must set require_osd_release to nautilus or later before
setting pg_autoscale_mode
The error message is clear, but my clus
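For reference, the flag in question can be checked with (as also shown later in this thread):
ceph osd dump | grep require_osd_release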
Hello Paul,
I didn't skip this step.
Actually, I'm sure that everything on the cluster is on Nautilus, because I
had issues with SLES 12SP2 clients whose outdated client tools could not
connect to Nautilus.
Would it make sense to execute
ceph osd require-osd-release nautil
Looks like the flag is not correct.
root@ld3955:~# ceph osd dump | grep nautilus
root@ld3955:~# ceph osd dump | grep require
require_min_compat_client luminous
require_osd_release luminous
On 21.11.2019 at 13:51, Paul Emmerich wrote:
> "ceph osd dump" shows you if the flag is set
>
>
> Paul
>
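Assuming every OSD really is running Nautilus, the flag itself can be raised with:
ceph osd require-osd-release nautilus
and then re-checked with:
ceph osd dump | grep require_osd_release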
Update:
Issue is solved.
The output of "ceph osd dump" showed that the required setting was
incorrect, namely:
require_osd_release luminous
After executing
ceph osd require-osd-release nautilus
I can enable pg_autoscale_mode on any pool.
THX
On 21.11.2019 at 13:51, Paul Emmerich wrote:
> "ceph o
Hi,
the command "ceph osd df" does not return any output.
Based on the strace output, there's a timeout.
[...]
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x7f53006b9000
brk(0x55c2579b6000) = 0x55c2579b6000
brk(0x55c2579d7000) = 0x55
> I had this when testing pg_autoscaler, after some time every command
> would hang. Restarting the MGR helped for a short period of time, then
> I disabled pg_autoscaler. This is an upgraded cluster, currently on
> Nautilus.
>
> Regards,
> Eugen
>
>
> Quoting Thomas
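For reference, a sketch of the workaround Eugen describes, assuming a systemd-managed MGR daemon (the instance name is a placeholder):
systemctl restart ceph-mgr@<mgr-name>
or, alternatively, failing over to a standby MGR:
ceph mgr fail <mgr-name>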
Hi,
I enabled pg_autoscaler on a specific pool, ssd.
I failed to increase pg_num / pgp_num on pool ssd to 1024:
root@ld3955:~# ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO
TARGET RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
cephfs_metadata 395.8
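For reference, the manual way to grow a pool (pool name taken from the commands above; as far as I understand, the autoscaler may later override manually set values):
ceph osd pool set ssd pg_num 1024
ceph osd pool set ssd pgp_num 1024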