p_from 13 up_thru 23 down_at 0 last_clean_interval
[0,0) 10.8.32.182:6800/4361 10.8.32.182:6801/4361 10.8.32.182:6802/4361
10.8.32.182:6803/4361 exists,up 905d17fc-6f37-4404-bd5d-4adc231c49b3
On Tue, Jun 18, 2019 at 12:38, Vincent Pharabot
wrote:
> Thanks Eugen for answering
>
> Yes
> previous cluster, I assume.
> I would remove it from crush if it's still there (check just to make
> sure), wipe the disk, remove any traces like logical volumes (if it
> was a ceph-volume lvm OSD) and if possible, reboot the node.
>
> Regards,
> Eugen
>
>
> Zitat v
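The cleanup Eugen describes above could be sketched as the following shell sequence (a sketch only: the OSD id and device path are hypothetical, and it assumes the OSD really is a leftover ceph-volume lvm OSD):

```shell
# Hypothetical leftover OSD id and device -- substitute your own.
OSD_ID=0
DEV=/dev/sdb

# Check whether the OSD is still referenced in the CRUSH map, then remove it.
ceph osd tree | grep "osd.${OSD_ID}"
ceph osd crush remove "osd.${OSD_ID}"
ceph auth del "osd.${OSD_ID}"
ceph osd rm "osd.${OSD_ID}"

# Wipe the disk, including any ceph-volume LVM metadata, then reboot the node.
ceph-volume lvm zap "${DEV}" --destroy
reboot
```

Double-check the OSD id before running any of the removal commands; they are destructive.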
Hello
I have an OSD which is stuck in the booting state.
I found out that the daemon's cluster_fsid is not the same as the actual
cluster fsid, which would explain why it does not join the cluster:
# ceph daemon osd.0 status
{
"cluster_fsid": "bb55e196-eedd-478d-99b6-1aad00b95f2a",
"osd_fsid": "01
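A quick way to confirm such a mismatch is to compare the two fsids directly. This self-contained sketch uses the osd_fsid value from the output above and a hypothetical cluster fsid; on a live cluster you would take them from "ceph fsid" and "ceph daemon osd.0 status" instead:

```shell
# Normally:
#   cluster_fsid=$(ceph fsid)
#   osd_cluster_fsid comes from "ceph daemon osd.0 status"
cluster_fsid="00000000-0000-0000-0000-000000000000"   # hypothetical placeholder
osd_cluster_fsid="bb55e196-eedd-478d-99b6-1aad00b95f2a"  # from the output above

if [ "$cluster_fsid" != "$osd_cluster_fsid" ]; then
  echo "mismatch: osd.0 was created in cluster $osd_cluster_fsid"
fi
```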
Wow, ok, thanks a lot, I missed that in the doc...
On Thu, Jun 13, 2019 at 16:49, Konstantin Shalygin wrote:
> Hello,
>
> I would like to modify the Bluestore label of an OSD, is there a way to do
> this?
>
> I saw that we could display them with "ceph-bluestore-tool show-label" but
> I did not find
Hello,
I would like to modify the Bluestore label of an OSD, is there a way to do
this?
I saw that we could display them with "ceph-bluestore-tool show-label" but I
did not find any way to modify them...
Is it possible?
I changed the LVM tags but that doesn't help with the bluestore labels..
# ceph-bluestore
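For the record, newer Ceph releases also ship label-editing subcommands in ceph-bluestore-tool (check your version's man page). A sketch, with a hypothetical device path; the OSD must be stopped before its label is touched:

```shell
# Hypothetical device path; the OSD must be down while its label is edited.
DEV=/dev/ceph-vg/osd-block-0

# Display the current BlueStore label.
ceph-bluestore-tool show-label --dev "${DEV}"

# Rewrite one key in the label, e.g. the recorded ceph_fsid.
ceph-bluestore-tool set-label-key --dev "${DEV}" -k ceph_fsid \
    -v 00000000-0000-0000-0000-000000000000   # hypothetical value
```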
Ok, it seems that the OSD pools are finally correctly restored (roughly the
same amount of objects/data as before the restoration), so that should
explain the situation.
I will have to dig into why.
Vincent
On Fri, Jun 7, 2019 at 08:41, Vincent Pharabot
wrote:
> Hello Cephers,
>
Hello Cephers,
I’m trying to understand the CephFS design, and especially how the file
system view reflects the OSD pools, in order to perform a backup/restore
operation (VM context).
I’m able to back up and restore OSDs successfully, but I have some issues
with the filesystem layer.
When I create files after ba