[ceph-users] Re: PGs unknown (osd down) after conversion to cephadm

2020-04-17 Thread Dr. Marco Savoca
  "devices": "sda",    "distro": "centos",    "distro_description": "CentOS Linux 8 (Core)",    "distro_version": "8",    "hostname": "ceph1",    "kernel_description"

[ceph-users] Re: PGs unknown (osd down) after conversion to cephadm

2020-04-15 Thread Dr. Marco Savoca
26:53.664098"}, {"container_image_id": "204a01f9b0b6710dd0c0af7f37ce7139c47ff0f0105d778d7104c69282dfbbf1", "container_image_name": "docker.io/ceph/ceph:v15", "service_name": "mon", "size": 0, "running": 3, "last_
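(The fields quoted above — service_name, running, container_image_name — look like the JSON form of the cephadm service listing; assuming that is their source, the equivalent output can be reproduced with:)

  # human-readable overview of managed services
  ceph orch ls
  # same data as JSON, as quoted above
  ceph orch ls --format json
  # per-daemon view, useful to see which individual mon/osd containers run where
  ceph orch ps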

[ceph-users] PGs unknown (osd down) after conversion to cephadm

2020-04-09 Thread Dr. Marco Savoca
Hi all, last week I successfully upgraded my cluster to Octopus and converted it to cephadm. The conversion process (according to the docs) went well and the cluster ran in an active+clean status. But after a reboot, all OSDs went down a couple of minutes after startup, and all (100%) o
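(A rough checklist for this situation, assuming the OSDs were adopted by cephadm and now run as containers — these are standard cephadm/Ceph commands, and osd.3 below is only an example daemon name:)

  # overall health and PG states
  ceph -s
  ceph health detail
  # what the orchestrator thinks is running on each host
  ceph orch ps
  # on an OSD host: the systemd units cephadm created for the adopted daemons
  systemctl list-units 'ceph*'
  # journal of one containerized OSD
  cephadm logs --name osd.3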

[ceph-users] Re: New 3 node Ceph cluster

2020-03-15 Thread Dr. Marco Savoca
Hi Jesper, can you state your suggestion more precisely? I have a similar setup and I'm also interested. If I understand you right, you suggest creating an RBD image for data and attaching it to a VM with a Samba server installed. But what would be the "best" way to connect? Kernel module mapping o
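(For reference, the two usual ways to attach such an RBD image to the Samba VM, sketched with a hypothetical pool/image name rbd/samba-data — a sketch of the options, not a recommendation:)

  # Option 1: kernel module mapping (krbd) inside the VM
  rbd create rbd/samba-data --size 1T
  rbd map rbd/samba-data            # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /srv/samba

  # Option 2: attach via librbd from the hypervisor instead (no krbd in the
  # guest), e.g. a qemu/libvirt disk of type 'network' with protocol 'rbd',
  # so the VM just sees a plain virtual disk.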