Hello everyone

I've got a fresh Ceph Octopus installation and I'm trying to set up a CephFS
with an erasure-coded data pool.
The metadata pool was created with default settings.
The erasure code pool was set up with this command:
-> ceph osd pool create ec-data_fs 128 erasure default
Enabled overwrites:
-> ceph osd pool set ec-data_fs allow_ec_overwrites true
And created the filesystem:
-> ceph fs new ec-data_fs meta_fs ec-data_fs --force
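
For reference, this is how I understand the resulting setup can be inspected with the standard Octopus CLI (generic commands, nothing cluster-specific):

```shell
# List filesystems with their metadata/data pools
ceph fs ls
# Show pool details; the allow_ec_overwrites flag should be listed for ec-data_fs
ceph osd pool ls detail
# Filesystem state, including active/standby MDS daemons
ceph fs status ec-data_fs
```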


Then I tried deploying the MDS:
-> ceph orch daemon add mds ec-data_fs magma01
which reports success:
-> Deployed mds.ec-data_fs.magma01.ujpcly on host 'magma01'

But the MDS daemon is not actually there.
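
(A sketch of how I checked, assuming cephadm defaults:)

```shell
# No running mds daemon shows up for the new filesystem
ceph orch ps | grep mds
```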

Apparently the container dies without logging any information, as seen in the journal:

May 25 16:11:56 magma01 podman[9348]: 2020-05-25 16:11:56.670510456 +0200 CEST m=+0.186462913 container create 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:56 magma01 systemd[1]: Started libpod-conmon-0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.scope.
May 25 16:11:56 magma01 systemd[1]: Started libcontainer container 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.112182262 +0200 CEST m=+0.628134873 container init 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.137011897 +0200 CEST m=+0.652964354 container start 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.137110412 +0200 CEST m=+0.653062853 container attach 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 systemd[1]: libpod-0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90.scope: Consumed 327ms CPU time
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.182968802 +0200 CEST m=+0.698921275 container died 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
May 25 16:11:57 magma01 podman[9348]: 2020-05-25 16:11:57.413743787 +0200 CEST m=+0.929696266 container remove 0fdf8c508b330adac713ffb04c72b5df770277ad191d844888f7387f28e3cc90 (image=docker.io/ceph/ceph:v15, name=competent_cori)
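
In case it helps, these are the ways I know of to pull logs from a cephadm-managed daemon (the <fsid> placeholder is just illustrative and would need to be filled in with the cluster's fsid):

```shell
# Fetch the systemd journal for the deployed daemon via cephadm
cephadm logs --fsid <fsid> --name mds.ec-data_fs.magma01.ujpcly
# Equivalent direct journalctl call on the host
journalctl -u ceph-<fsid>@mds.ec-data_fs.magma01.ujpcly
```

Since podman removes the container right after it dies, `podman logs` on the container ID is no longer possible.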

Can someone help me debug this?

Cheers
Simon

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
