Hello Tim! First of all, thanks for the detailed answer!
Yes, a setup of 4 nodes with 116 OSDs each probably does look a bit overloaded, but 
what if I have 10 nodes? The nodes themselves are still heavy, but taken as a whole 
it doesn't seem that dramatic, no?

However, the documentation says it is quite common for systemd to fail on boot, 
and it even describes a way to work around it:

```
It is common to have failures when a system is coming up online. The devices 
are sometimes not fully available and this unpredictable behavior may cause an 
OSD to not be ready to be used.

There are two configurable environment variables used to set the retry behavior:

CEPH_VOLUME_SYSTEMD_TRIES: Defaults to 30

CEPH_VOLUME_SYSTEMD_INTERVAL: Defaults to 5
```
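
(If I read that correctly, the defaults mean ceph-volume keeps retrying for up to 
30 × 5 = 150 seconds before giving up.)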

But where should I set these variables? Setting them as environment variables in 
root's .bashrc doesn't seem to work, since Ceph starts at boot time, before root's 
environment variables are loaded...
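
The only thing I could come up with is a systemd drop-in, something like the sketch 
below (assuming the activation is done by the ceph-volume@.service template unit and 
that it picks the variables up from its own environment; I have not verified either 
of those):

```
# /etc/systemd/system/ceph-volume@.service.d/override.conf
# (can be created with: systemctl edit ceph-volume@.service)
[Service]
# example values, just higher than the documented defaults of 30 / 5
Environment=CEPH_VOLUME_SYSTEMD_TRIES=60
Environment=CEPH_VOLUME_SYSTEMD_INTERVAL=10
```

followed by `systemctl daemon-reload`. Is that the intended place for these 
variables, or should they go into /etc/default/ceph (or /etc/sysconfig/ceph) instead?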