Quoting Ml Ml (mliebher...@googlemail.com):
> Hello Stefan,
> 
> The status was "HEALTH_OK" before I ran those commands.

\o/

> root@ceph01:~# ceph osd crush rule dump
> [
>     {
>         "rule_id": 0,
>         "rule_name": "replicated_ruleset",
>         "ruleset": 0,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -1,
>                 "item_name": "default"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"


^^ This is the important part: the failure domain is "host" (not "osd"),
which is fine in your case.
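
If you want to double-check which OSDs share a host before removing any,
"ceph osd tree" prints the CRUSH hierarchy with every osd.N grouped under
its host bucket. Roughly like this (illustrative excerpt, your IDs and
weights will differ):

  root@ceph01:~# ceph osd tree
  ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
  -1 3.00000 root default
  -2 1.00000     host ceph01
   0 1.00000         osd.0        up  1.00000          1.00000
  -3 1.00000     host ceph02
   1 1.00000         osd.1        up  1.00000          1.00000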

Make sure you only remove OSDs from a single failure domain (one host) at a
time and you're safe.
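
For reference, the usual manual removal sequence per OSD (as in the
upstream docs; osd.4 and the systemd unit name are just examples, adjust
to your init system):

  root@ceph01:~# ceph osd out 4
  (wait for rebalancing to finish and the cluster to return to HEALTH_OK)
  root@ceph01:~# systemctl stop ceph-osd@4
  root@ceph01:~# ceph osd crush remove osd.4
  root@ceph01:~# ceph auth del osd.4
  root@ceph01:~# ceph osd rm 4

Repeat that per OSD, staying within one host until it is fully drained
before touching the next one.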

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
