[ceph-users] Re: [ANN] A framework for deploying Octopus using cephadm in the cloud

2020-08-02 Thread Marc Roos
>> Except for your mds, mgr and radosgw, your osd daemons are bound to
>> the hardware / disks they are running on. It is not like if
>> osd.121 goes down, you can start it on some random node.
> Why not? The data stays on the old node, does it not?
If you did automate destroy/create of a new osd, that
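(For context, a minimal sketch of what such an automated destroy/recreate could look like with the stock CLI; osd.121 and /dev/sdX are placeholders here, and a cephadm-managed Octopus cluster would normally drive the create step through the orchestrator instead:)

    # mark the failed OSD out and destroyed, keeping its ID and CRUSH position
    ceph osd out osd.121
    ceph osd destroy 121 --yes-i-really-mean-it
    # on the node holding the replacement disk, recreate an OSD reusing ID 121
    ceph-volume lvm create --osd-id 121 --data /dev/sdX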

[ceph-users] Re: EC profile datastore usage - question

2020-08-02 Thread Mateusz Skała
Hello, I’m sorry for the late response, here is the output:

ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED    %RAW USED
    hdd       1.0 PiB     797 TiB     267 TiB    272 TiB     25.44
    TOTAL     1.0 PiB     797 TiB     267 TiB    272 TiB     25.44

POOL
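(Note when reading this under an EC pool: ceph df reports RAW USED including erasure-coding overhead, so with a hypothetical k=4, m=2 profile, not stated in this thread, 272 TiB RAW USED would correspond to roughly 272 * 4 / (4 + 2) ≈ 181 TiB of stored data if all of it belonged to that pool.)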