Hi all, newbie question:
The documentation seems to suggest that with ceph-volume, one OSD is created for each HDD (cf. the 4-HDD example in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/).
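If I read it right, that example boils down to something like the following (device names are just placeholders on my part, and each "create" sets up its own LV):

    # one OSD per device, each backed by its own LV (paths made up)
    ceph-volume lvm create --bluestore --data /dev/sdb
    ceph-volume lvm create --bluestore --data /dev/sdc
    ceph-volume lvm create --bluestore --data /dev/sdd
    ceph-volume lvm create --bluestore --data /dev/sde

    # or, as a dry run over several devices, which would likewise
    # end up as one OSD per listed device
    ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde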
This seems odd: what if a server has a large number of disks? I was going to try CephFS on ~10 servers with 70 HDDs each. That would mean each system has to deal with 70 OSDs, on 70 LVs?
Really no aggregation of the disks?

Regards,
Thomas

--
Thomas Roth
Department: IT
GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de