Good morning

Q: Is it possible to have a 2nd cephfs_data volume and expose it to the
same OpenStack environment?

Reason being:

Our current profile is configured with an erasure code profile of k=3,m=1
(rack-level failure domain), but we are looking to buy roughly another
6 PB of storage with controllers, and we are thinking of moving to an
erasure profile of k=2,m=1, since we are less concerned with data
redundancy than with disk space and performance.
From what I understand you can't change the erasure profile of an existing
pool, therefore we would essentially need to build a new Ceph cluster. We
are trying to understand whether we can attach it to the existing
OpenStack platform, gradually move all the data over from the old cluster
into the new cluster, then destroy the old cluster and integrate it into
the new one (rough sketch of the intended setup below).
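
For concreteness, this is roughly the profile and pool setup I have in
mind for the new cluster; a minimal sketch with placeholder names
(ec-k2m1-rack, cephfs_data2), assuming a filesystem named "cephfs":

  # k=2,m=1 profile with rack as the failure domain
  ceph osd erasure-code-profile set ec-k2m1-rack k=2 m=1 crush-failure-domain=rack
  # EC data pool from that profile; overwrites must be enabled for CephFS use
  ceph osd pool create cephfs_data2 128 128 erasure ec-k2m1-rack
  ceph osd pool set cephfs_data2 allow_ec_overwrites true
  # attach it as an additional data pool and point a directory's layout at it
  ceph fs add_data_pool cephfs cephfs_data2
  setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/newdata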

If anyone has recommendations for getting more usable space and better
performance at the cost of data redundancy, while still tolerating the
loss of at least one rack, please let me know as well.
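
For the space side, my rough back-of-envelope on usable vs. raw capacity
(please correct me if this is off):

  usable / raw = k / (k + m)
  k=3, m=1: 3/4 = 75%
  k=2, m=1: 2/3 ≈ 67%

Either profile should survive the loss of a single failure domain (one
rack in our layout).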

Regards
-- 
Jeremi-Ernst Avenant, Mr.
Cloud Infrastructure Specialist
Inter-University Institute for Data Intensive Astronomy
5th Floor, Department of Physics and Astronomy,
University of Cape Town

Tel: 021 959 4137
Web: www.idia.ac.za
E-mail (IDIA): jer...@idia.ac.za
Rondebosch, Cape Town, 7600
