[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-07 Thread Ed Kalk
We have no one currently using containers for anything. Therefore, we run old CEPH code to avoid them. If there were an option to not use containers on modern CEPH, that would be better for a lot of people who don't want them. -Ed On 6/7/2021 2:54 AM, Eneko Lacunza wrote: Hi Marc, El 4/6/21 a

[ceph-users] Location of Crush Map and CEPH metadata

2021-03-12 Thread Ed Kalk
Hello, I have been googling for the answer to this and have not found it. Does anyone know? Where does CEPH store the crush map and the critical cluster metadata? What prevents loss of this metadata when a node is lost? -- Thank you for your time, Edward H. Kalk IV Information Technology
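
The CRUSH map is part of the cluster maps kept in each monitor's data store (under /var/lib/ceph/mon) and replicated across the monitor quorum, so losing a single node does not lose it as long as a majority of monitors survives. A minimal sketch of how the map can be exported for inspection or backup, assuming a node with an admin keyring; the /tmp paths are illustrative:

# Export the binary CRUSH map from the monitors and decompile it to text
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

# List the monitors that hold the replicated cluster metadata
ceph mon dump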

[ceph-users] Upgrade options and request for comment

2020-08-19 Thread Ed Kalk
Hello Ceph Users, We have a Mimic 13.2.10 cluster on Ubuntu 16.04.03 with 6 servers and 36 OSDs. Each OSD has a 2 GB memory target set via ceph.conf. There are 30 spinning disks with the HDD device class carrying two pools (a 2-copy and a 3-copy pool), plus 6 SSDs in a 3-copy flash pool. Our intention is to upgrade the code level of
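
A minimal ceph.conf sketch of the per-OSD memory cap described above; the 2 GB figure matches the post, while the exact section placement is an assumption:

[osd]
# 2 GiB memory target per OSD daemon, expressed in bytes
osd_memory_target = 2147483648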

[ceph-users] Re: SED drives, how to fio test all disks, poor performance

2020-08-14 Thread Ed Kalk
Screenshot attached showing the IOPS and latency from iostat -xtc 2. On 8/14/2020 9:09 AM, Ed Kalk wrote:  ubuntu@ubuntu:/mnt$ sudo fio --filename=/mnt/sda1/file1.fio:/mnt/sdb1/file2.fio:/mnt/sdc1/file3.fio:/mnt/sdd1/file4.fio:/mnt/sde1/file5.fio:/mnt/sdf1/file6.fio:/mnt/sdg1/file7.fio:/mnt/sdh1
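
A minimal fio invocation in the spirit of the truncated command above, assuming the same /mnt/sdX1 mount points; the job parameters (ioengine, block size, run size, queue depth) are illustrative, since the original options were cut off:

# Spread I/O across several disks in one job and report aggregate results
sudo fio --name=multi-disk-test \
  --filename=/mnt/sda1/file1.fio:/mnt/sdb1/file2.fio:/mnt/sdc1/file3.fio \
  --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
  --size=1G --iodepth=16 --group_reporting

# Watch per-device IOPS and latency in a second terminal
iostat -xtc 2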