[ceph-users] Re: SSD recommendations for RBD and VM's

2021-05-30 Thread huxia...@horebdata.cn
I would recommend the Intel S4510 series, which has power-loss protection (PLP). If you do not care about PLP, the lower-cost Samsung 870 EVO and Crucial MX500 should also be OK (with separate DB/WAL on an enterprise SSD with PLP). Samuel huxia...@horebdata.cn From: by morphin Date: 2021-05-30 02:48 …
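The "separate DB/WAL on an enterprise SSD" layout mentioned above is typically done at OSD creation time with `ceph-volume`. A minimal sketch, assuming a live cluster and hypothetical device names (`/dev/sdb` as the consumer-SSD data disk, `/dev/nvme0n1p1` as a partition on the PLP-protected enterprise SSD):

```shell
# Sketch only - requires a live Ceph node; device names are placeholders.
# The RocksDB metadata (and WAL, which follows the DB by default) lands on
# the PLP-protected device, while bulk data stays on the cheaper SSD.
ceph-volume lvm create \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1
```

This is a configuration fragment, not something runnable outside a Ceph host; in practice one partition (or LV) per OSD is carved out of the shared DB device.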

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-30 Thread Peter Childs
I've actually managed to get a little further with my problem. As I've said before, these servers have a slightly unusual configuration: 63 drives and only 48 GB of memory. Once I create about 15-20 OSDs, it continues to format the disks but won't actually create the containers or start any service. …
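The numbers above suggest a memory budget problem: BlueStore's default `osd_memory_target` is 4 GiB per OSD, so 63 OSDs would ask for roughly 252 GiB on a 48 GB box. A quick back-of-the-envelope check:

```shell
# Memory the defaults would request for 63 OSDs (in GiB):
echo $(( 63 * 4 ))
# Per-OSD budget if 48 GB were split evenly across 63 OSDs (in MiB):
echo $(( 48 * 1024 / 63 ))
# A lower target could be set cluster-wide (needs a live cluster), e.g.:
# ceph config set osd osd_memory_target 1073741824   # 1 GiB - an assumption, tune to taste
```

Even at a reduced target the per-OSD share (~780 MiB) is well below what BlueStore is comfortable with, which may explain why OSD creation stalls partway through.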

[ceph-users] Re: [External Email] Re: XFS on RBD on EC painfully slow

2021-05-30 Thread Dave Hall
Reed, I'd like to add to Sebastian's comments - the problem is probably rsync. I inherited a smaller setup than yours when I assumed my current responsibilities - an XFS file system on a RAID array, exported over NFS. The backup process is based on RSnapshot, which is based on rsync over SSH, but the …
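One reason rsync hurts on RBD-backed (especially EC-backed) XFS is its delta-transfer algorithm, which turns large-file updates into many small read-modify-write operations. A hedged sketch of flags that trade that behavior for sequential writes (the paths here are hypothetical):

```shell
# Sketch only: /src and /mnt/rbd-backup are placeholder paths.
# --whole-file (-W) skips the delta algorithm, so changed files are
# rewritten sequentially instead of patched in small chunks;
# --inplace avoids writing a temporary copy and renaming it.
rsync -a --whole-file --inplace /src/ /mnt/rbd-backup/
```

Whether this helps depends on the change rate: `--whole-file` transfers more bytes over the wire but issues far friendlier I/O to an EC pool.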

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-05-30 Thread mhnx
Hello Samuel. Thanks for the answer. Yes, the Intel S4510 series is a good choice, but it's expensive. I have 21 servers and data distribution is quite good. At power loss I don't think I'll lose data. All the VMs use the same image and the rest is cookie-cutter. In this case I'm not sure I should spend extra…

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-05-30 Thread huxia...@horebdata.cn
Please check the Crucial MX500 2TB drive; I think it is a bit cheaper than the Samsung 870 EVO, and it is reliable as well. Samuel From: mhnx Date: 2021-05-30 20:45 To: huxia...@horebdata.cn CC: Anthony D'Atri; ceph-users Subject: Re: [ceph-users] Re: SSD recommendations for RBD and VM's …

[ceph-users] Cephadm/docker or install from packages

2021-05-30 Thread Stanislav Datskevych
Hi all, I want to ask your opinion on which Ceph deployment method is better: using cephadm (Docker) or installing from packages? Cephadm brings lots of convenience: easy upgrades (which sometimes get stuck, but still), easy addition of new OSDs, the ability to set placement policies, etc. On the other hand, I seem to lo…
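For readers weighing the two approaches, the cephadm path the poster describes boils down to a couple of orchestrator commands. A minimal sketch, assuming a fresh host with Podman or Docker installed; the monitor IP is a placeholder:

```shell
# Sketch only - needs a fresh host and container runtime; 10.0.0.1 is a
# placeholder for the host's monitor IP.
cephadm bootstrap --mon-ip 10.0.0.1
# The "easy add new OSDs" convenience: let the orchestrator consume every
# unused, eligible disk it finds across the cluster.
ceph orch apply osd --all-available-devices
```

The package-based route replaces these with distro package installs plus hand-written (or config-managed) unit and config files, which is more work but keeps everything inspectable on the host.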