Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread Sinan Polat
What does DWPD have to do with performance / IOPS? The SSD will just fail earlier, but it should not have any effect on the performance, right? Correct me if I am wrong, I just want to learn. > On 20 Aug 2017 at 06:03, Christian Balzer wrote: > > DWPD …
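
For context, DWPD (drive writes per day) is an endurance rating, not a performance one, so the questioner's intuition is broadly right. A minimal sketch of the conversion from DWPD to total endurance, in Python; the DWPD rating and warranty term below are assumed figures for illustration, not the 850 EVO's actual spec:

    # Rough endurance math: DWPD -> total bytes written over the warranty.
    # The figures below are illustrative assumptions, not vendor specs.
    capacity_tb = 4.0        # drive capacity in TB
    dwpd = 0.3               # assumed drive-writes-per-day rating
    warranty_years = 5       # assumed warranty term

    tbw = capacity_tb * dwpd * 365 * warranty_years
    print(f"Rated endurance: ~{tbw:.0f} TB written over {warranty_years} years")
    # ~2190 TB for these assumptions; Ceph journaling and write
    # amplification can burn through that far faster than desktop use.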

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread Christian Balzer
Hello, On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote: > SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - > MZ-75E4T0B/AM | Samsung And there's your answer. A bit of googling in the archives here would have shown you that these are TOTALLY unsuitable for use with Ceph…
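
The usual reason consumer SSDs like these fare so badly under Ceph is that they lack power-loss protection, so the synchronous (O_DSYNC/flush) writes Ceph journaling issues constantly are extremely slow on them. A minimal sync-write latency probe, sketched in Python as a rough stand-in for a proper fio run; the path and sizes are assumptions for illustration:

    # Minimal O_DSYNC write-latency probe (rough stand-in for fio --sync=1).
    import os, time

    path = "/mnt/ssd/syncprobe.bin"   # hypothetical mount on the SSD under test
    buf = b"\0" * 4096                # one 4 KiB block per write
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)

    n = 1000
    start = time.monotonic()
    for _ in range(n):
        os.write(fd, buf)
    elapsed = time.monotonic() - start
    os.close(fd)
    os.unlink(path)

    print(f"{n / elapsed:.0f} sync IOPS, {elapsed / n * 1000:.2f} ms per write")
    # Consumer SSDs without power-loss protection often manage only a few
    # hundred sync IOPS here, versus tens of thousands for DC-class drives.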

[ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7

2017-08-19 Thread Marc Roos
Where can you get the nfs-ganesha-ceph rpm? Is there a repository that has these?

Re: [ceph-users] VMware + Ceph using NFS sync/async ?

2017-08-19 Thread Maged Mokhtar
Hi Nick, Your note on PG locking is interesting, but I would be surprised if its effect is that bad. I would think that in your example the 2 ms is a total latency; the lock will probably be held for only a small portion of that, so the concurrent operations are not serialized for the entire time… but ag…
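
The arithmetic behind that objection: if the PG lock is held for only a fraction of the end-to-end latency, the lock caps per-PG throughput at the inverse of the hold time, not the inverse of the total latency. A quick illustration (the 0.5 ms hold time is an assumed figure):

    # If only part of the 2 ms end-to-end latency is spent holding the PG
    # lock, the lock caps per-PG throughput at 1/hold_time, not 1/latency.
    total_latency_ms = 2.0    # end-to-end write latency from the example
    lock_hold_ms = 0.5        # assumed time actually spent under the PG lock

    iops_if_fully_serialized = 1000 / total_latency_ms   # 500 IOPS per PG
    iops_lock_limited = 1000 / lock_hold_ms              # 2000 IOPS per PG

    print(f"fully serialized: {iops_if_fully_serialized:.0f} IOPS/PG")
    print(f"lock-limited:     {iops_lock_limited:.0f} IOPS/PG")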

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread M Ranga Swami Reddy
SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-75E4T0B/AM | Samsung On Sat, Aug 19, 2017 at 10:44 PM, M Ranga Swami Reddy wrote: > Yes, it's in production and we used the pg count as per the pg calculator @ > ceph.com. > > On Fri, Aug 18, 2017 at 3:30 AM, Mehmet wrote: >> Whi…

[ceph-users] How much of the maximum BlueStore WAL and DB size is used in a normal environment?

2017-08-19 Thread liao junwei
Hi, According to the source code of Ceph 11.2.0, we know that the BlueStore WAL and DB are stored in the top 4% of the OSD disk space. But I found that it doesn't really need that much. I decided to modify it to 0.5%, and the metadata size was usually less than 0.2% in my experiments. Due to…
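
The gap the poster is pointing at, in numbers: 4% of the disk reserved versus roughly 0.2% actually used. A quick check using their own percentages (the 4 TB OSD size is an assumed example):

    # Reserved vs. observed BlueStore metadata space, per the poster's
    # figures; the 4 TB OSD size is an assumed example.
    osd_size_gb = 4000

    reserved_4pct  = osd_size_gb * 0.04    # default reservation in the source
    proposed_half  = osd_size_gb * 0.005   # the poster's 0.5% modification
    observed_usage = osd_size_gb * 0.002   # ~0.2% seen in their experiment

    print(f"reserved at 4%:   {reserved_4pct:.0f} GB")
    print(f"reserved at 0.5%: {proposed_half:.0f} GB")
    print(f"observed usage:   {observed_usage:.0f} GB")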

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread M Ranga Swami Reddy
I did not run only "osd bench". I also mapped an rbd image and ran a dd test on it... here too I got a much lower number with the image on the SSD pool compared with the image on the HDD pool. The SSD datasheet claims 500 MB/s, but I am getting somewhere near 50 MB/s with the dd command. On Fri, Aug 18, 2017 at 6:32 AM, Ch…
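
The 500 MB/s datasheet figure is typically for buffered sequential writes; with sync semantics, which is closer to what Ceph's write path demands, consumer SSDs drop off sharply. A sketch comparing buffered versus O_DSYNC sequential throughput, in Python as a rough stand-in for the dd test; the target path is a hypothetical mount of the device under test:

    # Compare buffered vs. O_DSYNC sequential write throughput, roughly
    # the gap between a datasheet's 500 MB/s and an observed 50 MB/s.
    import os, time

    def write_mb(path, flags, total_mb=256, block_kb=1024):
        buf = b"\0" * (block_kb * 1024)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | flags, 0o600)
        start = time.monotonic()
        for _ in range(total_mb * 1024 // block_kb):
            os.write(fd, buf)
        os.fsync(fd)                  # settle the buffered case too
        elapsed = time.monotonic() - start
        os.close(fd)
        os.unlink(path)
        return total_mb / elapsed

    print(f"buffered: {write_mb('/mnt/test/buf.bin', 0):.0f} MB/s")
    print(f"O_DSYNC:  {write_mb('/mnt/test/sync.bin', os.O_DSYNC):.0f} MB/s")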

Re: [ceph-users] Ceph cluster with SSDs

2017-08-19 Thread M Ranga Swami Reddy
Yes, it's in production and we used the pg count as per the pg calculator @ ceph.com. On Fri, Aug 18, 2017 at 3:30 AM, Mehmet wrote: > Which SSDs are used? Are they in production? If so, how is your PG count? > > On 17 August 2017 20:04:25 CEST, M Ranga Swami Reddy wrote: >> >> Hello, >> I am usin…
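
The rule of thumb behind the ceph.com PG calculator is roughly (OSD count × ~100) / replica size, rounded up to a power of two. A minimal sketch of that formula, with an assumed example cluster:

    # Rule-of-thumb PG count behind the ceph.com pg calculator:
    # (OSD count * ~100) / replica size, rounded up to a power of two.
    def suggested_pg_count(num_osds, pool_size=3, target_pgs_per_osd=100):
        raw = num_osds * target_pgs_per_osd / pool_size
        power = 1
        while power < raw:
            power *= 2
        return power

    print(suggested_pg_count(24))   # e.g. 24 OSDs, 3x replication -> 1024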

Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-08-19 Thread Martin Emrich
Hi! Apparently the message had nothing to do with the issue. It was just that after the threads affected by the SIGHUP issue crashed, the keystone-related stuff was all that was left. Regards, Martin On 19.08.17, 00:34, "Kamble, Nitin A" wrote: I see the same issue with ceph v12.1.4 as w…