Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
On Sun, 20 Aug 2017 08:38:54 +0200 Sinan Polat wrote:
> What has DWPD to do with performance / IOPS? The SSD will just fail earlier,
> but it should not have any affect on the performance, right?
Nothing, I listed BOTH reasons why these are unsuitable. You just don't buy something huge like 4
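The endurance side of that argument is easy to put into numbers. A minimal sketch converting a TBW rating into DWPD, using hypothetical datasheet figures (a 4 TB consumer SSD rated 300 TBW over a 5-year warranty; check the actual vendor datasheet):

```python
# Rough endurance arithmetic: convert a TBW rating into DWPD.
# The 300 TBW / 5-year figures are assumed, illustrative values.
capacity_tb = 4
tbw = 300                  # total terabytes written over the warranty
warranty_days = 5 * 365

dwpd = tbw / (capacity_tb * warranty_days)
print(f"DWPD ~= {dwpd:.3f}")   # a small fraction of one drive write per day
```

A value that far below 1 DWPD is why consumer drives wear out quickly under Ceph journal/write-amplification load, independent of their raw speed.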

[ceph-users] Ceph Random Read Write Performance

2017-08-20 Thread Sam Huracan
Hi, I have a question about Ceph's performance. I've built a Ceph cluster with 3 OSD hosts, each host's configuration:
- CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
- Memory: 2 x 16GB RDIMM
- Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1 for OS), 4 x 800GB Solid State Drive SATA (non-RAID for
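For a layout like the one above (3 hosts x 4 SSD OSDs), a common back-of-the-envelope divides raw device write IOPS by the replication factor and, with filestore-era journals co-located on the OSD device, by a further factor of 2 for the journal double-write. A hedged sketch; the 20k sync-write IOPS per SSD and replica count are assumptions, not measurements:

```python
# Back-of-the-envelope ceiling for client write IOPS in a replicated
# Ceph cluster. All device figures below are illustrative assumptions.
osds = 12                      # 3 hosts x 4 SSDs
per_osd_write_iops = 20_000    # assumed sustained sync-write IOPS per SSD
replicas = 3                   # assumed pool size
journal_penalty = 2            # filestore journal on the same device

raw = osds * per_osd_write_iops
client_write_iops = raw / (replicas * journal_penalty)
print(f"theoretical client write IOPS ceiling: {client_write_iops:,.0f}")
```

Real results will land well below this ceiling once network round-trips and CPU cost per IOP are included, which is why measuring with fio/rados bench matters more than the arithmetic.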

Re: [ceph-users] Ceph Random Read Write Performance

2017-08-20 Thread Christian Balzer
Hello,
On Sun, 20 Aug 2017 18:07:09 +0700 Sam Huracan wrote:
> Hi,
>
> I have a question about Ceph's performance
You really, really want to do yourself a favor and research things (aka googling the archives of this ML). Not a week or a month goes by without somebody asking this question.
> I've

Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Adrian Saul
> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-75E4T0B/AM | Samsung
The performance difference between these and the SM or PM863 range is night and day. I would not use these for anything you care about with performance, particularly IOPS or latency. Their write lat

Re: [ceph-users] Ceph cluster with SSDs

2017-08-20 Thread Christian Balzer
On Mon, 21 Aug 2017 01:48:49 + Adrian Saul wrote:
> > SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-75E4T0B/AM | Samsung
>
> The performance difference between these and the SM or PM863 range is night
> and day. I would not use these for anything you care abo
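The latency point in this thread is easy to quantify: for queue-depth-1 journal-style sync writes, achievable IOPS are bounded by 1/latency. A sketch with purely illustrative latencies (the ~2 ms consumer vs. ~50 µs DC-grade O_DSYNC figures are assumptions, not measurements of any specific drive):

```python
# How O_DSYNC write latency caps queue-depth-1 IOPS. The latencies
# used here are illustrative assumptions, not vendor-measured values.
def qd1_iops(latency_s: float) -> float:
    # At queue depth 1, each write must complete before the next starts.
    return 1.0 / latency_s

consumer = qd1_iops(2e-3)     # ~2 ms sync write (assumed consumer SSD)
dc_grade = qd1_iops(50e-6)    # ~50 us sync write (assumed DC SSD)
print(f"consumer: ~{consumer:,.0f} IOPS, DC-grade: ~{dc_grade:,.0f} IOPS")
```

That 40x gap at queue depth 1 is the "night and day" difference: Ceph journals issue exactly this kind of small synchronous write.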

Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7

2017-08-20 Thread TYLin
You can get the rpm from here:
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
You have to fix the path mismatch error in the repo file manually.
> On Aug 20, 2017, at 5:38 AM, Marc Roos wrote:
>
> Where can you get the nfs-ganesha-ceph rpm? Is

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-08-20 Thread Hyun Ha
Hi, thank you for the response. Details of my pool are below:
pool 2 'volumes' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 627 flags hashpspool stripe_width 0 removed_snaps [1~3]
My test case was about a disaster scenario. I think tha
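With `size 2 min_size 1`, a PG keeps serving I/O after one OSD loss but is left with no redundancy; losing the second copy leaves no live OSD to report the PG, which is exactly when `stale` states appear. A tiny sketch of that headroom arithmetic, using the pool parameters quoted above:

```python
# Redundancy headroom for a replicated pool, using the size/min_size
# values from the 'volumes' pool above.
size, min_size = 2, 1

# OSD losses the PG tolerates while still serving I/O:
failures_while_serving_io = size - min_size   # 1
# OSD losses after which no copy remains (PG goes stale/lost):
failures_until_no_copy = size                 # 2
print(failures_while_serving_io, failures_until_no_copy)
```

This is why `size 2` pools are fragile in disaster tests: a single overlapping second failure during recovery is enough to strand PGs.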

[ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Nick Tan
Hi all, I'm in the process of building a Ceph cluster, primarily to use CephFS. At this stage I'm in the planning phase and doing a lot of reading on best practices for building the cluster; however, there's one question I haven't been able to find an answer to. Is it better to use many host

Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Christian Balzer
On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:
> Hi all,
>
> I'm in the process of building a ceph cluster, primarily to use cephFS. At
> this stage I'm in the planning phase and doing a lot of reading on best
> practices for building the cluster, however there's one question that I
> haven'
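One way to frame the many-small-hosts vs. few-large-hosts trade-off is the share of cluster data that must be re-replicated when a whole host fails. A hedged sketch, assuming host-level CRUSH failure domains and roughly even data placement (the host counts are illustrative):

```python
# Fraction of cluster data to re-replicate after losing one host,
# for the same total capacity spread over different host counts.
# Host counts below are assumptions for illustration.
def recovery_fraction(n_hosts: int) -> float:
    # With host-level failure domains and even placement,
    # one host holds roughly 1/n of the cluster's data.
    return 1.0 / n_hosts

for n in (3, 10, 20):
    print(f"{n} hosts: ~{recovery_fraction(n):.1%} of cluster data to recover")
```

More, smaller hosts shrink both the recovery volume per failure and the surviving hosts' share of the recovery traffic, which is one of the usual arguments against a few very dense OSD nodes.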

Re: [ceph-users] pros/cons of multiple OSD's per host

2017-08-20 Thread Nick Tan
On Mon, Aug 21, 2017 at 1:58 PM, Christian Balzer wrote:
> On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:
> > Hi all,
> >
> > I'm in the process of building a ceph cluster, primarily to use cephFS. At
> > this stage I'm in the planning phase and doing a lot of reading on best
> > practice