On Sun, 20 Aug 2017 08:38:54 +0200 Sinan Polat wrote:
> What does DWPD have to do with performance / IOPS? The SSD will just fail earlier,
> but it should not have any effect on performance, right?
>
Nothing; I listed BOTH reasons why these are unsuitable.
You just don't buy something huge like 4
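If it helps to put a number on the endurance side, DWPD converts to total bytes written with simple arithmetic (the figures below are made up purely for illustration; check the actual data sheet):

  # TBW ~= capacity (TB) x DWPD x 365 x warranty years
  # e.g. an assumed 4 TB drive rated for 0.3 DWPD over a 5 year warranty:
  echo "4 * 0.3 * 365 * 5" | bc   # ~2190 TB written before the drive is worn out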
Hi,
I have a question about Ceph's performance
I've built a Ceph cluster with 3 OSD hosts; each host's configuration:
- CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
- Memory: 2 x 16GB RDIMM
- Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1 for OS)
4 x 800GB Solid State Drive SATA (non-RAID for
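For reference, a quick way to put an actual number on "performance" once the cluster is up is rados bench against a scratch pool (the pool name below is just a placeholder, and the numbers it gives are only a rough baseline):

  # 4 MB object writes for 30 seconds, keep the objects for the read test
  rados bench -p testpool 30 write --no-cleanup
  # sequential reads of what was just written, then clean up
  rados bench -p testpool 30 seq
  rados -p testpool cleanup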
Hello,
On Sun, 20 Aug 2017 18:07:09 +0700 Sam Huracan wrote:
> Hi,
>
> I have a question about Ceph's performance
You really, really want to do yourself a favor and research things (aka
googling the archives of this ML).
Not a week or a month goes by without somebody asking this question.
> I've
> SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-
> 75E4T0B/AM | Samsung
The performance difference between these and the SM or PM863 range is night and
day. I would not use these for anything where you care about performance,
particularly IOPS or latency.
Their write lat
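That latency difference is easy to reproduce yourself with the usual single-threaded O_DSYNC fio test (the device name below is a placeholder, and the run destroys data on it, so use a spare disk):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --runtime=60 --time_based --group_reporting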
On Mon, 21 Aug 2017 01:48:49 +0000 Adrian Saul wrote:
> > SSD make details : SSD 850 EVO 2.5" SATA III 4TB Memory & Storage - MZ-
> > 75E4T0B/AM | Samsung
>
> The performance difference between these and the SM or PM863 range is night
> and day. I would not use these for anything you care abo
You can get the rpm from here:
https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
You have to fix the path mismatch error in the repo file manually.
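Roughly like this; the exact baseurl correction depends on the mismatch, so treat the edit as a manual step:

  # fetch the repo file and fix the baseurl so it points at a directory that actually exists
  curl -o /etc/yum.repos.d/nfs-ganesha.repo \
    https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/old/2.3.0/CentOS/nfs-ganesha.repo
  vi /etc/yum.repos.d/nfs-ganesha.repo    # adjust baseurl by hand
  yum install nfs-ganesha-ceph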
> On Aug 20, 2017, at 5:38 AM, Marc Roos wrote:
>
>
>
> Where can you get the nfs-ganesha-ceph rpm? Is
Hi, thank you for the response.
Details of my pool are below:
pool 2 'volumes' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 128 pgp_num 128 last_change 627 flags hashpspool
stripe_width 0
removed_snaps [1~3]
My test case was about a disaster scenario. I think tha
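For what it's worth, size 2 / min_size 1 is exactly the combination that hurts most in a disaster scenario; if the capacity is there, the pool can be bumped to safer values (pool name taken from the dump above):

  ceph osd pool set volumes size 3
  ceph osd pool set volumes min_size 2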
Hi all,
I'm in the process of building a ceph cluster, primarily to use cephFS. At
this stage I'm in the planning phase and doing a lot of reading on best
practices for building the cluster; however, there's one question that I
haven't been able to find an answer to.
Is it better to use many host
On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:
> Hi all,
>
> I'm in the process of building a ceph cluster, primarily to use cephFS. At
> this stage I'm in the planning phase and doing a lot of reading on best
> practices for building the cluster, however there's one question that I
> haven'
On Mon, Aug 21, 2017 at 1:58 PM, Christian Balzer wrote:
> On Mon, 21 Aug 2017 13:40:29 +0800 Nick Tan wrote:
>
> > Hi all,
> >
> > I'm in the process of building a ceph cluster, primarily to use cephFS.
> At
> > this stage I'm in the planning phase and doing a lot of reading on best
> > practice