mentioned earlier. Single core CPU speed matters for latency, so you
probably want to up that.

You can also look at the DIMM configuration.
TBH I am not sure how much it impacts Ceph performance, but having just
a few DIMMs populated won't give you the max memory bandwidth.
Having some extra memory for read-cache probably won't hurt either (unless
you know your workload won't include any cacheable reads).

Cheers,
Robert van Leeuwen
From: ceph-users on behalf of Massimiliano Cuttini
Organization: PhoenixWeb Srl
Date: Wednesday, July 5, 2017 at 10:54 AM
To: "ceph-users@lists.ceph.com"
Subject: [ceph-users] New cluster - configuration tips and recommendation - NVMe
On 5 July 2017 at 19:54, Wido den Hollander wrote:
> I'd probably stick with 2x10Gbit for now and use the money I saved on more
> memory and faster CPUs.
>
On the latency point: you will get an improvement going from 10Gb to
25Gb, but stepping up to 100Gb won't significantly change things as 1
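A back-of-the-envelope Python sketch of why the wire stops mattering past
25Gb (the payload size and the software-latency comparison are my
assumptions, not from the thread):

# Wire serialization time for a 4 KiB payload at each link speed.
def serialization_us(payload_bytes: int, link_gbps: float) -> float:
    return payload_bytes * 8 / (link_gbps * 1e9) * 1e6

for gbps in (10, 25, 100):
    print(f"{gbps:>3} Gb/s: {serialization_us(4096, gbps):.2f} us per 4 KiB")
# 10 Gb/s: 3.28 us, 25 Gb/s: 1.31 us, 100 Gb/s: 0.33 us.
# Saving a microsecond on the wire is noise next to the hundreds of
# microseconds a Ceph OSD round trip typically costs in software.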
Interesting point: a 100Gbps NIC is PCIe x16 and NVMe is x4, so that's 64
PCIe lanes required.
Should work at full rate on a dual-socket server.
On 05/07/2017 11:41, Van Leeuwen, Robert wrote:
> Hi Max,
>
> You might also want to look at the PCIE lanes.
> I am not an expert on the matter but my guess would be t
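The lane arithmetic, spelled out as a Python sketch (device counts taken
from the 6x NVMe / 2x 100Gb build under discussion; the headroom comment
is my reading, not from the thread):

# PCIe lane budget: each 100GbE NIC wants an x16 slot, each NVMe an x4 link.
nics, lanes_per_nic = 2, 16
nvme, lanes_per_nvme = 6, 4
needed = nics * lanes_per_nic + nvme * lanes_per_nvme
print(f"lanes needed: {needed}")  # 56; the "64" above presumably adds
                                  # headroom for other devices on the bus
# One E5 v4 socket exposes 40 PCIe 3.0 lanes and two sockets expose 80,
# which is why this only fits at full rate on a dual-socket server.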
AM
To: "ceph-users@lists.ceph.com"
Subject: [ceph-users] New cluster - configuration tips and reccomendation - NVMe
Dear all,
luminous is coming and sooner we should be allowed to avoid double writing.
This means use 100% of the speed of SSD and NVMe.
Cluster made all of SSD and NVMe w
Hi Massimiliano,
I am a little surprised to see 6x NVMe, 64GB of RAM, 2x 100Gb NICs and an
E5-2603 v4: that's one of the cheapest Intel E5 CPUs mixed with some pretty
high-end gear, and it does not make sense. Wido's right, go with a much
higher frequency: E5-2637 v4, E5-2643 v4, E5-1660 v4, E5-1650 v4. If you
You will need CPUs as well if you want to push/fetch 200Gbps;
the 2603 really falls short.
(Not really an issue, but NVMe for the OS seems useless to me.)
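A rough sanity check in Python (base clocks from Intel's published specs;
the 200Gbps target and the cycles-per-byte framing are my assumptions, not
from the thread) comparing the E5-2603 v4 with one of the suggested SKUs:

# CPU cycles available per byte moved, if the node must sustain 200 Gb/s.
def cycles_per_byte(cores: int, base_ghz: float, gbps: float = 200.0) -> float:
    return (cores * base_ghz * 1e9) / (gbps * 1e9 / 8)

print(f"E5-2603 v4: {cycles_per_byte(6, 1.7):.2f} cycles/byte")  # ~0.41
print(f"E5-2637 v4: {cycles_per_byte(4, 3.5):.2f} cycles/byte")  # ~0.56
# Either budget is under one cycle per byte, and Ceph's OSD path costs
# far more than that per byte served -- hence "you will need CPUs as
# well": the 100Gb NICs would mostly sit idle behind this class of CPU.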
Dear all,
Luminous is coming, and soon we should be allowed to avoid double writes.
This means using 100% of the speed of SSDs and NVMe.
Clusters made entirely of SSDs and NVMe will no longer be penalized, and
will start to make sense.
Looking forward, I'm building the next pool of storage, which we'll set up
on ne