Hi Matteo,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Matteo Dacrema
> Sent: 11 November 2016 10:57
> To: Christian Balzer
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 6 Node cluster
Hi,
after your tips and considerations I've planned to use this hardware
configuration:
- 4x OSD nodes (to start the project):
1x Intel E5-1630v4 @ 4.00 GHz with turbo, 4 cores / 8 threads, 10MB cache
128GB RAM (does CPU frequency matter in terms of performance?)
4x Intel P3700 2TB NVMe
2x Mellanox C
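For what it's worth, a quick capacity sanity check on that layout (a rough sketch only; the 3x replication and the ~85% fill target below are my assumptions, not something stated here):

# Rough usable-capacity estimate for the proposed 4-node NVMe layout.
# Assumptions (not from the thread): 3x replication, keep the cluster
# below ~85% full so there is headroom to recover a failed node.

nodes = 4
nvme_per_node = 4
nvme_size_tb = 2.0          # Intel P3700 2TB
replication = 3             # assumed pool size
safe_fill = 0.85            # assumed nearfull target

raw_tb = nodes * nvme_per_node * nvme_size_tb
usable_tb = raw_tb / replication * safe_fill

print(f"raw: {raw_tb:.1f} TB, usable (size={replication}): {usable_tb:.1f} TB")
# raw: 32.0 TB, usable (size=3): 9.1 TB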
2016-10-11 9:20 GMT+02:00 Дробышевский, Владимир :
It may look like a boys' club, but I believe that for proof-of-concept
projects, or at the beginning of a commercial project without a lot of
investment, it is sometimes worth considering used hardware. For
example, it's possible to find used Quanta LB6M switches with 24x 10GbE
SFP+ ports for $298
Hello,
On Tue, 11 Oct 2016 08:30:47 +0200 Gandalf Corvotempesta wrote:
On 11 Oct 2016 3:05 AM, "Christian Balzer" wrote:
> 10Gb/s MC-LAG (white box) switches are also widely available and
> affordable.
>
Which models are you referring to?
I've never found any 10Gb switches for less than many thousands of euros.
The cheapest ones I've found are the Cisco small bu
Hello,
On Mon, 10 Oct 2016 14:56:40 +0200 Matteo Dacrema wrote:
> Hi,
>
> I’m planning a similar cluster.
> Because it's a new project I'll start with only a 2-node cluster, each with:
>
As Wido said, that's a very dense and risky proposition for a first-time
cluster.
Never mind the lack of a 3rd
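To spell out why the missing third node bites, here is a minimal sketch, assuming the usual defaults of pool size=3, min_size=2 and a host-level failure domain (none of which are stated above):

# With a host-level failure domain each replica must land on a
# different host, so a pool of size 3 cannot be satisfied by 2 hosts;
# with size=2, losing either host drops you below min_size and I/O stops.

def placeable(replicas: int, hosts: int) -> bool:
    """One replica per host (CRUSH-style host failure domain)."""
    return hosts >= replicas

for hosts in (2, 3):
    for size in (2, 3):
        print(f"{hosts} hosts, size={size}: "
              f"{'ok' if placeable(size, hosts) else 'cannot place all replicas'}")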
> On 10 October 2016 at 14:56, Matteo Dacrema wrote:
>
>
> Hi,
>
> I’m planning a similar cluster.
> Because it's a new project I'll start with only a 2-node cluster, each with:
>
2 nodes in a Ceph cluster is way too small in my opinion.
I suggest that you take a lot more, smaller nodes with l
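A rough illustration of the "more, smaller nodes" argument, using the raw capacity of the proposed two dense nodes (a sketch; the "one node is roughly 1/N of the data" figure assumes data is spread evenly):

# Same raw capacity spread over more, smaller nodes: losing one node
# then means losing a smaller slice of the cluster, and the recovery
# traffic is spread over more surviving nodes.

total_raw_tb = 2 * 24 * 1.92     # the proposed 2 dense nodes, 24x 1.92 TB each

for nodes in (2, 4, 6, 12):
    per_node_tb = total_raw_tb / nodes
    print(f"{nodes:2d} nodes of ~{per_node_tb:5.1f} TB: one failure puts "
          f"~{1 / nodes:.0%} of all data into recovery onto {nodes - 1} surviving node(s)")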
Hi,
I’m planning a similar cluster.
Because it's a new project I'll start with only a 2-node cluster, each with:
2x E5-2640v4 with 40 threads total @ 3.40 GHz with turbo
24x 1.92 TB Samsung SM863
128GB RAM
3x LSI 3008 in IT mode / HBA for OSDs - one per 8 OSDs/SSDs
2x SSD for OS
2x 40Gbit/s NIC
What
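A quick bandwidth sanity check on that node spec (a sketch; the ~500 MB/s per SM863 is an assumed SATA-limited figure, and replication traffic is ignored):

# Can 2x 40Gbit/s NICs keep up with 24 SATA SSDs in one node?
ssds = 24
ssd_mb_s = 500                  # assumed per-SSD throughput (SATA limit ~550 MB/s)
nic_gbit = 2 * 40

ssd_total_gb_s = ssds * ssd_mb_s / 1000   # ~12 GB/s of raw SSD bandwidth
nic_total_gb_s = nic_gbit / 8             # 10 GB/s of line rate

print(f"SSDs: ~{ssd_total_gb_s:.0f} GB/s, NICs: {nic_total_gb_s:.0f} GB/s")
# The disks alone can out-run the network, before counting replication
# writes that also have to cross the cluster network.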
Good morning,
>> * 2 x SN2100 100Gb/s Switch 16 ports
> Which incidentally is a half sized (identical HW really) Arctica 3200C.
really never heard of them :-) (and didn't find any price for the EUR/$
region)
>> * 10 x ConnectX 4LX-EN 25Gb card for hypervisor and OSD nodes
[...]
> You haven't commen
It would really help to have a better understanding of your application's
needs, IOPS versus bandwidth, etc.
If, for example, your DB transactions are small but plentiful (something
like 2000 transactions per second) against a well-defined and not too
large a working set, and all your other I/O nee
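To make the arithmetic behind that concrete (a sketch with assumed numbers, not from the thread: 3x replication, a filestore-style journal double write, and a placeholder OSD count):

# Very rough backend-write estimate for a small-transaction workload.
client_write_iops = 2000     # e.g. 2000 DB transactions/s, one small write each
replication = 3              # assumed pool size
journal_factor = 2           # assumed: journal + data write per replica (filestore)
osds = 16                    # placeholder: whatever the cluster ends up with

backend_iops = client_write_iops * replication * journal_factor
per_osd = backend_iops / osds
print(f"~{backend_iops} backend write IOPS, ~{per_osd:.0f} per OSD with {osds} OSDs")
# Trivial for SSD/NVMe OSDs; the interesting question is then latency,
# which is where CPU clock speed and the network come in.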
Hello,
On Wed, 05 Oct 2016 13:43:27 +0200 Denny Fuchs wrote:
> hi,
>
> I got a call from Mellanox and we now have an offer for the following
> network:
>
> * 2 x SN2100 100Gb/s Switch 16 ports
Which incidentally is a half sized (identical HW really) Arctica 3200C.
> * 10 x ConnectX 4LX-EN 25Gb
hi,
Even better than 10G: 25GbE is clocked faster than 10GbE, so you should
see slightly lower latency vs 10G. Just make sure the kernel
you will be using supports those NICs.
ah, nice to know :-) Thanks for that hint !
cu denny
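The "clocked faster" point is easy to put numbers on (a sketch of pure wire serialization time for a 4 KiB payload, ignoring protocol overhead and switch latency):

# Wire serialization time for a 4 KiB payload at different link speeds.
payload_bits = 4096 * 8

for gbit in (10, 25):
    micros = payload_bits / (gbit * 1e9) * 1e6
    print(f"{gbit:2d} GbE: {micros:.2f} us on the wire")
# 10 GbE: 3.28 us, 25 GbE: 1.31 us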
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Denny Fuchs
> Sent: 05 October 2016 12:43
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 6 Node cluster with 24 SSD per node:
> Hardwareplanning/ agreement
hi,
I got a call from Mellanox and we now have an offer for the following
network:
* 2 x SN2100 100Gb/s Switch 16 ports
* 10 x ConnectX 4LX-EN 25Gb card for hypervisor and OSD nodes
* 4 x Mellanox QSA adapters (QSFP to SFP+) for interconnecting to our
HP 2920 switches
* 3 x Copper split cabl
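A rough port-budget sketch for that offer (my arithmetic and assumptions only: the "copper split cables" are taken to be 4x25GbE breakouts, two of the four QSA adapters go to each switch, and one port per switch is reserved as an inter-switch link):

# Port budget for one SN2100 (16x QSFP28 100GbE, half-width switch).
total_qsfp28 = 16
qsa_uplinks = 2                 # assumed: two QSA/SFP+ adapters per switch
isl_ports = 1                   # assumed: one inter-switch link port

host_ports = total_qsfp28 - qsa_uplinks - isl_ports
max_25g_endpoints = host_ports * 4   # each QSFP28 splits into 4x 25GbE

print(f"{host_ports} breakout ports -> up to {max_25g_endpoints} x 25GbE host links per switch")
# Plenty of headroom for the 10 ConnectX-4 LX nodes in the offer.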
Hi,
On 05.10.2016 10:48, Christian Balzer wrote:
The switch has nothing to do with IPoIB; as the name implies, it's
entirely native Infiniband with IP encoded onto it.
Thus it benefits from fast CPUs.
ahh, I suggested it ... :-) but from some Mellanox documents I thought
it had to be suppor
Hello,
On Wed, 05 Oct 2016 10:18:19 +0200 Denny Fuchs wrote:
> Hi and good morning,
>
> On 04.10.2016 17:19, Burkhard Linke wrote:
>
> >> * Storage NIC: 1 x Infiniband MCX314A-BCCT
> >> ** I read that the ConnectX-3 Pro is better supported than the X-4 and a
> >> bit cheaper
> >> ** Switch: 2
Hi and good morning,
On 04.10.2016 17:19, Burkhard Linke wrote:
* Storage NIC: 1 x Infiniband MCX314A-BCCT
** I read that the ConnectX-3 Pro is better supported than the X-4 and a
bit cheaper
** Switch: 2 x Mellanox SX6012 (56Gb/s)
** Active FC cables
** Maybe VPI is nice to have, but unsure.
Hi,
some thoughts about network and disks inline
On 10/04/2016 03:43 PM, Denny Fuchs wrote:
Hello,
*snipsnap*
* Storage NIC: 1 x Infiniband MCX314A-BCCT
** I read that the ConnectX-3 Pro is better supported than the X-4 and a
bit cheaper
** Switch: 2 x Mellanox SX6012 (56Gb/s)
** Active FC