Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-05 Thread Denny Fuchs
hi, With Xeon E3 1245s (3.6GHz with all 4 cores turbo'd) and a P3700 journal with 10Gb networking I have managed to get it down to around 600-700us. Make sure you force P-states and C-states, as without that I was only getting about 2ms. I've written it in our buy/change list :-) Ah ok, fair dos.
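For reference, pinning the C-states at runtime can be done by holding /dev/cpu_dma_latency open with a latency target of 0, while forcing the performance cpufreq governor covers the P-state side. A minimal Python sketch of that tuning (assumes a Linux host, root privileges, and the usual sysfs cpufreq layout):

import glob
import os
import struct

# Request a maximum CPU wakeup latency of 0 us; the kernel keeps all
# cores out of deep C-states for as long as this descriptor stays open.
fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
os.write(fd, struct.pack("i", 0))

# Pin every core to the "performance" governor so clocks do not drop
# between I/O bursts (the P-state half of the tuning).
for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
    with open(path, "w") as f:
        f.write("performance")

input("C-states pinned; press Enter to release...")
os.close(fd)  # closing the descriptor re-enables deep C-states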

Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-04 Thread Christian Balzer
Hello, replying to the original post for quoting reasons. Totally agree with what the others (Nick and Burkhard) wrote. On Tue, 04 Oct 2016 15:43:18 +0200 Denny Fuchs wrote: > Hello, > > we are brand new to Ceph and planning it as our future storage for > KVM/LXC VMs as replacement for Xen /

Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-04 Thread Nick Fisk
> -----Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Denny Fuchs > Sent: 04 October 2016 15:51 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware > plannin

Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-04 Thread Denny Fuchs
Hi, thanks for taking a look :-) On 04.10.2016 16:11, Nick Fisk wrote: We have two goals: * High availability * Short latency for our transaction services How low? See below re CPUs. So low, whatever is possible without doing crazy stuff. We are thinking of putting the database on Ceph too, instead
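Since the transaction-latency question really hinges on what a database commit would see, a quick way to check is to time small synchronous writes on a filesystem backed by an RBD image. A rough Python sketch (the mount point and write count are made-up values for illustration):

import os
import time

PATH = "/mnt/rbd-test/latency.dat"  # hypothetical mount of an RBD image
WRITES = 1000

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DSYNC)
start = time.monotonic()
for _ in range(WRITES):
    os.pwrite(fd, b"\0" * 4096, 0)  # 4k synchronous write, like a WAL commit
elapsed = time.monotonic() - start
os.close(fd)
print("avg sync 4k write latency: %.0f us" % (elapsed / WRITES * 1e6))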

Re: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-04 Thread Nick Fisk
Hi, Comments inline > -----Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Denny Fuchs > Sent: 04 October 2016 14:43 > To: ceph-users@lists.ceph.com > Subject: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware plann

[ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning / agreement

2016-10-04 Thread Denny Fuchs
Hello, we are brand new to Ceph and planning it as our future storage for KVM/LXC VMs, as a replacement for our Xen / DRBD / Pacemaker / Synology (NFS) stuff. We have two goals: * High availability * Short latency for our transaction services * For later: replication to a different datacenter connect
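On the latency goal: once a test pool exists, the raw cluster round trip can be measured from any client with the python-rados bindings, independent of the VM stack. A minimal sketch, assuming a pool named "rbd" and the default /etc/ceph/ceph.conf:

import time
import rados  # python-rados bindings shipped with Ceph

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")  # assumed pool name

N = 200
start = time.monotonic()
for i in range(N):
    # write_full is synchronous: it returns once the write is acknowledged,
    # so each iteration is one client-to-cluster round trip.
    ioctx.write_full("latency-probe-%d" % i, b"\0" * 4096)
elapsed = time.monotonic() - start
print("avg 4k object write latency: %.0f us" % (elapsed / N * 1e6))

ioctx.close()
cluster.shutdown()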