Re: [ceph-users] Number of SSD for OSD journal

2014-12-16 Thread Christian Balzer
On Tue, 16 Dec 2014 12:10:42 +0300 Mike wrote: > 16.12.2014 10:53, Daniel Schwager writes: > > Hello Mike, > > > >> There is also another way. > >> * for CONF 2,3 replace the 200GB SSDs with 800GB and add another 1-2 SSDs to > >> each node. > >> * make a tier1 read-write cache on SSDs > >> * also you ca

Re: [ceph-users] Number of SSD for OSD journal

2014-12-16 Thread Mike
16.12.2014 10:53, Daniel Schwager writes: > Hello Mike, > >> There is also another way. >> * for CONF 2,3 replace the 200GB SSDs with 800GB and add another 1-2 SSDs to >> each node. >> * make a tier1 read-write cache on SSDs >> * also you can add a journal partition on them if you wish - then data >> will

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Daniel Schwager
Hello Mike, > There is also another way. > * for CONF 2,3 replace the 200GB SSDs with 800GB and add another 1-2 SSDs to > each node. > * make a tier1 read-write cache on SSDs > * also you can add a journal partition on them if you wish - then data > will move from SSD to SSD before landing on HDD > * o

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Mike
15.12.2014 23:45, Sebastien Han writes: > Hi, > > The general recommended ratio (for me at least) is 3 journals per SSD. Using > 200GB Intel DC S3700 is great. > If you’re going with a low-perf scenario I don’t think you should bother > buying SSDs, just remove them from the picture and do 12 S

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Christian Balzer
Hello, On Mon, 15 Dec 2014 22:43:14 +0100 Florent MONTHEL wrote: > Thanks all > > I will probably have 2x10Gb: 1x10Gb for client and 1x10Gb for cluster, > but I will take your recommendation into account, Sebastien > > The 200GB SSD will probably give me around 500MB/s sequential bandwidth. Intel DC S3

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Craig Lewis
I was going with a low perf scenario, and I still ended up adding SSDs. Everything was fine in my 3 node cluster, until I wanted to add more nodes. Admittedly, I was a bit aggressive with the expansion. I added a whole node at once, rather than one or two disks at a time. Still, I wasn't expect

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Florent MONTHEL
Thanks all. I will probably have 2x10Gb: 1x10Gb for client and 1x10Gb for cluster, but I will take your recommendation into account, Sebastien. The 200GB SSD will probably give me around 500MB/s sequential bandwidth. So with only 2 SSDs I could saturate a 1x10Gb network. Hmm, I will take care of OSD density
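The saturation claim above can be checked with back-of-the-envelope arithmetic. This is a rough sketch using the figures quoted in the thread (~500 MB/s per SSD, a 10 Gb/s link), not measurements; real DC S3700 write bandwidth and protocol overhead will shift the numbers.

```python
# Rough check: can two journal SSDs fill a single 10GbE link?
# All figures are assumptions taken from the thread, not benchmarks.

ssd_seq_mb_s = 500          # assumed sequential write bandwidth per SSD
num_ssds = 2
link_mb_s = 10_000 / 8      # 10 Gb/s ~= 1250 MB/s, ignoring protocol overhead

aggregate = ssd_seq_mb_s * num_ssds
print(f"{aggregate} MB/s of journal writes vs {link_mb_s:.0f} MB/s of link")
if aggregate >= link_mb_s:
    print("link saturated")
else:
    print(f"link utilisation ~{aggregate / link_mb_s:.0%}")
```

With these numbers two SSDs land at roughly 80% of the link's theoretical rate, so "close to saturating" is a fair summary; a third SSD (or protocol overhead) pushes it over.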

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Sebastien Han
Hi, The general recommended ratio (for me at least) is 3 journals per SSD. Using 200GB Intel DC S3700 is great. If you’re going with a low-perf scenario I don’t think you should bother buying SSDs, just remove them from the picture and do 12 SATA 7.2K 4TB. For medium and medium ++ perf using
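The 3-journals-per-SSD ratio translates directly into a per-node SSD count. A minimal sizing sketch, assuming the ratio from this message and a whole number of SSDs per node:

```python
import math

def journal_ssds_needed(num_osds: int, journals_per_ssd: int = 3) -> int:
    """Number of journal SSDs for a node with num_osds spinning-disk OSDs,
    at a fixed journals-per-SSD ratio (3 is the ratio suggested here)."""
    return math.ceil(num_osds / journals_per_ssd)

# A 12-disk node at the recommended ratio needs 4 journal SSDs;
# a 10-disk node still needs 4 (the partial SSD counts in full).
print(journal_ssds_needed(12))
print(journal_ssds_needed(10))
```

The ratio is a balance point: more journals per SSD saves money but widens the failure domain, since losing one journal SSD takes down every OSD journaling on it.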

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Nick Fisk
...@lists.ceph.com] On Behalf Of Florent MONTHEL Sent: 15 December 2014 19:45 To: ceph-users@lists.ceph.com Subject: [ceph-users] Number of SSD for OSD journal Hi, I’m buying several servers to test Ceph and I would like to configure the journal on SSD drives (maybe it’s not necessary for all use cases

[ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Florent MONTHEL
Hi, I’m buying several servers to test Ceph and I would like to configure the journal on SSD drives (maybe it’s not necessary for all use cases). Could you help me identify the number of SSDs I need (SSDs are very expensive and their GB price is a business-case killer… )? I don’t want to experience an SSD bottleneck
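Besides the SSD count, each journal partition also needs a size. The FileStore-era Ceph documentation gives a sizing rule: osd journal size = 2 * (expected throughput * filestore max sync interval). A sketch with illustrative values (the 100 MB/s disk throughput and 5 s sync interval are example assumptions, not recommendations):

```python
def journal_size_mb(throughput_mb_s: float, sync_interval_s: float = 5.0) -> float:
    """Journal size per the FileStore rule of thumb:
    2 * expected throughput * filestore max sync interval."""
    return 2 * throughput_mb_s * sync_interval_s

# A 7.2K SATA disk at ~100 MB/s with a 5 s sync interval needs ~1 GB of journal.
print(journal_size_mb(100))
```

So even several journals per SSD consume only a few GB; the rest of the SSD's capacity matters mainly for endurance and write performance, not space.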