Re: [ceph-users] Number of SSD for OSD journal

2014-12-16 Thread Christian Balzer
On Tue, 16 Dec 2014 12:10:42 +0300 Mike wrote:
> 16.12.2014 10:53, Daniel Schwager writes:
> > Hallo Mike,
> >
> >> There is also another way:
> >> * for CONF 2,3 replace the 200GB SSDs with 800GB ones and add another
> >> 1-2 SSDs to each node.
> >> * make a tier1 read-write cache on the SSDs
> >> * also you can add a journal partition on them if you wish - then data
> >> will move from SSD to SSD before being written down to HDD

Re: [ceph-users] Number of SSD for OSD journal

2014-12-16 Thread Mike
16.12.2014 10:53, Daniel Schwager writes:
> Hallo Mike,
>
>> There is also another way:
>> * for CONF 2,3 replace the 200GB SSDs with 800GB ones and add another
>> 1-2 SSDs to each node.
>> * make a tier1 read-write cache on the SSDs
>> * also you can add a journal partition on them if you wish - then data
>> will move from SSD to SSD before being written down to HDD

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Daniel Schwager
Hallo Mike,

> There is also another way:
> * for CONF 2,3 replace the 200GB SSDs with 800GB ones and add another
> 1-2 SSDs to each node.
> * make a tier1 read-write cache on the SSDs
> * also you can add a journal partition on them if you wish - then data
> will move from SSD to SSD before being written down to HDD
> * o
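The tier1 read-write cache idea above maps onto Ceph's cache tiering feature. A minimal sketch of the CLI steps, assuming a hypothetical HDD-backed pool `cold-hdd` and an SSD-backed pool `hot-ssd` (pool names and the 200GB cap are illustrative, not from the thread):

```
# Attach the SSD pool as a cache tier in front of the HDD pool
ceph osd tier add cold-hdd hot-ssd
# Writeback mode: writes land on the SSD tier first, then flush to HDD
ceph osd tier cache-mode hot-ssd writeback
# Route client I/O for cold-hdd through the cache tier
ceph osd tier set-overlay cold-hdd hot-ssd
# Bound the cache so flushing/eviction starts before the SSDs fill up
ceph osd pool set hot-ssd target_max_bytes 200000000000
```

Whether this beats plain SSD journals depends heavily on the working set fitting in the cache tier, which is exactly the trade-off being debated in this thread.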

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Mike
15.12.2014 23:45, Sebastien Han writes:
> Salut,
>
> The general recommended ratio (for me at least) is 3 journals per SSD.
> Using 200GB Intel DC S3700 is great.
> If you're going with a low perf scenario I don't think you should bother
> buying SSDs, just remove them from the picture and do 12 SATA 7.2K 4TB

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Christian Balzer
Hello,

On Mon, 15 Dec 2014 22:43:14 +0100 Florent MONTHEL wrote:
> Thanks all
>
> I will probably have 2x10Gb: 1x10Gb for clients and 1x10Gb for the
> cluster network, but I will take your recommendation into account,
> Sebastien.
>
> The 200GB SSD will probably give me around 500MB/s sequential bandwidth.
Intel DC S3700

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Craig Lewis
I was going with a low perf scenario, and I still ended up adding SSDs. Everything was fine in my 3-node cluster, until I wanted to add more nodes. Admittedly, I was a bit aggressive with the expansion: I added a whole node at once, rather than one or two disks at a time. Still, I wasn't expect
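An aggressive expansion like the one described above kicks off heavy backfill against every existing OSD at once. A hedged sketch of the filestore-era ceph.conf throttles that limit this impact (option names are from Ceph's OSD configuration; the values shown are conservative examples, not settings from this thread):

```
[osd]
; at most one concurrent backfill per OSD
osd max backfills = 1
; at most one active recovery operation per OSD
osd recovery max active = 1
; deprioritize recovery I/O relative to client I/O
osd recovery op priority = 1
```

Lower values slow recovery down but leave more disk and network bandwidth for client traffic, which matters most on journal-less spinning-disk OSDs.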

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Florent MONTHEL
Thanks all

I will probably have 2x10Gb: 1x10Gb for clients and 1x10Gb for the cluster network, but I will take your recommendation into account, Sebastien.

The 200GB SSD will probably give me around 500MB/s sequential bandwidth, so with only 2 SSDs I can nearly saturate a single 10Gb link. Hum, I will take care of OSD density
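The bandwidth claim above is easy to sanity-check. A small sketch, assuming the thread's figures (~500 MB/s sequential writes per 200GB SSD, a 10Gb/s link, protocol overhead ignored):

```python
ssd_mb_s = 500          # per-SSD sequential write, MB/s (thread's ballpark figure)
n_ssd = 2
nic_mb_s = 10_000 / 8   # 10 Gb/s = 1250 MB/s, ignoring protocol overhead

ssd_total = n_ssd * ssd_mb_s
print(ssd_total, nic_mb_s, ssd_total / nic_mb_s)  # -> 1000 1250.0 0.8
```

So two SSDs soak up about 80% of one 10GbE link, close to saturation but not past it; a third journal SSD would push past the line speed.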

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Sebastien Han
Salut,

The general recommended ratio (for me at least) is 3 journals per SSD. Using 200GB Intel DC S3700 is great.
If you're going with a low perf scenario I don't think you should bother buying SSDs; just remove them from the picture and do 12 SATA 7.2K 4TB. For medium and medium++ perf using
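The 3-journals-per-SSD ratio translates directly into SSD counts per node. A minimal sketch, assuming the 12-disk node used in the low-perf example above:

```python
import math

def ssds_needed(num_osds, journals_per_ssd=3):
    """SSDs required to host journals for num_osds OSDs at the given ratio."""
    return math.ceil(num_osds / journals_per_ssd)

print(ssds_needed(12))  # -> 4
```

At this ratio a 12-OSD node needs 4 journal SSDs, which is where the per-node cost argument against SSDs in the low-perf scenario comes from.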

Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Nick Fisk
Hi Florent,

Journals don't need to be very big; 5-10GB per OSD would normally be ample. The key is that you get an SSD with high write endurance, which makes the Intel S3700 drives perfect for journal use. In terms of how many OSDs you can run per SSD, it depends purely on how important perf
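In filestore-era Ceph the journal size above is set per OSD via `osd journal size` in ceph.conf, expressed in MB. A minimal sketch matching the 5-10GB suggestion (the 10GB value is illustrative):

```
[osd]
; journal size in MB; 10240 MB = 10 GB, the top of the range suggested above
osd journal size = 10240
```

With 3 journals per SSD at 10GB each, only ~30GB of a 200GB S3700 is used for journals; the endurance rating, not the capacity, is what the drive is being bought for.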