> On 13 July 2016 at 11:51, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> 
> 
> Okie perfect.
> 
> It may sound like a random question, but what size would you recommend for the 
> SATA-DOM? I know the standard OS space requirements, but will Ceph 
> require much on the root OS of an OSD-only node apart from standard logs?
> 

32GB should be sufficient; even 16GB should be OK. But I'd go with 
32GB so you have enough space when you need it.
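
For a rough sense of why 32GB is comfortable, here is a back-of-the-envelope
sketch; all the figures are assumptions for illustration, not measurements of
any particular distro or Ceph release:

# Rough space budget for the root device of an OSD-only node.
# Every figure below is an assumption; measure your own install.
base_os_gb   = 4   # minimal server install
ceph_pkgs_gb = 1   # ceph packages and dependencies
logs_gb      = 5   # /var/log with rotated ceph-osd and system logs
swap_gb      = 4   # optional swap partition
headroom_gb  = 8   # upgrades, core dumps, temporary files

total_gb = base_os_gb + ceph_pkgs_gb + logs_gb + swap_gb + headroom_gb
print(f"Estimated root device usage: {total_gb} GB")  # ~22 GB, so 32 GB leaves margin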

Wido

> Ashley
> 
> -----Original Message-----
> From: Wido den Hollander [mailto:w...@42on.com] 
> Sent: 13 July 2016 10:44
> To: Ashley Merrick <ash...@amerrick.co.uk>; ceph-users@lists.ceph.com; 
> Christian Balzer <ch...@gol.com>
> Subject: RE: [ceph-users] SSD Journal
> 
> 
> > On 13 July 2016 at 11:34, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> > 
> > 
> > Hello,
> > 
> > Looking at using 2 x 960GB SSDs (SM863).
> > 
> > The reason for the larger size is that I was thinking we would be better off with them in RAID 1, 
> > so there is enough space for the OS and all journals.
> > 
> > Or am I better off using 2 x 200GB S3700s instead, with 5 disks per 
> > SSD?
> > 
> 
> Both the Samsung SM and Intel DC (3510/3710) SSDs are good. If you can, put 
> the OS on its own device. Maybe a SATA-DOM, for example?
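
On sizing the journal partitions themselves, here is a sketch based on the
FileStore journal-size rule of thumb (2 x expected throughput x filestore max
sync interval); the disk throughput and the 10 GB partition size are
assumptions for illustration:

# FileStore journal sizing rule of thumb:
#   osd journal size >= 2 * expected throughput * filestore max sync interval
# The throughput and partition figures below are assumptions.
sata_disk_mb_s   = 150   # sequential write throughput of one SATA OSD (assumed)
sync_interval_s  = 5     # default 'filestore max sync interval'
journals_per_ssd = 5
ssd_capacity_gb  = 200   # e.g. one 200GB S3700

min_journal_gb = 2 * sata_disk_mb_s * sync_interval_s / 1024.0
partition_gb   = 10      # round up generously per journal partition (assumed)
used_gb        = journals_per_ssd * partition_gb

print(f"Minimum journal size per OSD: {min_journal_gb:.1f} GB")
print(f"{journals_per_ssd} x {partition_gb} GB partitions use {used_gb} GB "
      f"of the {ssd_capacity_gb} GB SSD")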
> 
> Wido
> 
> > Thanks,
> > Ashley
> > 
> > -----Original Message-----
> > From: Christian Balzer [mailto:ch...@gol.com] 
> > Sent: 13 July 2016 01:12
> > To: ceph-users@lists.ceph.com
> > Cc: Wido den Hollander <w...@42on.com>; Ashley Merrick 
> > <ash...@amerrick.co.uk>
> > Subject: Re: [ceph-users] SSD Journal
> > 
> > 
> > Hello,
> > 
> > On Tue, 12 Jul 2016 19:14:14 +0200 (CEST) Wido den Hollander wrote:
> > 
> > > 
> > > > On 12 July 2016 at 15:31, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> > > > 
> > > > 
> > > > Hello,
> > > > 
> > > > Looking at the final stages of planning/setup for a Ceph cluster.
> > > > 
> > > > Per storage node, looking at:
> > > > 
> > > > 2 x SSD (OS / journal)
> > > > 10 x SATA disks
> > > > 
> > > > Will have a small RAID 1 partition for the OS; however, I'm not sure whether it's best 
> > > > to do:
> > > > 
> > > > 5 x journals per SSD
> > > 
> > > Best solution. Will give you the most performance for the OSDs. RAID-1 
> > > will just burn through cycles on the SSDs.
> > > 
> > > SSDs don't fail that often.
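
To put rough numbers on the RAID-1 point, a small sketch (the throughput
figures are assumptions for illustration):

# Why RAID-1 journals burn SSD write cycles: every journal byte is mirrored,
# so each SSD absorbs the full journal stream instead of half of it.
client_write_mb_s = 500   # aggregate client writes hitting this node's journals (assumed)
ssd_write_mb_s    = 365   # sustained sequential write limit of one SSD (assumed)

raid1_per_ssd = client_write_mb_s        # mirrored: both SSDs write everything
split_per_ssd = client_write_mb_s / 2    # 5 journals per SSD: load is halved

print(f"RAID-1 journals: {raid1_per_ssd} MB/s written per SSD (limit {ssd_write_mb_s})")
print(f"Split journals:  {split_per_ssd:.0f} MB/s written per SSD (limit {ssd_write_mb_s})")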
> > >
> > What Wido wrote, but let us know what SSDs you're planning to use.
> > 
> > Because the detailed version of that sentence should read: 
> > "Well-known and tested DC-level SSDs whose size/endurance levels are 
> > matched to the workload rarely fail, especially unexpectedly."
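
One way to do that matching is a quick endurance check against the drive's
DWPD rating; the rating and the daily write volume below are assumptions,
so use the datasheet for your drive and your own measured workload:

# Endurance check: does the SSD's rated write budget cover the journal load?
ssd_capacity_gb  = 200
dwpd             = 10      # drive writes per day over the warranty period (assumed, see datasheet)
warranty_years   = 5
daily_journal_gb = 1000    # journal bytes written to this SSD per day for 5 OSDs (assumed)

rated_daily_gb = ssd_capacity_gb * dwpd
lifetime_tbw   = rated_daily_gb * 365 * warranty_years / 1000.0

print(f"Rated: {rated_daily_gb} GB/day, {lifetime_tbw:.0f} TBW over {warranty_years} years")
print(f"Workload: {daily_journal_gb} GB/day = "
      f"{daily_journal_gb / rated_daily_gb:.0%} of the rated budget")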
> >  
> > > Wido
> > > 
> > > > 10 x journals on a RAID 1 of two SSDs
> > > > 
> > > > Is the "performance" increase from splitting 5 journals on each SSD 
> > > > worth the "issue" caused when one SSD goes down?
> > > > 
> > As always, assume that at least a whole node is the failure domain you need to be 
> > able to handle.
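
For a feel of what each failure takes out, a small sketch with an assumed
cluster layout (node count and disk sizes are made up for illustration):

# What goes offline when a journal SSD or a whole node fails, for an assumed
# cluster of 5 identical nodes with 10 x 4TB OSDs each and 5 journals per SSD.
nodes, osds_per_node, osd_tb = 5, 10, 4
journals_per_ssd = 5

cluster_tb = nodes * osds_per_node * osd_tb
failures = {
    "journal SSD": journals_per_ssd * osd_tb,   # takes down its 5 OSDs
    "node":        osds_per_node * osd_tb,      # takes down all 10 OSDs
}
for what, lost_tb in failures.items():
    print(f"Losing one {what}: {lost_tb} TB of OSD capacity offline "
          f"({lost_tb / cluster_tb:.0%} of the cluster) to re-replicate")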
> > 
> > Christian
> > 
> > > > Thanks,
> > > > Ashley
> > > 
> > 
> > 
> > -- 
> > Christian Balzer        Network/Systems Engineer                
> > ch...@gol.com       Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
