Re: [ceph-users] Ceph Journal Disk Size

2015-07-08 Thread Quentin Hartman
I don't see it as being any worse than having multiple journals on a single drive. If your journal drive tanks, you're out X OSDs as well. It's arguably better, since the number of affected OSDs per drive failure is lower. Admittedly, neither deployment is ideal, but it's an effective way to get from

Re: [ceph-users] Ceph Journal Disk Size

2015-07-08 Thread Mark Nelson
The biggest thing to be careful of with this kind of deployment is that a single drive failure will now take out 2 OSDs instead of 1, which means OSD failure rates and the associated recovery traffic go up. I'm not sure that's worth the trade-off... Mark

On 07/08/2015 11:01 AM, Quentin Hartman wr
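
A quick back-of-the-envelope on that point, assuming independent drive failures and a purely illustrative 5% annual failure rate per drive (not a figure from this thread):

    # An OSD whose journal sits on a second spinner is down if either of the
    # two drives dies, so its effective failure rate roughly doubles.
    awk 'BEGIN {
        afr = 0.05                          # assumed per-drive annual failure rate
        printf "%.4f\n", 1 - (1 - afr)^2    # ~0.0975, i.e. roughly 2 * afr
    }'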

Re: [ceph-users] Ceph Journal Disk Size

2015-07-08 Thread Quentin Hartman
Regarding using spinning disks for journals: before I was able to put SSDs in my deployment I came up with a somewhat novel journal setup that gave my cluster way more life than having all the journals on a single disk, or having the journal on the disk with the OSD. I called it "interleaved journa
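
The full description is cut off above, but one possible reading of an "interleaved" layout is each OSD journaling to a small partition on a neighbouring data spinner rather than its own. A minimal sketch with ceph-disk, assuming hypothetical device names and a pre-made partition layout (not necessarily Quentin's exact scheme):

    # Assumed layout: each spinner pre-partitioned with a small journal
    # partition (sdX1, ~10 GB) and a large data partition (sdX2); every OSD
    # journals to the *next* spinner rather than its own.
    ceph-disk prepare /dev/sdb2 /dev/sdc1   # OSD data on sdb, journal on sdc
    ceph-disk prepare /dev/sdc2 /dev/sdd1   # OSD data on sdc, journal on sdd
    ceph-disk prepare /dev/sdd2 /dev/sdb1   # OSD data on sdd, journal wraps to sdb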

Re: [ceph-users] Ceph Journal Disk Size

2015-07-03 Thread Van Leeuwen, Robert
> Another issue is performance: you'll get 4x more IOPS with 4 x 2TB drives than with one single 8TB. So if you have a performance target, your money might be better spent on smaller drives.

Regardless of the discussion of whether it is smart to have very large spinners: be aware that some of the b
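
The spindle-count arithmetic behind that, assuming a 7.2k-rpm drive delivers on the order of 100 random IOPS regardless of capacity (an assumed figure, not a measurement from this thread):

    # Same raw capacity, very different aggregate random-IOPS budget:
    echo "$(( 4 * 100 )) IOPS-ish from 4 x 2TB spinners"
    echo "$(( 1 * 100 )) IOPS-ish from 1 x 8TB spinner"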

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Lionel Bouton
On 07/02/15 19:13, Shane Gibson wrote:
> Lionel - thanks for the feedback ... inline below ...
> On 7/2/15, 9:58 AM, "Lionel Bouton" wrote:
>> Ouch. These spinning disks are probably a bottleneck: there is regular advice on this list to use one

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
Lionel - thanks for the feedback ... inline below ...

On 7/2/15, 9:58 AM, "Lionel Bouton" <lionel+c...@bouton.name> wrote:
> Ouch. These spinning disks are probably a bottleneck: there is regular advice on this list to use one DC SSD for 4 OSDs. You would probably be better off with a ded
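
A minimal sketch of that 1-SSD-per-4-OSDs layout with ceph-disk, assuming hypothetical device names (four data spinners sdb-sde, one DC SSD sdf); passing the whole SSD as the journal argument should let ceph-disk carve a fresh journal partition on it for each OSD it prepares:

    # One DC SSD (/dev/sdf) journaling for four spinner-backed OSDs:
    for disk in sdb sdc sdd sde; do
        ceph-disk prepare /dev/$disk /dev/sdf
    done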

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Lionel Bouton
On 07/02/15 18:27, Shane Gibson wrote:
> On 7/2/15, 9:21 AM, "Nate Curry" wrote:
>> Are you using the 4TB disks for the journal?
>
> Nate - yes, at the moment the journal is on 4 TB 7200 rpm disks as well as the OSDs. It's what I've got for hardware ... si

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
On 7/2/15, 9:21 AM, "Nate Curry" <cu...@mosaicatm.com> wrote:
> Are you using the 4TB disks for the journal?

Nate - yes, at the moment the journal is on 4 TB 7200 rpm disks as well as the OSDs. It's what I've got for hardware ... sitting around in 60 servers that I could grab. I realiz

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Nate Curry
Are you using the 4TB disks for the journal?

*Nate Curry*
IT Manager ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com

On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson wrote:
> I'd def be happy to share what numbers I can get out of it. I'm still a neophyte

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
I'd def be happy to share what numbers I can get out of it. I'm still a neophyte w/ Ceph, and learning how to operate it, set it up ... etc. My limited performance testing to date has been with the "stock" XFS filesystem that ceph-disk builds for the OSDs, basic PG/CRUSH map stuff - and using "dd" acr
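
For reference, a dd run of the sort mentioned above might look like the following; the mount point assumes the default ceph-disk layout for osd.0 and is only an example:

    # Crude sequential-write check against one OSD's XFS data filesystem;
    # oflag=direct keeps the page cache from flattering the result.
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/dd-test bs=4M count=256 oflag=direct
    rm /var/lib/ceph/osd/ceph-0/dd-test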

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
I'm interested in such a configuration, can you share some performance tests/numbers?

Thanks in advance,
Best regards,
*German*

2015-07-01 21:16 GMT-03:00 Shane Gibson:
> It also depends a lot on the size of your cluster ... I have a test cluster I'm standing up right now with 60 nodes - a

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread Shane Gibson
It also depends a lot on the size of your cluster ... I have a test cluster I'm standing up right now with 60 nodes - a total of 600 OSDs each at 4 TB ... If I lose 4 TB - that's a very small fraction of the data. My replicas are going to be spread out across a lot of spindles, and replicating
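
Rough numbers for that 600-OSD example, assuming one full 4 TB OSD is lost and its placement groups re-replicate evenly across the survivors (illustrative arithmetic only):

    # ~4 TB of lost replicas fanned out across the remaining 599 OSDs:
    echo "roughly $(( 4000 / 599 )) GB of recovery writes per surviving OSD"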

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
Ask the other guys on the list, but for me losing 4TB of data is too much. The cluster will still run fine, but at some point you need to recover that disk, and if you lose one server with all the 4TB disks, yeah, it will hurt the cluster. Also take into account that with that ki

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread Quentin Hartman
As with most disk redundancy systems, the concern is usually the amount of time it takes to recover, during which you are vulnerable to another failure. I would assume that is also the concern here.

On Wed, Jul 1, 2015 at 5:54 PM, Nate Curry wrote:
> 4TB is too much to lose? Why would it matter if you

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread Nate Curry
4TB is too much to lose? Why would it matter if you lost one 4TB disk, given the redundancy? Won't it auto-recover from the disk failure?

Nate Curry

On Jul 1, 2015 6:12 PM, "German Anders" wrote:
> I would probably go with smaller OSD disks; 4TB is too much to lose in case of a broken disk, so may

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread German Anders
I would probably go with smaller OSD disks; 4TB is too much to lose in case of a broken disk, so maybe more OSD daemons with smaller disks, maybe 1TB or 2TB each. A 4:1 relationship is good enough, and I also think that a 200G disk for the journals would be OK, so you can save some money there; the OSDs of c
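
For journal sizing, the usual rule of thumb from the Ceph docs is journal size >= 2 x (expected throughput x filestore max sync interval), so a 200G device shared 4:1 leaves ample headroom. A hedged ceph.conf sketch; the values are illustrative, not from the thread:

    [osd]
    # 10 GB journals (osd journal size is in MB): comfortably above
    # 2 * ~120 MB/s * 5 s for a 7.2k-rpm spinner behind the journal.
    osd journal size = 10240
    filestore max sync interval = 5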