Re: [ceph-users] migrating cephfs metadata pool from spinning disk to SSD.

2015-08-04 Thread Shane Gibson
Bob, those numbers would seem to indicate some other problem. One of the biggest culprits of poor performance like that is network trouble. In the last few months, several reported performance issues on this list have turned out to be network-related. Not all, but most.
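A quick way to rule the network in or out is a raw throughput test between two cluster nodes; this is a sketch with hypothetical hostnames:

```shell
# Hypothetical hostnames -- run the server on one OSD node, the client on another.
# On osd-node2:
iperf3 -s

# On osd-node1: expect close to line rate (e.g. ~9.4 Gbit/s on 10GbE).
# Results far below that point at the network rather than Ceph.
iperf3 -c osd-node2 -t 30 -P 4
```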

Re: [ceph-users] CEPH RBD with ESXi

2015-07-20 Thread Shane Gibson
On 7/20/15, 11:52 AM, "Campbell, Bill" &lt;bcampb...@axcess-financial.com&gt; (via ceph-users) wrote: We use VMware with Ceph, however we don't use RBD directly (we have an NFS server which has RBD v…
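The NFS-gateway pattern Bill describes can be sketched roughly as follows; the pool, image, and export names are hypothetical, and sizing/export options will vary:

```shell
# Create and map an RBD image on the NFS gateway host:
rbd create vmware-pool/datastore1 --size 1048576    # ~1 TB image
rbd map vmware-pool/datastore1                      # exposes e.g. /dev/rbd0

# Put a filesystem on it and mount it locally:
mkfs.xfs /dev/rbd0
mkdir -p /export/datastore1
mount /dev/rbd0 /export/datastore1

# /etc/exports entry for the ESXi hosts' subnet, then export it:
#   /export/datastore1  10.0.0.0/24(rw,no_root_squash,sync)
exportfs -ra
```

The ESXi hosts then mount the NFS share as a datastore, so they never speak RBD directly.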

Re: [ceph-users] Dont used fqdns in "monmaptool" and "ceph-mon --mkfs"

2015-07-17 Thread Shane Gibson
On 7/16/15, 9:51 PM, "ceph-users on behalf of Goncalo Borges" wrote: >Once I substituted the fqdn by simply the hostname (without the domain) >it worked. Goncalo, I ran into the same problems too - and ended up bailing on the "ceph-deploy" tools and manually building my clusters ... eventual…
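For reference, a manual monitor bring-up along those lines might look like the sketch below (fsid, hostnames, and addresses are hypothetical); note the bare hostnames rather than FQDNs:

```shell
# Build the initial monmap with short hostnames:
monmaptool --create --fsid $(uuidgen) \
    --add mon1 10.0.1.11:6789 \
    --add mon2 10.0.1.12:6789 \
    --add mon3 10.0.1.13:6789 \
    /tmp/monmap

# Initialize the first monitor's data directory from it:
ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
```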

Re: [ceph-users] Deadly slow Ceph cluster revisited

2015-07-17 Thread Shane Gibson
David - I'm new to Ceph myself, so I can't point out any smoking guns - but your problem "feels" like a network issue. I suggest you check all of your OSD/MON/client network interfaces. Check for errors, and check that they are negotiating the same link speed/type with your switches (if you have LLD…
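The checks above boil down to a few commands per node; the interface name here is a placeholder:

```shell
# Repeat on every OSD/MON/client node; eth0 is a hypothetical interface name.
ip -s link show dev eth0                      # non-zero RX/TX errors or drops are a red flag
ethtool eth0 | egrep -i 'speed|duplex'        # should match the switch port settings
ip link show dev eth0 | grep -o 'mtu [0-9]*'  # MTU must be consistent end to end
```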

Re: [ceph-users] backing Hadoop with Ceph ??

2015-07-16 Thread Shane Gibson
On 7/16/15, 6:55 AM, "Gregory Farnum" wrote: > >Yep! The Hadoop workload is a fairly simple one that is unlikely to >break anything in CephFS. We run a limited set of Hadoop tests on it >every week and provide bindings to set it up; I think the >documentation is a bit lacking here but if you've…
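For context, wiring Hadoop to the CephFS bindings Greg mentions came down to a few `core-site.xml` properties in that era; this is a sketch, and the exact property names depend on the cephfs-hadoop plugin version:

```xml
<!-- Sketch of core-site.xml for the CephFS Hadoop bindings; hostname is hypothetical. -->
<configuration>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>ceph://mon1.example.com:6789/</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
</configuration>
```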

Re: [ceph-users] backing Hadoop with Ceph ??

2015-07-15 Thread Shane Gibson
…ore palatable? ~~shane [2] http://ceph.com/docs/master/radosgw/s3/ [3] https://wiki.apache.org/hadoop/AmazonS3 On 7/15/15, 9:50 AM, "Somnath Roy" &lt;somnath@sandisk.com&gt; wrote: Did you try to integrate ceph+rgw+s3 with Hadoop? Sent from my iPhone On Jul 15, 201…
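One way to try the ceph+rgw+s3 combination Somnath asks about is to point Hadoop's S3 connector at an RGW endpoint; endpoint and credentials below are placeholders:

```shell
# Hypothetical RGW endpoint and keys -- uses Hadoop's s3a connector.
hadoop fs \
  -D fs.s3a.endpoint=rgw.example.com:7480 \
  -D fs.s3a.access.key=MYACCESSKEY \
  -D fs.s3a.secret.key=MYSECRETKEY \
  -ls s3a://mybucket/
```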

[ceph-users] backing Hadoop with Ceph ??

2015-07-15 Thread Shane Gibson
We are in the (very) early stages of considering testing backing Hadoop via Ceph - as opposed to HDFS. I've seen a few very vague references to doing that, but haven't found any concrete info (architecture, configuration recommendations, gotchas, lessons learned, etc...). I did find the ce…

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
Lionel - thanks for the feedback ... inline below ... On 7/2/15, 9:58 AM, "Lionel Bouton" &lt;lionel+c...@bouton.name&gt; wrote: Ouch. These spinning disks are probably a bottleneck: there is regular advice on this list to use one DC SSD for 4 OSDs. You would probably be better off with a ded…

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
On 7/2/15, 9:21 AM, "Nate Curry" &lt;cu...@mosaicatm.com&gt; wrote: Are you using the 4TB disks for the journal? Nate - yes, at the moment the journal is on 4 TB 7200 rpm disks, as are the OSDs. It's what I've got for hardware ... sitting around in 60 servers that I could grab. I realiz…
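For anyone following along, the colocated-journal arrangement corresponds to a ceph.conf fragment roughly like this (the size is illustrative; the usual rule of thumb is 2 × expected throughput × filestore max sync interval):

```ini
# Sketch of journal settings for journals colocated on the OSD disks.
[osd]
osd journal = /var/lib/ceph/osd/$cluster-$id/journal
osd journal size = 10240    ; 10 GB, illustrative
```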

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
…e some performance test/numbers? Thanks in advance, Best regards, German 2015-07-01 21:16 GMT-03:00 Shane Gibson &lt;shane_gib...@symantec.com&gt;: It also depends a lot on the size of your cluster ... I have a test cluster I'm standing up right now with 60 nodes - a total of 60…

Re: [ceph-users] Ceph Journal Disk Size

2015-07-01 Thread Shane Gibson
It also depends a lot on the size of your cluster ... I have a test cluster I'm standing up right now with 60 nodes - a total of 600 OSDs each at 4 TB ... If I lose 4 TB - that's a very small fraction of the data. My replicas are going to be spread out across a lot of spindles, and replicating…

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD

2015-06-26 Thread Shane Gibson
Bruce - I ran into problems with ceph-disk on the same version too - then switched to Hammer (0.94) ... that worked for me. I didn't track the issue down. Any reason you are deploying an older version? On 6/26/15, 2:09 PM, "ceph-users on behalf of Bruce McFarland" &lt;ceph-users-boun...@lists…

Re: [ceph-users] Combining MON & OSD Nodes

2015-06-26 Thread Shane Gibson
For a high-perf cluster - absolutely agree ... but I would suggest that running the MONs as VMs has its own performance challenges to manage carefully as well. If you are on oversubscribed hypervisors, you may end up with exactly the same perf issues impacting the MONs. For a very small non…

Re: [ceph-users] Combining MON & OSD Nodes

2015-06-25 Thread Shane Gibson
For a small deployment this might be ok - but as mentioned, mon logging might be an issue. Consider the following:

* disk resources for mon logging (maybe dedicate a disk to logging, to avoid disk I/O contention with the OSDs)
* CPU resources - some filesystem types for OSDs can eat a lot of CPU…
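Giving mon logging its own spindle is straightforward; a sketch with a hypothetical device name:

```shell
# Dedicate a disk to /var/log/ceph so log writes don't contend with OSD I/O.
mkfs.xfs /dev/sdj
mkdir -p /var/log/ceph
echo '/dev/sdj  /var/log/ceph  xfs  defaults,noatime  0 2' >> /etc/fstab
mount /var/log/ceph
```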

Re: [ceph-users] Mounting cephfs from cluster ip ok but fails from external ip

2015-06-23 Thread Shane Gibson
On 6/23/15, 5:09 AM, "ceph-users on behalf of Gregory Farnum" wrote: >Monitors are bound to a particular IP address. Greg - are you saying the MONs are only able to bind to a single IP address? Despite the fact that most daemons in the *nix-sphere can bind to multiple IP addresses? What's th…

Re: [ceph-users] osd.1 marked down after no pg stats for ~900seconds

2015-06-21 Thread Shane Gibson
Cristian, I'm not sure offhand what's up - but can you increase the logging levels, then rerun the test: http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/ See the "Runtime" section for injecting the logging arguments after starting - or change the {cluster}.conf (e.g. /et…
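The runtime injection from that doc looks like this for the OSD in question (the debug levels are illustrative):

```shell
# Raise logging on the suspect OSD at runtime:
ceph tell osd.1 injectargs '--debug-osd 20 --debug-ms 1'

# ...rerun the test, collect /var/log/ceph/, then drop the levels back down:
ceph tell osd.1 injectargs '--debug-osd 1 --debug-ms 0'
```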

Re: [ceph-users] rbd performance issue - can't find bottleneck

2015-06-19 Thread Shane Gibson
All - I have been following this thread for a bit, and am happy to see how involved, capable, and collaborative this ceph-users community seems to be. It appears there is a fairly strong amount of domain knowledge around the hardware used by many Ceph deployments, with a lot of "thumbs up" a…

[ceph-users] OSD Journal creation ?

2015-06-18 Thread Shane Gibson
All - I am building my first ceph cluster, and doing it "the hard way", manually without the aid of "ceph-deploy". I have successfully built the mon cluster and am now adding OSDs. My main question: How do I prepare the "Journal" prior to the prepare/activate stages of the OSD creation? More…
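One manual approach to the journal question is roughly the following; the device name and OSD id are hypothetical, and the typecode is the Ceph journal partition GUID:

```shell
# 1. Carve a ~10 GB journal partition on the journal device (GPT):
sgdisk --new=1:0:+10G \
       --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb

# 2. Point the OSD's data dir at it:
ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal

# 3. Initialize the journal before activating the OSD:
ceph-osd -i 0 --mkjournal
```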

[ceph-users] best Linux distro for Ceph

2015-06-17 Thread Shane Gibson
Ok - I know this post has the potential to spread to unsavory corners of discussion about "the best linux distro" ... blah blah blah ... please, don't let it go there ... ! I'm seeking some input from people that have been running larger Ceph clusters ... on the order of 100s of physical serve…

Re: [ceph-users] help to new user

2015-06-15 Thread Shane Gibson
Vida - installing Ceph as hosted VMs is a great way to get hands-on experience with a Ceph cluster. It is NOT a good way to run Ceph for any real workload. NOTE that it's critical you structure your virtual disks and virtual network(s) to match how you'd like to run your Ceph workloads…

Re: [ceph-users] Is Ceph right for me?

2015-06-11 Thread Shane Gibson
Alternatively you could just use Git (or some other form of versioning system) ... host your code/files/html/whatever in Git. Make changes to the Git tree - then you can trigger a git pull from your webservers to the local filesystem. This gives you the ability to use branches/versions to control…