Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
> That later change would have _increased_ the number of recommended PG, not
> decreased it.
Weird as our Giant health status was ok before upgrading to Hammer…
> With your cluster 2048 PGs total (all pools combined!) would be the sweet
> spot, see:
>
> http://ceph.com/pgcalc/
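For readers who cannot reach pgcalc, the rule of thumb behind it can be sketched in a few lines. This is the commonly cited heuristic (~100 PGs per OSD across all pools, divided by the replica count, rounded up to a power of two), not the exact pgcalc algorithm, and the 61-OSD example is illustrative:

```shell
#!/bin/bash
# Rough sketch of the rule of thumb behind http://ceph.com/pgcalc/:
# aim for ~100 PGs per OSD across all pools, divide by the replica
# count, and round up to the next power of two. Heuristic only.
suggested_total_pgs() {
  local osds=$1 size=$2 target=${3:-100}
  local raw=$(( (osds * target + size - 1) / size ))  # ceiling division
  local pgs=1
  while (( pgs < raw )); do pgs=$(( pgs * 2 )); done
  echo "$pgs"
}

# e.g. ~61 OSDs with 3x replication -> 2034 -> rounds up to 2048
suggested_total_pgs 61 3
```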

Re: [ceph-users] Ceph repo - RSYNC?

2015-04-16 Thread Wido den Hollander
On 15-04-15 18:17, Paul Mansfield wrote:
>
> Sorry for starting a new thread, I've only just subscribed to the list
> and the archive on the mail listserv is far from complete at the moment.
>
No problem! It's on my radar to come up with a proper mirror system for Ceph. A simple Bash script whi

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
On 16/04/2015, at 01.48, Steffen W Sørensen wrote:
>
> Also our calamari web UI won't authenticate anymore, can’t see any issues in
> any log under /var/log/calamari, any hints on what to look for are
> appreciated, TIA!
Well this morning it will authenticate me, but seems calamari can’t talk t

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Christian Balzer
On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote:
> > That later change would have _increased_ the number of recommended PG,
> > not decreased it.
> Weird as our Giant health status was ok before upgrading to Hammer…
I'm pretty sure the "too many" check was added around then, and the

Re: [ceph-users] Ceph site is very slow

2015-04-16 Thread unixkeeper
Is it still under DDoS attack? Is there a mirror site where we can get the docs & guides? Thanks a lot.
On Wed, Apr 15, 2015 at 11:32 PM, Gregory Farnum wrote:
> People are working on it but I understand there was/is a DoS attack going
> on. :/
> -Greg
>
> On Wed, Apr 15, 2015 at 1:50 AM Ignazio Cassano
> wrote

Re: [ceph-users] Ceph site is very slow

2015-04-16 Thread Vikhyat Umrao
I hope this will help you: http://docs.ceph.com/docs/master/
Regards,
Vikhyat
On 04/16/2015 02:39 PM, unixkeeper wrote:
> Is it still under DDoS attack? Is there a mirror site where we can get the
> docs & guides? Thanks a lot.
> On Wed, Apr 15, 2015 at 11:32 PM, Gregory Farnum wrote:

Re: [ceph-users] Motherboard recommendation?

2015-04-16 Thread Mohamed Pakkeer
Hi Nick, thanks for your reply. There is a clear picture of the hardware requirement for replication (1 GHz per OSD), but we can't find any document with hardware recommendations for erasure coding. I read the Mark Nelson report, but still some erasure-coding testing shows 100% CPU utiliza

Re: [ceph-users] Motherboard recommendation?

2015-04-16 Thread Nick Fisk
Hi Mohamed, I asked Mark the exact same question about his report; on his test hardware he had slightly less than 1 GHz per OSD, so he was fairly sure the guideline was still reasonably accurate. However, it's hard to come up with an exact figure as the CPU usage will change with the varying K/M
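The ~1 GHz-per-OSD guideline is easy to sanity-check with arithmetic. The hardware figures below are hypothetical, not Nick's or Mark's test machines:

```shell
#!/bin/bash
# Quick sanity check of the ~1 GHz-per-OSD rule of thumb for replicated
# pools. The hardware numbers below are hypothetical examples.
max_osds_for_host() {
  local sockets=$1 cores=$2 mhz_per_core=$3 mhz_per_osd=${4:-1000}
  echo $(( sockets * cores * mhz_per_core / mhz_per_osd ))
}

# A dual-socket, 8-core, 2000 MHz box has 32000 MHz aggregate clock,
# suggesting ~32 OSDs -- fewer if erasure coding raises per-OSD CPU use.
max_osds_for_host 2 8 2000
```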

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-16 Thread Steffen W Sørensen
> On 16/04/2015, at 11.09, Christian Balzer wrote:
>
> On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote:
>>> That later change would have _increased_ the number of recommended PG,
>>> not decreased it.
>> Weird as our Giant health status was ok before upgrading to Hammer…
>>
> I'm

Re: [ceph-users] Rados Gateway and keystone

2015-04-16 Thread ghislain.chevalier
Hi, I finally configured a CloudBerry profile by setting what seems to be the right endpoint for object storage according to the OpenStack environment: myrgw:myport/swift/v1. I got a “204 no content” error even though 2 containers, with objects in them, had previously been created by a swift operation.

Re: [ceph-users] mds crashing

2015-04-16 Thread Adam Tygart
(Adding back to the list.) We've not seen any slow requests anywhere near that far behind. Leading up to the crash, the furthest behind I saw any request was ~90 seconds. Here is the cluster log leading up to the mds crashes: http://people.beocat.cis.ksu.edu/~mozes/ceph-mds-crashes-20150415.log
-- Adam

Re: [ceph-users] Ceph repo - RSYNC?

2015-04-16 Thread Paul Mansfield
On 16/04/15 09:55, Wido den Hollander wrote:
> It's on my radar to come up with a proper mirror system for Ceph. A
> simple Bash script which is in the Git repo which you can use to sync
> all Ceph packages and downloads.
I've now set up a mirror of ceph/rpm-hammer/rhel7 for our internal use and a

Re: [ceph-users] Ceph repo - RSYNC?

2015-04-16 Thread Wido den Hollander
On 16-04-15 15:11, Paul Mansfield wrote:
> On 16/04/15 09:55, Wido den Hollander wrote:
>> It's on my radar to come up with a proper mirror system for Ceph. A
>> simple Bash script which is in the Git repo which you can use to sync
>> all Ceph packages and downloads.
>
> I've now set up a mirror
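A minimal sketch of the kind of sync script being discussed, assuming the upstream mirror exposes an rsync endpoint — the host and module path below are illustrative placeholders, not a confirmed layout:

```shell
#!/bin/bash
# Minimal sketch of a Ceph package mirror sync. Assumes the upstream
# mirror exposes rsync; the endpoint and paths below are illustrative.
set -euo pipefail

SRC="rsync://eu.ceph.com/ceph/rpm-hammer/rhel7/"   # hypothetical endpoint
DEST="/srv/mirror/ceph/rpm-hammer/rhel7/"

mkdir -p "$DEST"
# -a archive mode, -v verbose, --delete drops packages removed upstream
rsync -av --delete "$SRC" "$DEST"
```

Pointing ceph-deploy or yum at the local $DEST then avoids hitting ceph.com on every install.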

[ceph-users] Ceph.com

2015-04-16 Thread Patrick McGarry
Hey cephers, As most of you have no doubt noticed, ceph.com has been having some...er..."issues" lately. Unfortunately this is some of the holdover infrastructure stuff from being a startup without a big-boy ops plan. The current setup has ceph.com sharing a host with some of the nightly build st

Re: [ceph-users] Ceph.com

2015-04-16 Thread Ferber, Dan
Thanks for working on this, Patrick. I have looked for a mirror that I can point all the ceph.com references to in /usr/lib/python2.6/site-packages/ceph_deploy/hosts/centos/install.py, so I can get ceph-deploy to work. I tried eu.ceph.com but it does not work for this.
Dan Ferber
Software Defi

Re: [ceph-users] Ceph.com

2015-04-16 Thread Sage Weil
We've fixed it so that 404 handling isn't done by wordpress/php and things are muuuch happier. We've also moved all of the git stuff to git.ceph.com. There is a redirect from http://ceph.com/git to git.ceph.com (tho no https on the new site yet) and a proxy for git://ceph.com. Please let us

Re: [ceph-users] Ceph.com

2015-04-16 Thread Chris Armstrong
Thanks for the update, Patrick. Our Docker builds were failing due to the mirror being down. I appreciate being able to check the mailing list and quickly see what's going on!
Chris
On Thu, Apr 16, 2015 at 11:28 AM, Patrick McGarry wrote:
> Hey cephers,
>
> As most of you have no doubt noticed,

[ceph-users] switching journal location

2015-04-16 Thread Deneau, Tom
If my cluster is quiet and on one node I want to switch the location of the journal from the default location to a file on an SSD drive (or vice versa), what is the quickest way to do that? Can I make a soft link to the new location and do it without restarting the OSDs?
-- Tom Deneau, AMD

Re: [ceph-users] switching journal location

2015-04-16 Thread LOPEZ Jean-Charles
Hi Tom, you will have to stop the OSD, flush the existing journal to ensure data consistency at the OSD level, and then switch over to the new journal location (initialise the journal, then start the OSD). Visit this link for the step-by-step guide from Sébastien: http://lists.ceph.com/pipermail/ceph-users-c
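The stop/flush/switch/initialise sequence above can be sketched as a shell session. The OSD id, SSD path, and init-script invocation are placeholders for your environment; `--flush-journal` and `--mkjournal` are the relevant ceph-osd options:

```shell
#!/bin/bash
# Sketch of moving an OSD journal, following the sequence described
# above. OSD id, journal path, and service commands are placeholders.
ID=0
NEW_JOURNAL=/ssd/osd.$ID.journal

sudo service ceph stop osd.$ID                               # 1. stop the OSD
sudo ceph-osd -i $ID --flush-journal                         # 2. flush the old journal
sudo ln -sf $NEW_JOURNAL /var/lib/ceph/osd/ceph-$ID/journal  # 3. point at the new location
sudo ceph-osd -i $ID --mkjournal                             # 4. initialise the new journal
sudo service ceph start osd.$ID                              # 5. restart the OSD
```

So a plain soft link alone is not enough: the OSD must be stopped and the journal flushed and re-created, or the OSD will come up with an inconsistent journal.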

Re: [ceph-users] ceph on Debian Jessie stopped working

2015-04-16 Thread Gregory Farnum
On Wed, Apr 15, 2015 at 9:31 AM, Chad William Seys wrote:
> Hi All,
> Earlier ceph on Debian Jessie was working. Jessie is running 3.16.7.
>
> Now when I modprobe rbd, no /dev/rbd appear.
>
> # dmesg | grep -e rbd -e ceph
> [ 15.814423] Key type ceph registered
> [ 15.814461] libceph: loade
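One likely explanation worth checking (an editorial note, not from the truncated reply): loading the rbd module alone never creates /dev/rbd nodes; they only appear once an image is actually mapped. A sketch, with placeholder pool and image names:

```shell
# /dev/rbd* device nodes appear on "rbd map", not on modprobe.
# Pool and image names below are placeholders.
sudo modprobe rbd
lsmod | grep rbd                 # confirm the module loaded
sudo rbd map mypool/myimage      # this is the step that creates /dev/rbd0
ls /dev/rbd*
```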

Re: [ceph-users] OSDs not coming up on one host

2015-04-16 Thread Gregory Farnum
The monitor looks like it's not generating a new OSDMap including the booting OSDs. I could say with more certainty what's going on with the monitor log file, but I'm betting you've got one of the noin or noup family of flags set. I *think* these will be output in "ceph -w" or in "ceph osd dump", a
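Checking for (and clearing) those flags looks roughly like this, assuming one of them does turn out to be set:

```shell
# Look for noin/noup (and similar) cluster flags; they also show up
# in the health output of "ceph -s".
ceph osd dump | grep flags

# If e.g. noup is set, booting OSDs are never marked up; clear with:
ceph osd unset noup
ceph osd unset noin
```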

Re: [ceph-users] ceph-osd failure following 0.92 -> 0.94 upgrade

2015-04-16 Thread Gregory Farnum
Yeah, you're going to have to modify one of the journal readers. I don't remember exactly what the issue is so I don't know which direction would be easiest — but you (unlike the Ceph upstream) can have high confidence that the encoding bug is caused by v0.92 rather than some other bug or hardware

Re: [ceph-users] Getting placement groups to place evenly (again)

2015-04-16 Thread Gregory Farnum
On Sat, Apr 11, 2015 at 12:11 PM, J David wrote:
> On Thu, Apr 9, 2015 at 7:20 PM, Gregory Farnum wrote:
>> Okay, but 118/85 = 1.38. You say you're seeing variance from 53%
>> utilization to 96%, and 53%*1.38 = 73.5%, which is *way* off your
>> numbers.
>
> 53% to 96% is with all weights set to d

Re: [ceph-users] metadata management in case of ceph object storage and ceph block storage

2015-04-16 Thread Josef Johansson
Hi, maybe others had your mail go to junk as well; that is at least why I did not see it. As for your question, which I’m not sure I understand completely: in Ceph you have three distinct types of services:
Mon, Monitors
MDS, Metadata Servers
OSD, Object Storage Devices
And some other c

[ceph-users] Cache-tier problem when cache becomes full

2015-04-16 Thread Xavier Serrano
Hello all, we are trying to run some tests on a cache-tier Ceph cluster, but we are encountering serious problems which eventually render the cluster unusable. We are apparently doing something wrong, but we have no idea what it could be. We'd really appreciate it if someone could point us what to

Re: [ceph-users] Cache-tier problem when cache becomes full

2015-04-16 Thread LOPEZ Jean-Charles
Hi Xavier, see comments inline.
JC
> On 16 Apr 2015, at 23:02, Xavier Serrano wrote:
>
> Hello all,
>
> We are trying to run some tests on a cache-tier Ceph cluster, but
> we are encountering serious problems which eventually render the cluster
> unusable.
>
> We are apparently doing something
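One common cause of a cache tier filling up until the cluster stalls (an editorial sketch, not necessarily what JC's inline comments say) is that the pool's eviction targets were never configured; without them the tier has no notion of "full" and never flushes or evicts. The pool name and sizes below are placeholders:

```shell
# Common cache-tier sizing knobs; without target_max_bytes/objects the
# cache never starts flushing or evicting. Name and values are examples.
ceph osd pool set cache-pool target_max_bytes 100000000000   # ~100 GB cap
ceph osd pool set cache-pool target_max_objects 1000000
ceph osd pool set cache-pool cache_target_dirty_ratio 0.4    # start flushing at 40%
ceph osd pool set cache-pool cache_target_full_ratio 0.8     # start evicting at 80%
```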