Re: [ceph-users] docker + coreos + ceph

2014-09-03 Thread Marco Garcês
Amazing work, will test it as soon as I can! Thanks *Marco Garcês* *#sysadmin* Maputo - Mozambique *[Phone]* +258 84 4105579 *[Skype]* marcogarces On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard wrote: > Oh nasty typo in those release notes. RDB module :) > > Good thing nonetheless! > --

Re: [ceph-users] docker + coreos + ceph

2014-09-03 Thread Sebastien Han
Well done! Gonna test this :) On 03 Sep 2014, at 11:24, Marco Garcês wrote: > Amazing work, will test it as soon as I can! > Thanks > > > Marco Garcês > #sysadmin > Maputo - Mozambique > [Phone] +258 84 4105579 > [Skype] marcogarces > > > On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard

Re: [ceph-users] script for commissioning a node with multiple osds, added to cluster as a whole

2014-09-03 Thread Sebastien Han
Or Ansible: https://github.com/ceph/ceph-ansible On 29 Aug 2014, at 20:24, Olivier DELHOMME wrote: > Hello, > > - Original Message - >> From: "Chad Seys" >> To: ceph-users@lists.ceph.com >> Sent: Friday, 29 August 2014 18:53:19 >> Subject: [ceph-users] script for commissioning a node with mu

Re: [ceph-users] docker + coreos + ceph

2014-09-03 Thread Lorieri
Hi, btw, it was not me who added it to the official release. On 03/09/2014 07:14, "Sebastien Han" wrote: > Well done! Gonna test this :) > > On 03 Sep 2014, at 11:24, Marco Garcês wrote: > > > Amazing work, will test it as soon as I can! > > Thanks > > > > > > Marco Garcês > > #sysadmin > >

Re: [ceph-users] ceph cluster inconsistency keyvaluestore

2014-09-03 Thread Kenneth Waegeman
I can also reproduce it on a new, slightly different setup (also EC on KV and cache) by running ceph pg scrub on a KV pg: the pg then gets the 'inconsistent' status - Message from Kenneth Waegeman - Date: Mon, 01 Sep 2014 16:28:31 +0200 From: Kenneth Waegeman Subjec
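The reproduction step Kenneth describes can be sketched as follows; the pg id is a placeholder, not one from the thread, so substitute an id from your own KV-backed EC pool:

```shell
# Pick a pg id from the KV pool, e.g. via "ceph pg dump" filtered by pool
# number, then trigger a deep check of that pg:
ceph pg scrub 2.0

# Once the scrub completes, the inconsistency should surface here:
ceph health detail | grep inconsistent
ceph pg 2.0 query | grep '"state"'
```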

[ceph-users] Rebuilding OSD in firefly

2014-09-03 Thread Xu (Simon) Chen
Hi all, I recently upgraded to firefly and found out that rebuilding an OSD no longer works the easy way: we used to stop an OSD, wipe its data clean, and start it again and let it refill. With firefly, the OSD just gets stuck after restart and does nothing. It seems that we have to remove and
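The full remove-and-recreate cycle Simon alludes to looks roughly like this. The OSD id, host, and device are placeholders, and this is the generic procedure rather than anything confirmed in the thread:

```shell
# 1. Take the OSD out of the data distribution and stop the daemon:
ceph osd out 3
sudo service ceph stop osd.3

# 2. Remove it from the CRUSH map, the auth database, and the OSD map:
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3

# 3. Recreate it on a freshly wiped disk and let backfill repopulate it:
ceph-deploy osd create --zap-disk node1:/dev/sdb
```

The difference from the old "wipe and restart" trick is that the cluster forgets the OSD entirely, so the recreated daemon joins as a clean peer instead of trying to resume a stale identity.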

[ceph-users] Misdirected client messages

2014-09-03 Thread Maros Vegh
Hello, over the last few weeks we have observed many misdirected client messages in the logs. The messages are similar to this one: 2014-09-03 15:20:55.696752 osd.24 192.168.61.3:6830/25216 234 : [WRN] client.2936377 192.168.61.105:0/983896378 misdirected client.2936377.1:4985727 pg 0.a7459c63 to osd.24 not [5,

[ceph-users] Need help : MDS cluster completely dead !

2014-09-03 Thread Florent Bautista
Hi everyone, I use the Ceph Firefly release. I had an MDS cluster with only one MDS until yesterday, when I tried to add a second one to test multi-MDS. I thought I could get back to one MDS whenever I wanted, but it seems you can't! Both crashed this night, and I am unable to get them back today. They ap

Re: [ceph-users] Need help : MDS cluster completely dead !

2014-09-03 Thread John Spray
Hi Florent, The first thing to do is to turn up the logging on the MDS (if you haven't already) -- set "debug mds = 20" http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/#subsystem-log-and-debug-settings Since you say they appear as 'active' in "ceph status", I assume they are runni
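John's "debug mds = 20" suggestion can be applied in a few ways; the daemon name and socket path below are placeholders for whatever your cluster uses:

```shell
# At runtime, through the MDS admin socket on the MDS host:
ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok config set debug_mds 20

# Or injected from any node with admin credentials:
ceph tell mds.0 injectargs '--debug-mds 20 --debug-ms 1'

# Or persistently, by adding this under [mds] in ceph.conf
# before restarting the daemon:
#   debug mds = 20
```

The admin-socket and injectargs routes only last until the daemon restarts, which is usually what you want for a one-off crash investigation.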

Re: [ceph-users] script for commissioning a node with multiple osds, added to cluster as a whole

2014-09-03 Thread Jay Janardhan
I have used Sébastien's Ansible scripts. They work great :) On Wed, Sep 3, 2014 at 8:42 AM, Sebastien Han wrote: > Or Ansible: https://github.com/ceph/ceph-ansible > > On 29 Aug 2014, at 20:24, Olivier DELHOMME < > olivier.delho...@mines-paristech.fr> wrote: > > > Hello, > > > > - Mail origi

Re: [ceph-users] Need help : MDS cluster completely dead !

2014-09-03 Thread Florent Bautista
Hi John, and thank you for your answer. I "solved" the problem by doing: ceph mds stop 1 So one MDS is marked as "stopping". A few hours later, it is still "stopping" (active process, occasionally consuming CPU). The other one seems to respond fine to clients... Multi-MDS is really really really unsta

[ceph-users] Install from alternate repo

2014-09-03 Thread LaBarre, James (CTR) A6IT
I was trying to install the development version of Ceph (0.84) on a cluster, using ceph-deploy and trying not to have to copy in repo files and other hacks onto the mon/OSD nodes. The problem is, it seems to presume it knows the right URL to install from, and it's not taking the settings from t

Re: [ceph-users] Install from alternate repo

2014-09-03 Thread Alfredo Deza
On Wed, Sep 3, 2014 at 11:18 AM, LaBarre, James (CTR) A6IT wrote: > I was trying to install the development version of Ceph (0.84) on a cluster, > using ceph-deploy and trying not to have to copy in repo files and other > hacks onto the mon/OSD nodes. The problem is, it seems to presume it
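For James's problem, ceph-deploy does accept an explicit repository instead of its built-in URLs. The flags below are from memory of the ceph-deploy of that era, so verify them with `ceph-deploy install --help` on your version; the URLs are placeholders:

```shell
# Point the install at a specific repo and its signing key:
ceph-deploy install --repo-url http://my-mirror.example.com/ceph/ \
    --gpg-url http://my-mirror.example.com/release.asc node1

# For development builds such as 0.84 there is also a --dev switch
# that pulls from the gitbuilder repos for a named branch:
ceph-deploy install --dev=master node1
```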

Re: [ceph-users] Misdirected client messages

2014-09-03 Thread Gregory Farnum
The clients are sending messages to OSDs which are not the primary for the data. That shouldn't happen — clients which don't understand the whole osdmap ought to be gated and prevented from accessing the cluster at all. What version of Ceph are you running, and what clients? (We've seen this in dev

Re: [ceph-users] Fixing mark_unfound_lost revert failure

2014-09-03 Thread Loic Dachary
Hi Craig, I'll try that, thanks for the hint :-) Cheers On 03/09/2014 19:53, Craig Lewis wrote: > The only way I've been able to solve this it to recreate the OSDs that Ceph > wants to probe. It doesn't have to have anything on it, it's probably better > if it doesn't. Even ceph osd lost 2 w

Re: [ceph-users] Uneven OSD usage

2014-09-03 Thread Craig Lewis
ceph osd reweight-by-utilization is ok to use, as long as it's temporary. I've used it while waiting for new hardware to arrive. It adjusts the weight displayed in ceph osd tree, but not the weight used in the crushmap. Yeah, there are two different weights for an OSD. Leave the crushmap weight a
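The two weights Craig distinguishes can be set independently; the OSD id and values here are illustrative:

```shell
# Temporary override (the REWEIGHT column of "ceph osd tree");
# this is what reweight-by-utilization adjusts, and it is reset
# to 1 if the OSD is marked out and comes back in:
ceph osd reweight 12 0.85

# Permanent CRUSH weight, conventionally the disk's size in TB;
# leave this alone unless the hardware actually changes:
ceph osd crush reweight osd.12 1.82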

Re: [ceph-users] ceph can not repair itself after accidental power down, half of pgs are peering

2014-09-03 Thread Craig Lewis
If you're running ntpd, then I believe your clocks were too skewed for the authentication to work. Once ntpd got the clocks syncing, authentication would start working again. You can query ntpd for how skewed the clock is relative to the NTP servers: clewis@ceph2:~$ sudo ntpq -p remote
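A quick way to read the skew Craig is talking about; the threshold quoted in the comment is the monitor default as I recall it, so check your own config:

```shell
# The "offset" column is the skew from each peer in milliseconds.
# cephx authentication starts failing once clocks drift beyond
# "mon clock drift allowed" (default 0.05 s) relative to the monitors.
ntpq -p

# If the skew is large, step the clock immediately rather than
# waiting for ntpd to slew it (service name varies by distro):
sudo service ntp stop && sudo ntpdate pool.ntp.org && sudo service ntp start
```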

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-03 Thread Sebastien Han
Hi Warren, What do you mean exactly by secure erase? At the firmware level, with vendor tools? The SSDs were pretty new, so I don't think we hit that sort of thing. I believe only aged SSDs show this behaviour, but I might be wrong. On 02 Sep 2014, at 18:23, Wang, Warren wrote: > Hi Sebastien,

Re: [ceph-users] Misdirected client messages

2014-09-03 Thread Maros Vegh
Thanks for your reply. We are experiencing these errors on two clusters. The clusters are running firefly 0.80.5 on debian wheezy. The clients are running firefly 0.80.4 on debian wheezy. On all monitors the parameter: mon osd allow primary affinity = false ceph --admin-daemon /var/run/ceph/ce
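Maros's truncated command was presumably querying the monitor admin socket; a sketch of that check, with the daemon name "mon.a" as a placeholder:

```shell
# Confirm the value the running monitor actually holds, rather than
# what ceph.conf says:
ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show \
    | grep primary_affinity
```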

Re: [ceph-users] Misdirected client messages

2014-09-03 Thread Ilya Dryomov
On Thu, Sep 4, 2014 at 12:18 AM, Maros Vegh wrote: > Thanks for your reply. > > We are experiencing these errors on two clusters. > The clusters are running firefly 0.80.5 on debian wheezy. > The clients are running firefly 0.80.4 on debian wheezy. > > On all monitors the parameter: > mon osd all

Re: [ceph-users] Misdirected client messages

2014-09-03 Thread Maros Vegh
The Ceph filesystem is mounted via the kernel client. The clients are running this kernel: 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u1 x86_64 GNU/Linux Maros On 3. 9. 2014 22:35, Ilya Dryomov wrote: On Thu, Sep 4, 2014 at 12:18 AM, Maros Vegh wrote: Thanks for your reply. We are experiencing the

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-03 Thread Cedric Lemarchand
On 03/09/2014 22:11, Sebastien Han wrote: > Hi Warren, > > What do you mean exactly by secure erase? At the firmware level, with vendor > tools? > The SSDs were pretty new, so I don't think we hit that sort of thing. I believe > only aged SSDs show this behaviour, but I might be wrong. I think
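For reference, the usual way to issue an ATA Secure Erase from Linux is with hdparm. The device and password are placeholders, and this irreversibly wipes the drive, so triple-check the target first:

```shell
# The drive must not be in the "frozen" security state;
# check with: hdparm -I /dev/sdX | grep frozen
# (a suspend/resume cycle often unfreezes it).

# Set a temporary security password, then erase:
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX
```

On SSDs this resets the flash translation layer, which is why it can restore write performance on drives that have been filled and rewritten for a long time.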

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-03 Thread Cedric Lemarchand
On 03/09/2014 22:11, Sebastien Han wrote: > Hi Warren, > > What do you mean exactly by secure erase? At the firmware level, with vendor > tools? > The SSDs were pretty new, so I don't think we hit that sort of thing. I believe > only aged SSDs show this behaviour, but I might be wrong. Sorry I

Re: [ceph-users] Ceph monitor load, low performance

2014-09-03 Thread pawel . orzechowski
Hello Ladies and Gentlemen ;-) The reason for the problem was a missing battery-backed cache. After we installed it, the load is even across all OSDs. Thanks Pawel --- Paweł Orzechowski pawel.orzechow...@budikom.net

Re: [ceph-users] Ceph monitor load, low performance

2014-09-03 Thread Mark Nelson
On 09/03/2014 04:34 PM, pawel.orzechow...@budikom.net wrote: Hello Ladies and Gentlemen ;-) The reason for the problem was a missing battery-backed cache. After we installed it, the load is even across all OSDs. Glad to hear it was that simple! :) Mark Thanks Pawel --- Paweł Orzechow

[ceph-users] Cache pool - step by step guide

2014-09-03 Thread Andrei Mikhailovsky
Hello guys, I was wondering if someone could point me in the right direction to a step-by-step guide on setting up a cache pool. I've seen http://ceph.com/docs/firefly/dev/cache-pool/. However, it makes no mention of the first steps one needs to take. For instance, I've got my ssd di

Re: [ceph-users] I fail to add a monitor in a ceph cluster

2014-09-03 Thread Craig Lewis
"monclient: hunting for new mon" happens whenever the monmap changes. It will hang if there's no quorum. I haven't done this manually in a long time, so I'll refer to the Chef recipes. The recipe doesn't do the 'ceph-mon add', it just starts the daemon up. Try: sudo ceph-mon -i gail --mkfs --m
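Craig's suggested sequence, filled out a little; "gail" is the hostname from the thread, while the monmap and keyring paths are typical staging locations rather than anything the recipe mandates:

```shell
# Initialize the new monitor's data directory from the current
# monmap and the mon. keyring:
sudo ceph-mon -i gail --mkfs \
    --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

# Then just start the daemon; as Craig notes, the recipe relies on
# the new monitor announcing itself to the existing quorum rather
# than running an explicit "ceph mon add":
sudo service ceph start mon.gail
```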

Re: [ceph-users] Cache pool - step by step guide

2014-09-03 Thread Vladislav Gorbunov
Do you mix SATA and SSD disks within the same server? Read this: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ When you have separate pools for SATA and SSD, configure the cache pool: ceph osd tier add satapool ssdpool ceph osd tier cache-mode ssdpool writeback
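Continuing from where Vladislav's message cuts off, a minimal writeback tier setup looks roughly like this. The pool names come from his message, but the hit-set and sizing values are illustrative placeholders you must tune for your hardware:

```shell
# Attach the SSD pool as a writeback cache in front of the SATA pool:
ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd tier set-overlay satapool ssdpool

# The cache tier needs hit-set tracking to decide what to promote/evict:
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
ceph osd pool set ssdpool hit_set_period 3600

# Cap the cache size so flushing/eviction kicks in before it fills:
ceph osd pool set ssdpool target_max_bytes 100000000000
```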