Amazing work, will test it as soon as I can!
Thanks
*Marco Garcês*
*#sysadmin*
Maputo - Mozambique
*[Phone]* +258 84 4105579
*[Skype]* marcogarces
On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard
wrote:
> Oh nasty typo in those release notes. RDB module :)
>
> Good thing nonetheless!
> --
Well done! Gonna test this :)
On 03 Sep 2014, at 11:24, Marco Garcês wrote:
> Amazing work, will test it as soon as I can!
> Thanks
>
>
> Marco Garcês
> #sysadmin
> Maputo - Mozambique
> [Phone] +258 84 4105579
> [Skype] marcogarces
>
>
> On Wed, Sep 3, 2014 at 3:20 AM, David Moreau Simard
Or Ansible: https://github.com/ceph/ceph-ansible
On 29 Aug 2014, at 20:24, Olivier DELHOMME
wrote:
> Hello,
>
> - Original Message -
>> From: "Chad Seys"
>> To: ceph-users@lists.ceph.com
>> Sent: Friday, 29 August 2014 18:53:19
>> Subject: [ceph-users] script for commissioning a node with mu
Hi,
Btw, it was not me who added it to the official release.
On 03/09/2014 07:14, "Sebastien Han" wrote:
> Well done! Gonna test this :)
>
> On 03 Sep 2014, at 11:24, Marco Garcês wrote:
>
> > Amazing work, will test it as soon as I can!
> > Thanks
> >
> >
> > Marco Garcês
> > #sysadmin
> >
I can also reproduce it on a new, slightly different setup (also EC on
KV and cache) by running ceph pg scrub on a KV pg: the pg then
gets the 'inconsistent' status.
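For reference, the reproduction boils down to something like this (the pool prefix 2. and pg 2.1a are just example ids, pick one that actually lives on the KV OSDs):

  ceph pg dump pgs_brief | grep '^2\.'     # list the pgs of the EC pool on KV
  ceph pg scrub 2.1a                       # scrub one of them
  ceph health detail | grep inconsistent   # the pg now shows up as inconsistent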
- Message from Kenneth Waegeman -
Date: Mon, 01 Sep 2014 16:28:31 +0200
From: Kenneth Waegeman
Subjec
Hi all,
I recently upgraded to firefly and found out that rebuilding an OSD no
longer works the easy way: we used to stop an OSD, wipe the data clean, and
start it again and let it refill. With firefly, the OSD just stays stuck
after restart and does nothing.
It seems that we have to remove and
Hello,
over the last few weeks we have observed many misdirected client messages in the logs.
The messages are similar to this one:
2014-09-03 15:20:55.696752 osd.24 192.168.61.3:6830/25216 234 : [WRN]
client.2936377 192.168.61.105:0/983896378 misdirected
client.2936377.1:4985727 pg 0.a7459c63 to osd.24 not [5,
Hi everyone,
I use Ceph Firefly release.
I had an MDS cluster with only one MDS until yesterday, when I tried to add
a second one to test multi-MDS. I thought I could get back to one MDS
whenever I wanted, but it seems we can't!
Both crashed this night, and I am unable to get them back today.
They ap
Hi Florent,
The first thing to do is to turn up the logging on the MDS (if you
haven't already) -- set "debug mds = 20"
http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/#subsystem-log-and-debug-settings
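If it helps, something along these lines should do it (the mds id "0" is just an example, use whatever id your daemons have):

  # in ceph.conf on the MDS hosts
  [mds]
      debug mds = 20

  # or inject it into a running daemon without a restart
  ceph tell mds.0 injectargs '--debug-mds 20'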
Since you say they appear as 'active' in "ceph status", I assume they
are runni
I have used Sébastien's Ansible scripts. They work great :)
On Wed, Sep 3, 2014 at 8:42 AM, Sebastien Han
wrote:
> Or Ansible: https://github.com/ceph/ceph-ansible
>
> On 29 Aug 2014, at 20:24, Olivier DELHOMME <
> olivier.delho...@mines-paristech.fr> wrote:
>
> > Hello,
> >
> > - Mail origi
Hi John and thank you for your answer.
I "solved" the problem doing : ceph mds stop 1
So one MDS is marked as "stopping". A few hours later, it is still
"stopping" (active process, consuming CPU sometimes).
So the other seems to respond fine to clients...
Multi-MDS is really really really unsta
I was trying to install the development version of Ceph (0.84) on a cluster,
using ceph-deploy and trying not to have to copy in repo files and other hacks
onto the mon/OSD nodes. The problem is, it seems to presume it knows the right
URL to install from, and it's not taking the settings from t
On Wed, Sep 3, 2014 at 11:18 AM, LaBarre, James (CTR) A6IT
wrote:
> I was trying to install the development version of Ceph (0.84) on a cluster,
> using ceph-deploy and trying not to have to copy in repo files and other
> hacks onto the mon/OSD nodes. The problem is, it seems to presume it
The clients are sending messages to OSDs which are not the primary for
the data. That shouldn't happen — clients which don't understand the
whole osdmap ought to be gated and prevented from accessing the
cluster at all. What version of Ceph are you running, and what
clients?
(We've seen this in dev
Hi Craig,
I'll try that, thanks for the hint :-)
Cheers
On 03/09/2014 19:53, Craig Lewis wrote:
> The only way I've been able to solve this is to recreate the OSDs that Ceph
> wants to probe. It doesn't have to have anything on it, it's probably better
> if it doesn't. Even ceph osd lost 2 w
ceph osd reweight-by-utilization is ok to use, as long as it's temporary.
I've used it while waiting for new hardware to arrive. It adjusts the
weight displayed in ceph osd tree, but not the weight used in the crushmap.
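Roughly, the difference looks like this (osd.12 and the numbers are just examples):

  ceph osd reweight-by-utilization 120   # only touches the temporary override
  ceph osd reweight 12 0.8               # same override, set by hand (the REWEIGHT column in 'ceph osd tree')
  ceph osd crush reweight osd.12 1.82    # the crushmap weight, usually left at the disk size in TB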
Yeah, there are two different weights for an OSD. Leave the crushmap
weight a
If you're running ntpd, then I believe your clocks were too skewed for the
authentication to work. Once ntpd got the clocks syncing, authentication
would start working again.
You can query ntpd for how skewed the clock is relative to the NTP servers:
clewis@ceph2:~$ sudo ntpq -p
remote
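If the offsets look large, the monitors will also tell you directly (the drift tolerance is 0.05 s by default, if I remember correctly):

  ceph health detail | grep -i 'clock skew'   # monitors warn about skewed peers
  sudo ntpd -gq                               # one-shot resync if a clock is way off (stop ntpd first)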
Hi Warren,
What do you mean exactly by secure erase? At the firmware level with the
manufacturer's software?
The SSDs were pretty new so I don't think we hit that sort of thing. I believe
that only aged SSDs have this behaviour, but I might be wrong.
On 02 Sep 2014, at 18:23, Wang, Warren wrote:
> Hi Sebastien,
Thanks for your reply.
We are experiencing these errors on two clusters.
The clusters are running firefly 0.80.5 on debian wheezy.
The clients are running firefly 0.80.4 on debian wheezy.
On all monitors the parameter:
mon osd allow primary affinity = false
ceph --admin-daemon /var/run/ceph/ce
On Thu, Sep 4, 2014 at 12:18 AM, Maros Vegh wrote:
> Thanks for your reply.
>
> We are experiencing these errors on two clusters.
> The clusters are running firefly 0.80.5 on debian wheezy.
> The clients are running firefly 0.80.4 on debian wheezy.
>
> On all monitors the parameter:
> mon osd all
The ceph fs is mounted via the kernel client.
The clients are running on this kernel:
3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u1 x86_64 GNU/Linux
Maros
On 3. 9. 2014 22:35, Ilya Dryomov wrote:
On Thu, Sep 4, 2014 at 12:18 AM, Maros Vegh wrote:
Thanks for your reply.
We are experiencing the
On 03/09/2014 22:11, Sebastien Han wrote:
> Hi Warren,
>
> What do you mean exactly by secure erase? At the firmware level with the
> manufacturer's software?
> The SSDs were pretty new so I don't think we hit that sort of thing. I believe
> that only aged SSDs have this behaviour, but I might be wrong.
I think
On 03/09/2014 22:11, Sebastien Han wrote:
> Hi Warren,
>
> What do you mean exactly by secure erase? At the firmware level with the
> manufacturer's software?
> The SSDs were pretty new so I don't think we hit that sort of thing. I believe
> that only aged SSDs have this behaviour, but I might be wrong.
Sorry I
Hello Ladies and Gentlemen ;-)
The reason for the problem was the lack of a battery-backed cache. After
we installed it, the load is even on all OSDs.
Thanks
Pawel
---
Paweł Orzechowski
pawel.orzechow...@budikom.net
On 09/03/2014 04:34 PM, pawel.orzechow...@budikom.net wrote:
Hello Ladies and Gentlemen ;-)
The reason for the problem was the lack of a battery-backed cache. After
we installed it, the load is even on all OSDs.
Glad to hear it was that simple! :)
Mark
Thanks
Pawel
---
Paweł Orzechow
Hello guys,
I was wondering if someone could point me in the right direction for a step-by-step
guide on setting up a cache pool. I've seen
http://ceph.com/docs/firefly/dev/cache-pool/. However, it doesn't mention
the first steps that one needs to take.
For instance, I've got my ssd di
"monclient: hunting for new mon" happens whenever the monmap changes. It
will hang if there's no quorum.
I haven't done this manually in a long time, so I'll refer to the Chef
recipes. The recipe doesn't do the 'ceph-mon add', it just starts the
daemon up.
Try:
sudo ceph-mon -i gail --mkfs --m
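I may be misremembering the exact flags, but the manual sequence usually looks something like this (using "gail" from your naming, the paths are just examples):

  ceph mon getmap -o /tmp/monmap           # current monmap from the running cluster
  ceph auth get mon. -o /tmp/mon.keyring   # the mon. key
  sudo ceph-mon -i gail --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  # then start the ceph-mon daemon on gail and it should join the quorum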
You mix SATA and SSD disks within the same server? Read this:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
When you have different pools for SATA and SSD, configure the cache pool:
ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
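If I remember correctly, you also need to point clients at the cache tier and give it a hit set, roughly (the hit set values below are just example settings):

  ceph osd tier set-overlay satapool ssdpool
  ceph osd pool set ssdpool hit_set_type bloom
  ceph osd pool set ssdpool hit_set_count 1
  ceph osd pool set ssdpool hit_set_period 3600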