On Monday, 20 May 2013 at 00:06 +0200, Olivier Bonvalet wrote:
> On Tuesday, 7 May 2013 at 15:51 +0300, Dzianis Kahanovich wrote:
> > I have 4 scrub errors (3 PGs with "found clone without head") on one OSD, and
> > they are not repairing. How can I repair this without re-creating the OSD?
> >
> > Now it "easy" to cl
Olivier Bonvalet writes:
>
> On Monday, 20 May 2013 at 00:06 +0200, Olivier Bonvalet wrote:
>> On Tuesday, 7 May 2013 at 15:51 +0300, Dzianis Kahanovich wrote:
>>> I have 4 scrub errors (3 PGs with "found clone without head") on one OSD, and
>>> they are not repairing. How can I repair this without re-creating the OSD?
Hi,
I'm at the ADD/REMOVE OSDS step.
I want to use '--fs-type btrfs', but every single command says:
'ceph-deploy: error: unrecognized arguments: --fstype btrfs'
If I use it without this parameter, an xfs disk is created.
What am I doing wrong?
--
Best regards,
Markus Goldberg
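For comparison, a minimal sketch of preparing an OSD with an explicit filesystem, assuming a ceph-deploy release that accepts --fs-type (note the spelling: the error above complains about "--fstype", without the second hyphen, so the flag may simply have been mistyped) and a hypothetical host:disk pair node1:sdb:

$ ceph-deploy osd create --fs-type btrfs node1:sdb
# or just the preparation step on its own:
$ ceph-deploy osd prepare --fs-type btrfs node1:sdb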
---
I have been reading the architecture section of the Ceph documentation. One thing that
has not been clear to me is how data HA works when we encounter an OSD or server
failure. Does the CRUSH algorithm recalculate based on the new cluster map and
point the data to the 2nd or 3rd replica for existing data blocks?
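Broadly, yes: when an OSD fails or is marked out, a new osdmap is published, CRUSH recomputes placements, and clients redirect to the surviving replicas while the cluster backfills. A minimal sketch for watching this on a test cluster (OSD id 3 is hypothetical):

$ ceph osd out 3          # publish a new map with osd.3 excluded from placement
$ ceph -w                 # watch PGs go active+remapped and recover
$ ceph pg dump | grep remapped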
It must be a FAQ or something simple, but so far I cannot find how to change
the file system size of the cephfs mount. I have over 1TB
However, I could not find any parameter, either in the server [mds] section
or on the client mount, to increase it to, say, 100GB.
I looked here for server configuration
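For comparing what the mount reports against what the cluster actually provides, a minimal sketch (the mount point /mnt/cephfs is hypothetical):

$ df -h /mnt/cephfs       # size as reported by the client
$ rados df                # per-pool usage as the cluster sees it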
Can you post your ceph.log for the period that includes all of these errors?
-Sam
On Wed, May 22, 2013 at 5:39 AM, Dzianis Kahanovich
wrote:
> Olivier Bonvalet writes:
>>
>> On Monday, 20 May 2013 at 00:06 +0200, Olivier Bonvalet wrote:
>>> On Tuesday, 7 May 2013 at 15:51 +0300, Dzianis Kahanovich wrote:
Is it enough?
# tail -n500 -f /var/log/ceph/osd.28.log | grep -A5 -B5 'found clone without head'
2013-05-22 15:43:09.308352 7f707dd64700 0 log [INF] : 9.105 scrub ok
2013-05-22 15:44:21.054893 7f707dd64700 0 log [INF] : 9.451 scrub ok
2013-05-22 15:44:52.898784 7f707cd62700 0 log [INF] : 9.78
What version are you running?
-Sam
On Wed, May 22, 2013 at 11:25 AM, Olivier Bonvalet wrote:
> Is it enough?
>
> # tail -n500 -f /var/log/ceph/osd.28.log | grep -A5 -B5 'found clone without head'
> 2013-05-22 15:43:09.308352 7f707dd64700 0 log [INF] : 9.105 scrub ok
> 2013-05-22 15:44:21.054
0.61-11-g3b94f03 (0.61-1.1), but the bug occurred with bobtail.
On Wednesday, 22 May 2013 at 12:00 -0700, Samuel Just wrote:
> What version are you running?
> -Sam
>
> On Wed, May 22, 2013 at 11:25 AM, Olivier Bonvalet
> wrote:
> > Is it enough?
> >
> > # tail -n500 -f /var/log/ceph/osd.28.log
You need to find out where the third copy is. Corrupt it. Then let repair
copy the data from a good copy.
$ ceph pg map 19.1b
You should see something like this:
osdmap e158 pg 19.1b (19.1b) -> up [13, 22, xx] acting [13, 22, xx]
The osd xx that is NOT 13 or 22 has the corrupted copy.
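A minimal sketch of that sequence, with a hypothetical osd.xx and the rbd object prefix mentioned in this thread; the paths assume the default filestore layout, and this is an illustration, not a definitive recipe:

$ ceph pg map 19.1b                          # find the OSD holding the bad copy
$ service ceph stop osd.xx                   # on that OSD's host
$ find /var/lib/ceph/osd/ceph-xx/current/19.1b_head \
       -name '*rb.0.15c26.238e1f29*'         # locate (and move aside) the bad clone
$ service ceph start osd.xx
$ ceph pg repair 19.1b                       # repair copies from a good replica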
rb.0.15c26.238e1f29
Has that rbd volume been removed?
-Sam
On Wed, May 22, 2013 at 12:18 PM, Olivier Bonvalet wrote:
> 0.61-11-g3b94f03 (0.61-1.1), but the bug occurred with bobtail.
>
>
>> On Wednesday, 22 May 2013 at 12:00 -0700, Samuel Just wrote:
>> What version are you running?
>> -Sam
>>
>>
Hello,
I just started using ceph recently and was trying to get the RADOS Gateway
working in order to use the Swift-compatible API. I followed the install
instructions found here
(http://ceph.com/docs/master/start/quick-ceph-deploy/) and got to a
point where "ceph health" gives me
HEALTH_OK. This i
Daniel,
It looks like I need to update that portion of the docs too, as it
links back to the 5-minute quick start. Once you are up and running
with "HEALTH_OK" on either the 5-minute Quick Start or Quick Ceph
Deploy, your storage cluster is running fine. The remaining issues
would likely be with a
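For reference, a minimal sketch of exercising the Swift-compatible API once radosgw is up, assuming a radosgw user with a swift subuser and key has already been created; the hostname, user, and key here are hypothetical:

$ swift -A http://gw.example.com/auth/1.0 -U testuser:swift -K 'SECRET' stat
$ swift -A http://gw.example.com/auth/1.0 -U testuser:swift -K 'SECRET' post mybucket
$ swift -A http://gw.example.com/auth/1.0 -U testuser:swift -K 'SECRET' list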
Hi Sage,
Any news about this issue? We keep reproducing it again and
again.
We pulled the disk out and carefully tested the raw disk performance; it's
pretty normal.
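For context, a minimal sketch of the kind of raw-disk check being described, run outside of Ceph (the device name is hypothetical, and the write test destroys data, so use a scratch disk only):

$ hdparm -t /dev/sdb                                         # raw sequential read
$ dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct  # read, bypassing cache
$ dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct  # DESTRUCTIVE write test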
Dear all,
now I am facing an issue with the ceph block device: performance degradation
ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
ceph status
health HEALTH_OK
monmap e1: 2 mons at {a=49.213.67.204:6789/0,b=49.213.67.203:6789/0}, election epoch 20, quorum 0,1 a,b
osdmap e53: 2 osds: 2
What's the benchmark?
-Greg
On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:
> Dear all,
>
> now I am facing an issue with the ceph block device: performance degradation
>
> ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
>
> ceph status
>
Hi Greg,
It's the write benchmark.
Regards,
Khanh
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Thursday, May 23, 2013 10:56 AM
To: Khanh. Nguyen Dang Quoc
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] performance degradation issue
What's the benchmark?
-Greg
On Wednesday, Ma
"rados bench write", you mean? Or something else?
Have you checked the disk performance of each OSD outside of Ceph? In moving
from one to two OSDs your performance isn't actually going to go up because
you're replicating all the data. It ought to stay flat rather than
dropping, but my guess is you
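For reference, a minimal sketch of the rados bench runs being referred to (pool name and duration are illustrative; depending on the version, the write run may need --no-cleanup for the read run to have objects to read):

$ rados bench -p data 30 write    # 30-second write benchmark against pool "data"
$ rados bench -p data 30 seq      # sequential read of the objects just written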
I ran the test following the steps below:
+ create an image with size 100GB in pool data
+ after that, map that image on one server
+ then mkfs.xfs /dev/rbd0 -> mount /dev/rbd0 /mnt
+ run the write benchmark on that mount point with the dd tool:
dd if=/dev/zero of=/mnt/good2 bs=1M count=1 oflag=
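A minimal sketch of those steps end to end, with hypothetical names filled in; the dd line above is truncated, so oflag=direct and a larger count are assumptions for a meaningful direct-I/O test:

$ rbd create bench --pool data --size 102400   # 100 GB image in pool "data"
$ rbd map bench --pool data                    # appears as /dev/rbd0
$ mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt
$ dd if=/dev/zero of=/mnt/good2 bs=1M count=1024 oflag=direct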
Yeah, you need to check your disks individually and see how they compare.
Sounds like the second one is slower. And you're also getting a bit slower
going to 2x replication.
-Greg
On Wednesday, May 22, 2013, Khanh. Nguyen Dang Quoc wrote:
> I ran the test following the steps below:
>
> + c
Yes, sure, I did check each OSD and compared them.
Now the replicated size is 2. But when I change the size to 1, it doesn't seem
to improve :(
Can you help me check my config file again? I don't know what's wrong
in there.
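For reference, a minimal sketch of how per-pool replication is usually changed and verified (pool name "data" is an assumption):

$ ceph osd pool set data size 1     # replica count for the pool
$ ceph osd dump | grep 'rep size'   # confirm the setting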
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Thursday, May
I attempted to upgrade my bobtail cluster to cuttlefish tonight and I
believe I'm running into some mon-related issues. I did the original
install manually instead of with mkcephfs or ceph-deploy, so I think
that might have something to do with this error:
root@a1:~# ceph-mon -d -c /etc/ceph/ceph.conf
2013
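One hedged aside: ceph-mon normally wants the monitor's id on the command line, so a foreground debug run typically looks like this (monitor id "a" is an assumption based on the hostname above):

root@a1:~# ceph-mon -i a -d -c /etc/ceph/ceph.conf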