Hi,
I tested in a VM with fio; here is the config:
[global]
direct=1
ioengine=libaio
iodepth=1
[sequence read 4K]
rw=read
bs=4K
size=1024m
directory=/mnt
filename=test
sequence read 4K: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio,
iodepth=1
fio-2.1.3
Starting 1 process
sequence read 4K: La
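(For reference, a minimal sketch of launching the same job, assuming the config
above is saved as seqread-4k.fio - the file name is only illustrative:

$ fio seqread-4k.fio
# or the equivalent job expressed directly on the command line:
$ fio --name=seqread4k --direct=1 --ioengine=libaio --iodepth=1 \
      --rw=read --bs=4k --size=1024m --directory=/mnt --filename=test

Either form should produce the same sequential 4K read workload against /mnt/test.)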
Hi,
During deep-scrub Ceph discovered some inconsistency between OSDs on my
cluster (size 3, min size 2). I have found a broken object and calculated
the md5sum of it on each OSD (osd.195 is the acting_primary):
osd.195 - md5sum_
osd.40 - md5sum_
osd.314 - md5sum_
I ran ceph pg repair and Cep
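(For anyone hitting the same thing, a hedged sketch of how the per-OSD checksums
can be gathered on a FileStore cluster of that era - <pgid> and the object name
are placeholders for the values reported by the scrub error, and the paths are
only illustrative:

ceph health detail | grep inconsistent    # find the PG reporting the error
# on each OSD's host (195, 40, 314), locate and checksum the object file:
find /var/lib/ceph/osd/ceph-195/current/<pgid>_head/ -name '<object>*' -exec md5sum {} \;
ceph pg repair <pgid>

On releases of that era, repair generally copies the primary's copy over the
replicas, so comparing the checksums first, as done above, is worthwhile.)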
Hi all:
As was asked a few weeks ago: what does the ceph community use to stay up to
date on new features and bug fixes?
Thanks!
Best,
---
JuanFra Rodriguez Cardoso
es.linkedin.com/in/jfrcardoso/
Hi,
Can anyone help me resolve the following error? Thanks a lot.
rest-bench --api-host=172.20.10.106 --bucket=test
--access-key=BXXX --secret=z
--protocol=http --uri_style=path --concurrent-ios=3 --block-size=4096 write
host=172.20.10.106
ERROR: failed to c
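(The error is cut off here, but a hedged sketch of checks that often narrow this
kind of rest-bench failure down - the uid below is illustrative:

radosgw-admin user info --uid=benchmark   # do the access/secret keys actually exist?
curl -i http://172.20.10.106/             # does the gateway answer on that host at all?

If the user or the bucket cannot be reached, rest-bench typically fails before
writing any objects.)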
We are testing a Giant cluster - on virtual machines for now. We have seen
the same problem two nights in a row: one of the OSDs gets stuck in
uninterruptible sleep. The only way to get rid of it is apparently to
reboot - kill -9, -11 and -15 have all been tried.
The monitor apparently believe
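(A hedged sketch of what one might capture from such a host before rebooting,
since a D-state task usually points at a stuck kernel or I/O path - <osd-pid> is
a placeholder:

ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'   # list tasks in uninterruptible sleep
cat /proc/<osd-pid>/stack                       # kernel stack of the hung thread (as root)
dmesg | grep -i "blocked for more than"         # hung-task watchdog messages

The wchan and stack output is often enough to tell whether the hang is in the
filesystem, the block layer, or the virtual disk driver.)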
Hi everyone,
been trying to get to the bottom of this for a few days; thought I'd
take this to the list to see if someone had insight to share.
Situation: Ceph 0.87 (Giant) cluster with approx. 250 OSDs. One set of
OSD nodes with just spinners put into one CRUSH ruleset assigned to a
"spinner" po
On 11/21/2014 08:14 AM, Florian Haas wrote:
Hi everyone,
been trying to get to the bottom of this for a few days; thought I'd
take this to the list to see if someone had insight to share.
Situation: Ceph 0.87 (Giant) cluster with approx. 250 OSDs. One set of
OSD nodes with just spinners put int
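(For context, a hedged sketch of how a spinner-only ruleset is typically tied to
a pool on Giant - the rule, root and pool names are illustrative, not taken from
this cluster:

ceph osd crush rule create-simple spinner-rule spinner-root host
ceph osd pool set <pool> crush_ruleset <rule-id>

The rule id comes from "ceph osd crush rule dump".)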
Hello all,
I followed the setup steps provided here:
http://karan-mj.blogspot.com/2014/09/ceph-calamari-survival-guide.html
I was able to build and install everything correctly as far as I can
tell...however I am still not able to get the server to see the cluster.
I am getting the following
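(The error is cut off above, but since calamari discovers the cluster through its
salt minions, a hedged sketch of the usual first checks - nothing here is
specific to this setup:

salt-key -L                 # on the calamari host: are the ceph nodes' keys listed and accepted?
salt '*' test.ping          # do the minions respond at all?
service salt-minion status  # on each ceph node: is the minion running?

If the minion keys are missing or unaccepted, the cluster never shows up in the UI.)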
Thanks Michael. That was a good idea.
I did:
1. sudo service ceph stop mds
2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs for
metadata and data)
3. ceph health (It was healthy now!!!)
4. sudo service ceph start mds.$(hostname -s)
And I am back in business.
Thanks ag
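(A quick sketch of confirming the MDS actually came back after step 4 - just the
standard status commands, nothing specific to this cluster:

ceph mds stat   # should show the daemon up:active
ceph -s         # overall cluster health)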
I had to run "salt-call state.highstate" on my ceph nodes.
Also, if you’re running giant you’ll have to make a small change to get your
disk stats to show up correctly.
/opt/calamari/venv/lib/python2.6/site-packages/calamari_rest_api-0.1-py2.6.egg/calamari_rest/views/v1.py
$ diff v1.py v1.py.o
I have started over from scratch a few times myself ;-)
Michael Kuriger
mk7...@yp.com
818-649-7235
MikeKuriger (IM)
From: JIten Shah <jshah2...@me.com>
Date: Friday, November 21, 2014 at 9:44 AM
To: Michael Kuriger <mk7...@yp.com>
Cc: Craig Lewis <cle...@centraldesktop.com>,
On Fri, Nov 21, 2014 at 2:35 AM, Paweł Sadowski wrote:
> Hi,
>
> During deep-scrub Ceph discovered some inconsistency between OSDs on my
> cluster (size 3, min size 2). I have found a broken object and calculated
> md5sum of it on each OSD (osd.195 is acting_primary):
> osd.195 - md5sum_
> osd.
On Fri, Nov 21, 2014 at 4:56 AM, Jon Kåre Hellan
wrote:
> We are testing a Giant cluster - on virtual machines for now. We have seen
> the same
> problem two nights in a row: One of the OSDs gets stuck in uninterruptible
> sleep.
> The only way to get rid of it is apparently to reboot - kill -9, -
On 21/11/14 16:05, Mark Kirkwood wrote:
On 21/11/14 15:52, Mark Kirkwood wrote:
On 21/11/14 14:49, Mark Kirkwood wrote:
The only things that look odd in the destination zone logs are 383
requests getting 404 rather than 200:
$ grep "http_status=404" ceph-client.radosgw.us-west-1.log
...
2014-
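(A small sketch for quantifying that ratio from the same log, using the log file
name from the grep above:

grep -c "http_status=404" ceph-client.radosgw.us-west-1.log
grep -c "http_status=200" ceph-client.radosgw.us-west-1.log

Comparing the two counts makes it easier to see whether the 404s track a specific
batch of requests.)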
On 21.11.2014 at 20:12, Gregory Farnum wrote:
> On Fri, Nov 21, 2014 at 2:35 AM, Paweł Sadowski wrote:
>> Hi,
>>
>> During deep-scrub Ceph discovered some inconsistency between OSDs on my
>> cluster (size 3, min size 2). I have found a broken object and calculated
>> md5sum of it on each OSD (osd.
Michael,
Thanks for the info.
We are running ceph version 0.80.7 so I don't think the 2nd part applies
here.
However when I run the salt command on the ceph nodes it fails:
root@hqceph1:~# salt-call state.highstate
[INFO] Loading fresh modules for state activity
local:
--
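(A hedged sketch of how to get more detail out of that failure - standard salt
troubleshooting, nothing calamari-specific:

salt-call -l debug state.highstate   # rerun with debug logging on the ceph node
grep '^master' /etc/salt/minion      # confirm the minion points at the calamari host

The debug output usually names the state or module that is failing.)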
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
wrote:
> On 21/11/14 14:49, Mark Kirkwood wrote:
>>
>>
>> The only things that look odd in the destination zone logs are 383
>> requests getting 404 rather than 200:
>>
>> $ grep "http_status=404" ceph-client.radosgw.us-west-1.log
>> ...
>> 2014-11-21
I am trying to set up 3 MDS servers (one on each MON), but after I am done
setting up the first one, it gives me the error below when I try to start it on
the other ones. I understand that only one MDS is active at a time, but I thought
you could have multiple of them up, in case the first one dies? Or
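(For reference, a hedged sketch of how extra MDS daemons are usually declared so
that they register as standbys - the section names and hosts are illustrative,
not taken from this setup:

[mds.a]
    host = mon1
[mds.b]
    host = mon2
[mds.c]
    host = mon3

With each ceph-mds daemon started and keyed, "ceph mds stat" should report one
up:active and the rest up:standby.)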
This got taken care of after I deleted the pools for metadata and data and
started it again.
I did:
1. sudo service ceph stop mds
2. ceph mds newfs 1 0 --yes-i-really-mean-it (where 1 and 0 are the pool IDs for
metadata and data)
3. ceph health (It was healthy now!!!)
4. sudo service ceph start