On 21-10-15 15:30, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
Hi,
In the "newstore direction" thread on ceph-devel I wrote that I'm using
bcach
On 10/27/2015 06:21 PM, Ken Dreyer wrote:
> Thanks, I've deleted it from the download.ceph.com web server.
Thanks a lot!
My mirror is up-to-date now.
Björn Lässig
Sorry for raising this topic from the dead, but I'm having the same
issues with NFS-Ganesha with the wrong user/group information.
Do you maybe have a working ganesha.conf? I'm assuming I might have
misconfigured something in this file. It's also nice to ha
Hi, experts and supporters,
I am new to Ceph, so the question may look simple or stupid sometimes.
I want to create a Ceph cluster with the Reed-Solomon RAID 6 algorithm;
Jerasure has the plugin "reed_sol_r6_op",
but it seems I can't bind the pool to the OSDs.
The steps:
[co
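For reference, a minimal sketch of the usual steps; the profile and pool names are illustrative, k=4/m=2 is an assumption (reed_sol_r6_op expects m=2), and 128 placement groups is just an example:
$ ceph osd erasure-code-profile set r6profile \
      k=4 m=2 plugin=jerasure technique=reed_sol_r6_op \
      ruleset-failure-domain=osd
$ ceph osd erasure-code-profile get r6profile
$ ceph osd pool create ecpool 128 128 erasure r6profile
$ ceph osd dump | grep ecpool     # confirm the pool references the erasure-code profile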
Hi,
During a repo sync, I got:
Package ceph-debuginfo-0.94.5-0.el7.centos.x86_64.rpm is not signed
Indeed:
# rpm -K
http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-debuginfo-0.94.5-0.el7.centos.x86_64.rpm
http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-debuginfo-0.94.5-0.el7.centos.x
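If you mirror the repository, a rough way to sweep a local copy for unsigned packages (the mirror path is hypothetical, and the exact wording of the Signature field can differ between rpm versions):
$ for p in /srv/mirror/rpm-hammer/el7/x86_64/*.rpm; do
>     rpm -qpi "$p" 2>/dev/null | grep -q '^Signature.*(none)' && echo "NOT SIGNED: $p"
> done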
Hi, all
As I understand it, the command "ceph daemon osd.x perf dump objecters" should
output the perf data of osdc (librados). But when I use this command, all
those values are zero except map_epoch and map_inc. Why is that? Below is
the result (fio is running against the cluster with the rbd ioengine):
$ sudo ceph d
Hi all,
I'm trying to get the real disk usage of a Cinder volume by converting these
bash commands to Python:
http://cephnotes.ksperis.com/blog/2013/08/28/rbd-image-real-size
I wrote a small test function which has already worked in many cases, but it
stops with a core dump while trying to calculate t
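For comparison, the bash version from that post boils down to summing the extent lengths that "rbd diff" reports (the image name here is illustrative):
$ rbd diff rbd/myvolume | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'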
On the RBD performance issue, you may want to look at:
http://tracker.ceph.com/issues/9192
Eric
On Tue, Oct 27, 2015 at 8:59 PM, FaHui Lin wrote:
> Dear Ceph experts,
>
> I found something strange about the performance of my Ceph cluster: reads
> are much slower than writes.
>
> I have 3 machin
Hi All,
I am testing a 5 node, 4+1 EC cluster using some simple python code:
https://gist.github.com/brynmathias/03c60569499dbf3f6be4
When I run this from an external machine, one of my 5 nodes experiences very
high CPU usage (300-400%) per OSD
while the others show very low usage.
See here: http://i
Hi Dennis,
We're using NFS Ganesha here as well. I can send you my configuration which is
working but we squash users and groups down to a particular uid/gid, so it may
not be super helpful for you.
I think files not being immediately visible is working as intended, due to
directory caching. I
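In case it is useful as a starting point, here is a rough skeleton of the kind of export block meant above; the uid/gid, paths and export id are placeholders, so treat it as a sketch rather than a known-good configuration:
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = All_Squash;     # map every user/group to the anonymous uid/gid
    Anonymous_Uid = 4000;    # placeholder uid
    Anonymous_Gid = 4000;    # placeholder gid
    FSAL {
        Name = CEPH;
    }
}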
Could you try with this kernel and bump the readahead on the RBD device up
to at least 32MB?
http://gitbuilder.ceph.com/kernel-deb-precise-x86_64-basic/ref/ra-bring-back/
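For example, to set a 32 MB readahead on an rbd block device (rbd0 is just an example device name):
$ echo 32768 | sudo tee /sys/block/rbd0/queue/read_ahead_kb
$ sudo blockdev --setra 65536 /dev/rbd0    # equivalent, expressed in 512-byte sectors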
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eric Eastman
> Sen
Hi,
On 10/26/2015 01:43 PM, Yan, Zheng wrote:
On Thu, Oct 22, 2015 at 2:55 PM, Burkhard Linke
wrote:
Hi,
On 10/22/2015 02:54 AM, Gregory Farnum wrote:
On Sun, Oct 18, 2015 at 8:27 PM, Yan, Zheng wrote:
On Sat, Oct 17, 2015 at 1:42 AM, Burkhard Linke
wrote:
Hi,
I've noticed that CephFS
Hi,
On 10/28/2015 03:08 PM, Dennis Kramer (DT) wrote:
Sorry for raising this topic from the dead, but I'm having the same
issues with NFS-Ganesha with the wrong user/group information.
Do you maybe have a working ganesha.conf? I'm assuming I might
mi
> Thanks for your reply. Why not rebuild the object map when the object-map
> feature is enabled?
>
> Cheers,
> xinxin
>
My initial motivation was to avoid a potentially lengthy rebuild when enabling
the feature. Perhaps that option could warn you to rebuild the object map
after it's been enabled.
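For what it's worth, in rbd versions that ship the rebuild command the sequence would look roughly like this (pool/image names are placeholders; object-map also depends on the exclusive-lock feature being enabled):
$ rbd feature enable rbd/myimage object-map
$ rbd object-map rebuild rbd/myimage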
[ Removed ceph-devel ]
On Wednesday, October 28, 2015, Libin Wu wrote:
> Hi, all
>
> As I understand it, the command "ceph daemon osd.x perf dump objecters" should
> output the perf data of osdc (librados). But when I use this command,
> all those values are zero except map_epoch and map_inc. Foll
Objecter is the client side, but you're dumping stats on the osd. The
only time it is used as a client there is with cache tiering.
sage
On Wed, 28 Oct 2015, Libin Wu wrote:
> Hi, all
>
> As I understand it, the command "ceph daemon osd.x perf dump objecters" should
> output the perf data of osdc (l
Sure, thanks!
But I think a client like QEMU using librbd will also generate these perf
statistics; is there any way
we could dump the perf statistics for other clients such as QEMU?
Sorry if there is some documentation on the internet and I just didn't find it by searching.
Thanks!
--
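One way to get at the client-side counters for a QEMU/librbd client is to give the client an admin socket in ceph.conf and query it with perf dump; the socket and log paths below are only examples, and the output should contain an "objecter" section with the osdc counters:
[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/qemu/qemu-guest-$pid.log

$ sudo ceph --admin-daemon /var/run/ceph/ceph-client.volumes.12345.140123456.asok perf dump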
Hello,
$ ceph osd stat
osdmap e18: 2 osds: 2 up, 2 in
This is what it shows.
Does it mean I need to add up to 3 OSDs? I just used the default setup.
Thanks.
On 2015/10/28 (Wednesday) 19:53, Gurjar, Unmesh wrote:
Are all the OSDs being reported as 'up' and 'in'? This can be checked by
executing 'c
On 29 October 2015 at 10:29, Wah Peng wrote:
> $ ceph osd stat
> osdmap e18: 2 osds: 2 up, 2 in
>
> This is what it shows.
> Does it mean I need to add up to 3 OSDs? I just used the default setup.
>
If you went with the defaults then your pool size will be 3, meaning it
needs 3 copies of th
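An alternative on a small test setup is to check and lower the replication level of the pool, e.g. for the default rbd pool:
$ ceph osd pool get rbd size
$ ceph osd pool set rbd size 2        # only sensible on a throw-away test cluster
$ ceph osd pool set rbd min_size 1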
Hello,
Just did it, but still no good health. Can you help? Thanks.
ceph@ceph:~/my-cluster$ ceph osd stat
osdmap e24: 3 osds: 3 up, 3 in
ceph@ceph:~/my-cluster$ ceph health
HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive;
192 pgs stuck unclean
On 2015/10/29 (Thursday) 8:
Please paste 'ceph osd tree'.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 6:54 PM, "Wah Peng" wrote:
> Hello,
>
> Just did it, but still no good health. Can you help? Thanks.
>
> ceph@ceph:~/my-cluster$ ceph osd stat
> osdmap e24: 3 osds: 3 up, 3 in
>
$ ceph osd tree
# id    weight   type name       up/down  reweight
-1      0.24     root default
-2      0.24         host ceph2
0       0.07999          osd.0   up       1
1       0.07999          osd.1   up       1
2       0.07999          osd.2   up       1
On 2015/10/29 (Thursday)
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng" wrote:
> $ ceph osd tree
> # id    weight   type name       up/down  reweight
> -1      0.24     root default
> -2      0.24
Wow, this sounds hard to me. Can you show the details?
Thanks a lot.
On 2015/10/29 (Thursday) 9:01, Robert LeBlanc wrote:
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng" mailto:wah_p
Try " osd crush chooseleaf type = 0" in /etc/ceph/.conf
Regards,
CY.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wah
Peng
Sent: October 29, 2015 9:14
To: Robert LeBlanc
Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@lists.ceph.com
Subject: R
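For context, that option belongs in the [global] section of ceph.conf, and as far as I know it only affects the default CRUSH map generated when the cluster is first created; on an existing cluster the CRUSH map still has to be edited (see further down in the thread):
[global]
    # place replicas on distinct OSDs rather than distinct hosts
    osd crush chooseleaf type = 0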
Is there a way to benchmark individual OSDs?
--
Lindsay
On 29 October 2015 at 11:39, Lindsay Mathieson
wrote:
> Is there a way to benchmark individual OSDs?
nb - Non-destructive :)
--
Lindsay
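A common non-destructive option is the built-in OSD write benchmark plus the per-OSD latency view:
$ ceph tell osd.0 bench    # writes roughly 1 GB of temporary objects inside osd.0 and reports the rate
$ ceph osd perf            # commit/apply latency for every OSD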
Is there an existing ceph sub-command instead of changing the config file? :)
On 2015/10/29 (Thursday) 9:24, Li, Chengyuan wrote:
Try " osd crush chooseleaf type = 0" in /etc/ceph/.conf
Regards,
CY.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of W
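There is a command-line route: create a rule whose failure domain is osd and point the pool at it (rule and pool names here are examples; look up the actual ruleset id with 'ceph osd crush rule dump'):
$ ceph osd crush rule create-simple replicate-by-osd default osd
$ ceph osd crush rule dump                 # note the ruleset number of the new rule
$ ceph osd pool set rbd crush_ruleset 1    # 1 is an example id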
I still see rsync errors due to permissions on the remote side:
rsync: send_files failed to open
"/rpm-dumpling/rhel6/x86_64/ceph-debuginfo-0.67.8-0.el6.x86_64.rpm.iVHKKi" (in
ceph): Permission denied (13)
rsync: send_files failed to open
"/rpm-firefly/rhel6/x86_64/ceph-debuginfo-0.80.1-0.el6
A Google search should have led you the rest of the way.
Follow this [1] and, in the rule section, change host to osd on the step
chooseleaf line. You won't need to change the configuration file this way;
the change is saved in the CRUSH map.
[1]
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-c
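The edit cycle from that page, in short (file names are arbitrary):
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: in the rule, change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new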
Hello,
this shows the content of the crush map file. What should I change to
select osd instead of host? Thanks in advance.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw
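Further down in the decompiled map there should be a rules section, typically looking like the default below; the line to change is the "step chooseleaf" one:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # change "host" to "osd"
        step emit
}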
I have had this issue before, and I don't think I have resolved it. I
have been using the RGW admin API to set quotas based on the docs [0].
But I can't seem to get it to cough up and show me the quota
now. Any ideas? I get a 200 back but no body. I have tested this on
Firefly (0.80.5-9
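As a cross-check alongside the admin API, the same quota can be set and read back with radosgw-admin (uid and limits are placeholders):
$ radosgw-admin quota set --quota-scope=user --uid=johndoe \
      --max-objects=10000 --max-size-kb=1048576
$ radosgw-admin quota enable --quota-scope=user --uid=johndoe
$ radosgw-admin user info --uid=johndoe    # the quota shows up under "user_quota"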
On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
wrote:
> Hi,
>
>
> On 10/26/2015 01:43 PM, Yan, Zheng wrote:
>>
>> On Thu, Oct 22, 2015 at 2:55 PM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>>
>>> On 10/22/2015 02:54 AM, Gregory Farnum wrote:
On Sun, Oct 18, 2015 at 8:27 PM, Yan, Zheng
On Wed, Oct 28, 2015 at 8:38 PM, Yan, Zheng wrote:
> On Thu, Oct 29, 2015 at 1:10 AM, Burkhard Linke
>> I tried to dig into the ceph-fuse code, but I was unable to find the
>> fragment that is responsible for flushing the data from the page cache.
>>
>
> fuse kernel code invalidates page cache on
I have changed the type in this line from "host" to "osd":
step chooseleaf firstn 0 type osd
Now the health looks fine:
$ ceph health
HEALTH_OK
Thanks for all the help.
On 2015/10/29 (Thursday) 10:35, Wah Peng wrote:
Hello,
this shows the content of crush-map file, what content should I change
fo