[ceph-users] About cephfs with hadoop

2015-09-23 Thread Fulin Sun
Hi all, I am trying to use cephfs as a drop-in replacement for hadoop hdfs, mainly following the configuration steps in the doc here: http://docs.ceph.com/docs/master/cephfs/hadoop/ I am using a 3-node hadoop 2.7.1 cluster. Noting that the official doc recommends using a 1.1.x stable release, I am not sure if…
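
For reference, the doc linked above wires Hadoop to CephFS through properties in core-site.xml, roughly as in this sketch; the monitor address and file paths are placeholders, not values from the original post:

  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>ceph://mon-host:6789/</value>
  </property>
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>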

[ceph-users] Different OSD capacity & what is the weight of item

2015-09-23 Thread wikison
Hi, I have four storage machines to use as storage nodes in a ceph storage cluster. Each of them has a 120 GB HDD and a 1 TB HDD attached. Is it OK to treat those storage devices as the same when writing a ceph.conf? For example, when setting osd pool default pg num, I thought: os…
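
For context, the usual rule of thumb is (number of OSDs x 100) / replica count, rounded to a power of two, and CRUSH weights are normally set in proportion to capacity in TB rather than treating unequal disks as identical. A minimal sketch, assuming 8 OSDs (two per node) and 3x replication; the OSD ids are placeholders:

  # 8 OSDs * 100 / 3 replicas ~= 267 -> round to 256 (or 512)
  $ ceph osd pool create data 256 256
  # weight by capacity: 1 TB -> 1.0, 120 GB -> ~0.12
  $ ceph osd crush reweight osd.0 0.12
  $ ceph osd crush reweight osd.1 1.0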

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Paul Mansfield
On 22/09/15 19:48, Jason Dillaman wrote:
>> On 22/09/15 17:46, Jason Dillaman wrote:
>>> As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph
>>> project only due to the fact that EPEL 7 doesn't provide the required
>>> packages [1].
>>
>> interesting. so basically our program m…

Re: [ceph-users] IPv6 connectivity after website changes

2015-09-23 Thread Wido den Hollander
On 23-09-15 03:49, Dan Mick wrote:
> On 09/22/2015 05:22 AM, Sage Weil wrote:
>> On Tue, 22 Sep 2015, Wido den Hollander wrote:
>>> Hi,
>>>
>>> After the recent changes in the Ceph website the IPv6 connectivity got lost.
>>>
>>> www.ceph.com
>>> docs.ceph.com
>>> download.ceph.com
>>> git.ceph.co…
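
A quick way to check which of those hosts still publish AAAA records (a generic diagnostic sketch, not commands from the thread):

  $ for h in www.ceph.com docs.ceph.com download.ceph.com git.ceph.com; do echo $h: $(dig +short AAAA $h); done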

Re: [ceph-users] C example of using libradosstriper?

2015-09-23 Thread Paul Mansfield
Hi, thanks very much for posting that, much appreciated. We were able to build and test it on Red Hat EL7.

On 17/09/15 04:01, 张冬卯 wrote:
> Hi,
>
> src/tools/rados.c has some striper rados snippets.
>
> and I have this little project using striper rados.
> see: https://github.com/thesues/stri…
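
For anyone else building a libradosstriper example on EL7, the link step looks roughly like this (a sketch; the source file name is a placeholder):

  $ gcc -o striper_example striper_example.c -lradosstriper -lrados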

Re: [ceph-users] Important security notice regarding release signing key

2015-09-23 Thread wangsongbo
Hi Ken, Just now I ran teuthology-suites in our testing, and it failed for lack of these packages, such as qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64, qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph, etc. The change "rm ceph-extras repository config#137" only removed the repository, but did not solve…

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Paul Mansfield
On 22/09/15 19:48, Jason Dillaman wrote:
> It's not the best answer, but it is the reason why it is currently
> disabled on RHEL 7. Best bet for finding a long-term solution is
> still probably attaching with gdb and catching the abort function
> call. Once the offending probe can be found, we ca…
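
A generic sketch of that debugging approach (an assumed workflow, not commands from the thread): attach gdb to the affected process, break on abort, and inspect the stack to locate the offending probe.

  $ gdb -p <pid>
  (gdb) break abort
  (gdb) continue
  # ... reproduce the crash, then:
  (gdb) backtrace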

[ceph-users] failed to open http://apt-mirror.front.sepia.ceph.com

2015-09-23 Thread wangsongbo
Hi Loic and other Cephers, I am running teuthology-suites in our testing; because the connection to "apt-mirror.front.sepia.ceph.com" timed out, "ceph-cm-ansible" failed. From a web browser, I got a response like this: "502 Bad Gateway". "64.90.32.37 apt-mirror.front.sepia.ceph.com" has…
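
A quick way to reproduce such a check from the command line (a generic diagnostic, not a command from the original mail):

  $ curl -sS -I --connect-timeout 30 http://apt-mirror.front.sepia.ceph.com/
  # a failing reverse proxy typically answers: HTTP/1.1 502 Bad Gateway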

[ceph-users] ceph.com IPv6 down

2015-09-23 Thread Olivier Bonvalet
Hi, for several hours now http://ceph.com/ hasn't replied anymore over IPv6. It pings, and we can open a TCP socket, but nothing more:

~$ nc -w30 -v -6 ceph.com 80
Connection to ceph.com 80 port [tcp/http] succeeded!
GET / HTTP/1.0
Host: ceph.com

But a HEAD query works:

~$ n…

Re: [ceph-users] ceph.com IPv6 down

2015-09-23 Thread Wido den Hollander
On 23-09-15 13:38, Olivier Bonvalet wrote:
> Hi,
>
> since several hours http://ceph.com/ doesn't reply anymore in IPv6.
> It pings, and we can open TCP socket, but nothing more :
>
> ~$ nc -w30 -v -6 ceph.com 80
> Connection to ceph.com 80 port [tcp/http] succeeded!
> GET / HTTP…

[ceph-users] Antw: Hammer reduce recovery impact

2015-09-23 Thread Steffen Weißgerber
Based on the book 'Learning Ceph' (https://www.packtpub.com/application-development/learning-ceph), chapter on performance tuning, we swapped the values for osd_recovery_op_priority and osd_client_op_priority to 60 and 40. "... osd recovery op priority: This is the priority set for recovery operati…

Re: [ceph-users] Antw: Hammer reduce recovery impact

2015-09-23 Thread Dan van der Ster
On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber wrote:
> "... osd recovery op priority: This is the priority set for recovery operation.
> Lower the number, higher the recovery priority.
> Higher recovery priority might cause performance degradation until recovery completes."

So w…
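
For anyone experimenting with these knobs, they can also be injected into running OSDs instead of editing ceph.conf; a sketch using the upstream defaults of that era (63 for client ops, 10 for recovery ops; note the config reference documents higher values as higher priority):

  $ ceph tell osd.* injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 10'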

Re: [ceph-users] ceph.com IPv6 down

2015-09-23 Thread Olivier Bonvalet
On Wednesday 23 September 2015 at 13:41 +0200, Wido den Hollander wrote:
> Hmm, that is weird. It works for me here from the Netherlands via
> IPv6:

You're right, I checked from other providers and it works. So, a problem between Free (France) and Dreamhost?

[ceph-users] Antw: Re: Antw: Hammer reduce recovery impact

2015-09-23 Thread Steffen Weißgerber
>>> Dan van der Ster wrote on Wednesday, 23 September 2015 at 14:04:
> On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber wrote:
>> "... osd recovery op priority: This is the priority set for recovery operation.
>> Lower the number, higher the recovery priority.
>> Higher recovery pri…

Re: [ceph-users] ceph-mon always election when change crushmap in firefly

2015-09-23 Thread Sage Weil
On Wed, 23 Sep 2015, Alexander Yang wrote:
> hello,
> We use Ceph+Openstack in our private cloud. In our cluster, we have
> 5 mons and 800 osds; the capacity is about 1 PB. We run about 700 vms and
> 1100 volumes.
> Recently, we increased our pg_num; now the cluster has about 7…

[ceph-users] cephfs filesystem size

2015-09-23 Thread Dan Nica
Hi, Can I set the size of a cephfs? When I mount the fs on the clients, I see that the partition size is the whole cluster storage... Thanks, Dan

Re: [ceph-users] cephfs filesystem size

2015-09-23 Thread John Spray
Yes, you can set a quota on any directory, although it's only supported with the userspace client (i.e. ceph-fuse): http://docs.ceph.com/docs/master/cephfs/quota/ John

On Wed, Sep 23, 2015 at 1:50 PM, Dan Nica wrote:
> Hi,
>
> Can I set the size on cephfs ? when I mount the fs on the clients…
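
Per the quota doc linked above, quotas are set through virtual extended attributes on a directory; a minimal sketch, with the mount point and size as placeholders:

  # limit a subtree to 100 GB
  $ setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/somedir
  # read the limit back
  $ getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir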

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Jason Dillaman
It looks like the issue you are experiencing was fixed in the Infernalis/master branches [1]. I've opened a new tracker ticket to backport the fix to Hammer [2]. -- Jason Dillaman

[1] https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e
[2] http://tracker.ceph.com…

Re: [ceph-users] ceph-mon always election when change crushmap in firefly

2015-09-23 Thread Michael Kidd
Hello Alexander, One other point on your email: you indicate you want each OSD to have ~100 PGs, but depending on your pool size, it seems you may have forgotten the additional PGs associated with replication itself. Assuming 3x replication in your environment: 70,000 * 3…
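
Completing that arithmetic with the figures quoted earlier in the thread (800 OSDs), as a worked example rather than text from the original mail:

  70,000 PGs * 3 replicas = 210,000 PG copies
  210,000 / 800 OSDs ~= 263 PG copies per OSD, well above the ~100 target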

Re: [ceph-users] Potential OSD deadlock?

2015-09-23 Thread Robert LeBlanc
OK, here is the update on the saga... I traced some more of the blocked I/Os and it seems that communication between two hosts was worse than between the others. I did a two-way ping flood between the two hosts using the max packet size (1500). After 1.5M packets,…
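
A sketch of that kind of test (hostnames are placeholders; -s 1472 plus 28 bytes of ICMP/IP headers makes a 1500-byte packet, and flood ping needs root):

  $ sudo ping -f -s 1472 other-host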

Re: [ceph-users] Potential OSD deadlock?

2015-09-23 Thread Mark Nelson
FWIW, we've got some 40GbE Intel cards in the community performance cluster on a Mellanox 40GbE switch that appear (knock on wood) to be running fine with 3.10.0-229.7.2.el7.x86_64. We did get feedback from Intel that older drivers might cause problems, though. Here's ifconfig from one of the…
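
To check which NIC driver and firmware a node is actually running (a generic diagnostic, not from the original mail):

  $ ethtool -i eth0
  # prints driver, version and firmware-version for the interface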

Re: [ceph-users] Potential OSD deadlock?

2015-09-23 Thread Robert LeBlanc
We were able to get only ~17Gb out of the XL710 (heavily tweaked) until we went to the 4.x kernel, where we got ~36Gb (no tweaking). It seems that there were some major reworks of the network handling in the kernel to efficiently handle that network r…
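
Throughput figures like these are typically measured with something like iperf; a sketch with placeholder hosts (iperf3's -P runs parallel streams, which 40GbE links generally need in order to saturate):

  # on the receiver
  $ iperf3 -s
  # on the sender
  $ iperf3 -c receiver-host -P 4 -t 30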

[ceph-users] rgw cache lru size

2015-09-23 Thread Ben Hines
We have a ton of memory on our RGW servers, 96GB. Can someone explain how the rgw lru cache functions? Is it worth bumping 'rgw cache lru size' to a huge number? Our gateway seems to be using only about 1G of memory with the default setting. Also, we are currently still using apache/fastcgi due to t…
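
For reference, the knob lives in the gateway's section of ceph.conf; a sketch under the assumption that the default is 10000 cached entries (verify against your build, e.g. via the admin socket's config show):

  [client.radosgw.gateway]
  # entry count, not bytes; default assumed to be 10000
  rgw cache lru size = 100000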

[ceph-users] rbd map failing for image with exclusive-lock feature

2015-09-23 Thread Allen Liao
Hi all, I'm unable to map a block device for an image that was created with the exclusive-lock feature:

$ sudo rbd create foo --size 4096 --image-features=4 --image-format=2
$ sudo rbd map foo
rbd: sysfs write failed
rbd: map failed: (6) No such device or address

How do I map the image? I've tried…
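
At the time of this thread the kernel rbd client did not understand the exclusive-lock feature, so a common workaround (a sketch, not the reply from the thread) was to create the image with only the layering feature:

  $ sudo rbd create foo --size 4096 --image-format 2 --image-features 1
  $ sudo rbd map foo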

[ceph-users] Basic object storage question

2015-09-23 Thread Cory Hawkless
Hi all, I have a basic question around how Ceph stores individual objects. Say I have a pool with a replica size of 3 and I upload a 1GB file to this pool. It appears as if this 1GB file gets placed into 3 PGs on 3 OSDs, simple enough? Are individual objects never split up? What if I want to st…

Re: [ceph-users] EU Ceph mirror changes

2015-09-23 Thread Matt Taylor
Apologies for the delay! au.ceph.com has been updated accordingly. Regards, Matthew.

On 22/09/2015 00:03, Wido den Hollander wrote:
Hi, Since the security notice regarding ceph.com the mirroring system broke. This meant that eu.ceph.com didn't serve new packages since the whole download syste…

Re: [ceph-users] Basic object storage question

2015-09-23 Thread Cory Hawkless
Ok, so I have found this: "The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to the Ceph Storage Cluster via librados…
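
To see the quoted behaviour directly, one can write a file with the rados CLI and observe that it lands as a single object (a sketch; pool and object names are placeholders):

  $ rados -p testpool put bigfile ./bigfile.bin
  $ rados -p testpool stat bigfile
  # stat reports the full file size for the single object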

Re: [ceph-users] how to get a mount list?

2015-09-23 Thread 黑铁柱
thanks

2015-09-21 17:29 GMT+08:00 John Spray :
> I'm assuming you mean from the server: you can list the clients of an
> MDS by SSHing to the server where it's running and doing "ceph daemon
> mds.<id> session ls". This has been in releases since Giant iirc.
>
> Cheers,
> John
>
> On Mon, Sep 21, 20…

[ceph-users] mon timeout

2015-09-23 Thread 黑铁柱
I cannot connect to my mon.0: probe timeout!! ceph version: 0.80.7. I have just one mon server ---> 10.123.5.29:6789; why does it look for other mons?

ceph.conf:

[global]
auth service required = cephx
filestore xattr use omap = true
auth client required = cephx
au…

Re: [ceph-users] mon timeout

2015-09-23 Thread 黑铁柱
[root@10_123_5_29 /var/log/ceph]# ceph --admin-daemon /var/run/ceph/ceph-mon.0.asok mon_status
{ "name": "0",
  "rank": -1,
  "state": "probing",
  "election_epoch": 0,
  "quorum": [],
  "outside_quorum": [],
  "extra_probe_peers": [
    "10.123.5.29:6789\/0"],
  "sync_provider": [],
  "monmap"…

Re: [ceph-users] mon timeout

2015-09-23 Thread 黑铁柱
I found that this problem is caused by the mon's name: it must not be just a bare number.

2015-09-24 10:22 GMT+08:00 黑铁柱 :
> [root@10_123_5_29 /var/log/ceph]# ceph --admin-daemon
> /var/run/ceph/ceph-mon.0.asok mon_status
> { "name": "0",
> "rank": -1,
> "state": "probing",
> "election_epoch": 0,
> "quorum": [],
> "outside_quorum": [],
> "extra…
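
In other words, the [mon.0] naming appears to have tripped the probe logic, and a non-numeric name works; a minimal ceph.conf sketch under that assumption (address taken from the post, hostname a placeholder):

  [mon.a]
  host = mon-host
  mon addr = 10.123.5.29:6789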

Re: [ceph-users] Basic object storage question

2015-09-23 Thread Robert LeBlanc
If you use the RADOS gateway, RBD, or CephFS, then you don't need to worry about striping. If you write your own application that uses librados, then you have to worry about it. I understand that there is a radosstriper library that should help with that.
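
As a side note, some rados CLI builds expose that library through a --striper flag (an assumption; check rados --help on your version before relying on it):

  $ rados -p testpool --striper put bigfile ./bigfile.bin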