Hi, all
I am trying to use CephFS as a drop-in replacement for Hadoop HDFS, mainly
following the configuration steps in the doc here:
http://docs.ceph.com/docs/master/cephfs/hadoop/
I am using a 3-node Hadoop 2.7.1 cluster. Noting that the official doc recommends
using the 1.1.x stable release, I am not sure
if usi
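For anyone trying the same setup, a quick smoke test after following that doc might look like this (the jar name and path are assumptions, not taken from the doc):
# put the cephfs-hadoop bindings on the classpath, then list the root;
# with fs.default.name pointed at ceph:// this should show CephFS, not HDFS
export HADOOP_CLASSPATH=/usr/share/java/cephfs-hadoop.jar:$HADOOP_CLASSPATH
hadoop fs -ls /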
Hi,
I have four storage machines to use as storage nodes in a Ceph cluster.
Each of them has a 120 GB HDD and a 1 TB HDD attached. Is it OK to treat
those storage devices as the same when writing ceph.conf?
For example, when setting osd pool default pg num, I thought: os
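As a rough illustration (not from the original mail), a common rule of thumb is total PGs ≈ (number of OSDs * 100) / replica size, rounded up to a power of two; with 8 OSDs (four hosts, two disks each) and 3x replication that would give something like:
# (8 OSDs * 100) / 3 replicas ≈ 267 -> round up to the next power of two
osd pool default pg num = 512
osd pool default pgp num = 512
The difference between the 120 GB and 1 TB disks is normally expressed through the OSDs' CRUSH weights rather than through the PG count.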
On 22/09/15 19:48, Jason Dillaman wrote:
>> On 22/09/15 17:46, Jason Dillaman wrote:
>>> As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph
>>> project only due to the fact that EPEL 7 doesn't provide the required
>>> packages [1].
>>
>> interesting. so basically our program m
On 23-09-15 03:49, Dan Mick wrote:
> On 09/22/2015 05:22 AM, Sage Weil wrote:
>> On Tue, 22 Sep 2015, Wido den Hollander wrote:
>>> Hi,
>>>
>>> After the recent changes in the Ceph website the IPv6 connectivity got lost.
>>>
>>> www.ceph.com
>>> docs.ceph.com
>>> download.ceph.com
>>> git.ceph.co
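For what it's worth, a quick way to check which of those hosts still resolve and answer over IPv6 (a generic illustration, not from the thread):
$ dig +short AAAA www.ceph.com
$ ping6 -c 1 download.ceph.com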
Hi,
thanks very much for posting that, much appreciated.
We were able to build and test it on Red Hat EL7.
On 17/09/15 04:01, 张冬卯 wrote:
>
> Hi,
>
> src/tools/rados.c has some striper rados snippet.
>
> and I have this little project using striper rados.
> see: https://github.com/thesues/stri
Hi Ken,
Just now I ran teuthology-suites in our testing; it failed because these
packages were missing, such as qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64,
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph, etc.
The change "rm ceph-extras repository config" #137 only removed the repository,
but did not solve
On 22/09/15 19:48, Jason Dillaman wrote:
> It's not the best answer, but it is the reason why it is currently
> disabled on RHEL 7. Best bet for finding a long-term solution is
> still probably attaching with gdb and catching the abort function
> call. Once the offending probe can be found, we ca
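For reference, the gdb session could look roughly like this (the daemon name is only an example):
$ gdb -p $(pidof ceph-osd)
(gdb) break abort
(gdb) continue
# ... once it stops in abort(), inspect the call stack to find the probe:
(gdb) bt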
Hi Loic and other Cephers,
I am running teuthology-suites in our testing; because the connection to
"apt-mirror.front.sepia.ceph.com" timed out, "ceph-cm-ansible" failed.
And from a web browser I got a response like this: "502 Bad Gateway".
"64.90.32.37 apt-mirror.front.sepia.ceph.com" has
Hi,
for several hours now, http://ceph.com/ hasn't been replying over IPv6.
It pings, and we can open a TCP socket, but nothing more:
~$ nc -w30 -v -6 ceph.com 80
Connection to ceph.com 80 port [tcp/http] succeeded!
GET / HTTP/1.0
Host: ceph.com
But a HEAD query works:
~$ n
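The same GET-hangs-but-HEAD-works behaviour can also be checked with curl (my commands, not part of the original report):
~$ curl -6 -sI http://ceph.com/ | head -1                              # HEAD over IPv6 returns a status line
~$ curl -6 -s -m 30 -o /dev/null -w '%{http_code}\n' http://ceph.com/  # plain GET times out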
On 23-09-15 13:38, Olivier Bonvalet wrote:
> Hi,
>
> for several hours now, http://ceph.com/ hasn't been replying over IPv6.
> It pings, and we can open a TCP socket, but nothing more:
>
>
> ~$ nc -w30 -v -6 ceph.com 80
> Connection to ceph.com 80 port [tcp/http] succeeded!
> GET / HTTP
Based on the book 'Learning Ceph'
(https://www.packtpub.com/application-development/learning-ceph),
chapter on performance tuning, we swapped the values of osd_recovery_op_priority
and osd_client_op_priority to 60 and 40.
"... osd recovery op priority: This is
the priority set for recovery operati
On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber
wrote:
> "... osd recovery op priority: This is
> the priority set for recovery operation. Lower the number, higher the
> recovery priority.
> Higher recovery priority might cause performance degradation until recovery
> completes. "
>
> So w
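For what it's worth, my understanding of the defaults in this era is that higher values mean higher priority, with osd_client_op_priority = 63 and osd_recovery_op_priority = 10 out of the box; they can be put back at runtime with something like:
$ ceph tell osd.* injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 10'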
On Wednesday, 23 September 2015 at 13:41 +0200, Wido den Hollander wrote:
> Hmm, that is weird. It works for me here from the Netherlands via
> IPv6:
You're right, I checked from other providers and it works.
So, a problem between Free (France) and Dreamhost?
>>> Dan van der Ster wrote on Wednesday, 23 September 2015 at 14:04:
> On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber
> wrote:
>> "... osd recovery op priority: This is
>> the priority set for recovery operation. Lower the number, higher the
>> recovery priority.
>> Higher recovery pri
On Wed, 23 Sep 2015, Alexander Yang wrote:
> Hello,
>         We use Ceph + OpenStack in our private cloud. In our cluster we have
> 5 mons and 800 OSDs, and the capacity is about 1 PB. We run about 700 VMs and
> 1100 volumes;
> recently we increased our pg_num, and now the cluster has about 7
Hi,
Can I set the size on CephFS? When I mount the fs on the clients I see that
the partition size is the whole cluster storage...
Thanks,
Dan
Yes, you can set a quota on any directory, although it's only
supported with the userspace client (i.e. ceph-fuse):
http://docs.ceph.com/docs/master/cephfs/quota/
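A minimal example of setting and reading a quota via extended attributes, per that doc (mount point and path are placeholders):
$ setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/somedir   # ~100 GB limit
$ getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir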
John
On Wed, Sep 23, 2015 at 1:50 PM, Dan Nica wrote:
> Hi,
>
>
>
> Can I set the size on cephfs ? when I mount the fs on the clients
It looks like the issue you are experiencing was fixed in the Infernalis/master
branches [1]. I've opened a new tracker ticket to backport the fix to Hammer
[2].
--
Jason Dillaman
[1]
https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e
[2] http://tracker.ceph.com
Hello Alexander,
One other point on your email: you indicate you want each OSD to have
~100 PGs, but depending on your pool size, it seems you may have forgotten
about the additional PGs associated with replication itself.
Assuming 3x replication in your environment:
70,000 * 3
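Spelling that arithmetic out (the per-OSD figure is mine, using the 800 OSDs mentioned earlier in the thread):
70,000 PGs * 3 replicas = 210,000 PG copies
210,000 / 800 OSDs ≈ 262 PG copies per OSD, well above the ~100 target.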
OK, here is the update on the saga...
I traced some more of the blocked I/Os and it seems that communication
between two of the hosts was worse than between the others. I did a two-way
ping flood between the two hosts using the max packet size (1500). After 1.5M
packets,
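For the curious, a two-way flood with full-size frames can be run like this on each host (1472 bytes of payload + 28 bytes of headers = 1500; the hostname is a placeholder):
$ sudo ping -f -s 1472 -M do other-host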
FWIW, we've got some 40GbE Intel cards in the community performance
cluster on a Mellanox 40GbE switch that appear (knock on wood) to be
running fine with 3.10.0-229.7.2.el7.x86_64. We did get feedback from
Intel that older drivers might cause problems though.
Here's ifconfig from one of the
We were only able to get ~17Gb out of the XL710 (heavily tweaked)
until we went to the 4.x kernel, where we got ~36Gb (no tweaking). It
seems that there were some major reworks in the network handling in
the kernel to efficiently handle that network r
We have a ton of memory on our RGW servers, 96GB.
Can someone explain how the RGW LRU cache functions? Is it worth
bumping 'rgw cache lru size' to a huge number?
Our gateway seems to be using only about 1 GB of memory with the default setting.
Also currently still using apache/fastcgi due to t
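For reference, the knob lives in the gateway's ceph.conf section; the value below is only an illustration, and as far as I understand the cache holds metadata entries rather than object data, which would explain the low memory usage:
[client.radosgw.gateway]
    rgw cache enabled = true
    rgw cache lru size = 100000    # default is 10000 entries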
Hi all,
I'm unable to map a block device for an image that was created with the
exclusive-lock feature:
$ sudo rbd create foo --size 4096 --image-features=4 --image-format=2
$ sudo rbd map foo
rbd: sysfs write failed
rbd: map failed: (6) No such device or address
How do I map the image? I've tried
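One workaround, assuming the kernel rbd client simply doesn't support the exclusive-lock feature yet: create the image with only the layering feature, or strip the feature from the existing image if your rbd binary has "feature disable":
$ sudo rbd create foo --size 4096 --image-format=2 --image-features=1   # layering only
$ sudo rbd feature disable foo exclusive-lock                           # alternative, if available
$ sudo rbd map foo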
Hi all,
I have a basic question about how Ceph stores individual objects.
Say I have a pool with a replica size of 3 and I upload a 1 GB file to this
pool. It appears as if this 1 GB file gets placed into 3 PGs on 3 OSDs, simple
enough?
Are individual objects never split up? What if I want to st
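You can see the placement for yourself with the osd map subcommand (pool and object names are made up):
$ rados -p mypool put bigobject ./file-1G
$ ceph osd map mypool bigobject     # prints the PG and the acting set of OSDs holding the replicas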
Apologies for the delay!
au.ceph.com has been updated accordingly.
Regards,
Matthew.
On 22/09/2015 00:03, Wido den Hollander wrote:
Hi,
Since the security notice regarding ceph.com, the mirroring system has been broken.
This meant that eu.ceph.com didn't serve new packages, since the whole
download syste
OK, so I have found this:
"The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph
Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data
over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to
the Ceph Storage Cluster via librados
thanks
2015-09-21 17:29 GMT+08:00 John Spray :
> I'm assuming you mean from the server: you can list the clients of an
> MDS by SSHing to the server where it's running and doing "ceph daemon
> mds.<name> session ls". This has been in releases since Giant, IIRC.
>
> Cheers,
> John
>
> On Mon, Sep 21, 20
I cannot connect to my mon.0: probe timeout!!
ceph version: 0.80.7
I have just one mon server -> 10.123.5.29:6789;
why does it probe for other mons?
ceph.conf
///
[global]
auth service required = cephx
filestore xattr use omap = true
auth client required = cephx
au
[root@10_123_5_29 /var/log/ceph]# ceph --admin-daemon
/var/run/ceph/ceph-mon.0.asok mon_status
{ "name": "0",
"rank": -1,
"state": "probing",
"election_epoch": 0,
"quorum": [],
"outside_quorum": [],
"extra_probe_peers": [
"10.123.5.29:6789\/0"],
"sync_provider": [],
"monmap"
I found that this problem was caused by the mon's name being just a bare number, which is not allowed.
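In other words, give the monitor an alphabetic name; a minimal ceph.conf sketch with values assumed from the log above:
[mon.a]
    host = 10_123_5_29
    mon addr = 10.123.5.29:6789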
2015-09-24 10:22 GMT+08:00 黑铁柱 :
> [root@10_123_5_29 /var/log/ceph]# ceph --admin-daemon
> /var/run/ceph/ceph-mon.0.asok mon_status
> { "name": "0",
> "rank": -1,
> "state": "probing",
> "election_epoch": 0,
> "quorum": [],
> "outside_quorum": [],
> "extra
If you use RADOS gateway, RBD or CephFS, then you don't need to worry
about striping. If you write your own application that uses librados,
then you have to worry about it. I understand that there is a
radosstriper library that should help with that.
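The rados CLI exposes that library through its --striper flag, e.g. (pool and object names made up):
$ rados -p mypool --striper put bigobj ./bigfile    # written via libradosstriper, split across multiple objects
$ rados -p mypool ls                                # lists the individual striped chunks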