On 02/16/2014 09:22 PM, Sage Weil wrote:
Hi Wido,
On Sun, 16 Feb 2014, Wido den Hollander wrote:
On 02/16/2014 06:49 PM, Gregory Farnum wrote:
Did you maybe upgrade that box to v0.67.6? This sounds like one of the
bugs Sage mentioned in it.
No, I checked it again. Version is: ceph version 0.
Hi,
Can I see your ceph.conf?
I suspect that [client.cinder] and [client.glance] sections are missing.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire -
Hi All,
I've been looking, but haven't been able to find any detailed documentation
about the journal usage on OSDs. Does anyone have any detailed docs they could
share? My initial questions are:
Is the journal always write-only? (except under recovery)
I'm using BTRFS, in the default layout,
Could someone help me with the following error when I try to add
keyring entries:
# ceph -k /etc/ceph/ceph.client.admin.keyring auth add
client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
Error EINVAL: entity client.radosgw.gateway exists but key does not
match
#
Best,
G.
Hi,
that's by design: the monitors always listen on the public network if one
is defined. If you want everything on the cluster network, just don't
specify a separate public/cluster network. But that's all documented in
great detail at
http://ceph.com/docs/master/rados/configuratio
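A minimal sketch of such a split in ceph.conf (the subnets are illustrative
assumptions, not values from this thread):

```ini
[global]
# monitors and client traffic use this network
public network = 192.168.0.0/24
# OSD replication and heartbeat traffic use this one
cluster network = 10.0.0.0/24
```

Leaving `cluster network` unset makes all traffic use the public network.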
I managed to solve my problem by deleting the key from the list and
re-adding it!
Best,
G.
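The delete-and-re-add fix can be sketched as the following transcript (it
assumes a live cluster and reuses the entity and keyring path from the
earlier error, so it is not runnable standalone):

```shell
# Remove the stale entry whose stored key no longer matches...
ceph auth del client.radosgw.gateway
# ...then import the keyring file again.
ceph auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
```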
On Mon, 17 Feb 2014 10:46:36 +0200, Georgios Dimitrakakis wrote:
Could someone help me with the following error when I try to add
keyring entries:
# ceph -k /etc/ceph/ceph.client.admin.keyring auth add
Could someone check this: http://pastebin.com/DsCh5YPm
and let me know what I am doing wrong?
Best,
G.
On Sat, 15 Feb 2014 20:27:16 +0200, Georgios Dimitrakakis wrote:
1) ceph -s is working as expected
# ceph -s
cluster c465bdb2-e0a5-49c8-8305-efb4234ac88a
health HEALTH_OK
mon
Dear sender, If you wish I read and respond to this e-mail for sure,
please, build subject like
KUDRYAVTSEV/Who wrote/Subject.
for example,
KUDRYAVTSEV/Bitworks/Some subject there...
Best wishes, Ivan Kudryavtsev
__
Hi all,
I just noticed that eu.ceph.com had some stale data since rsync wasn't
running with the --delete option.
I've just added it to the sync script and it's syncing right now,
shouldn't take that much time and should finish within the hour.
Btw, nice to see that ceph.com now also has a A
On 02/16/2014 05:18 PM, Sage Weil wrote:
Good catch!
It sounds like what is needed here is for the deb and rpm packages to add
/var/lib/ceph to the PRUNEPATHS in /etc/updatedb.conf. Unfortunately
there isn't a /etc/updatedb.conf.d type file, so that promises to be
annoying.
Has anyone done this?
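The edit itself is a one-liner; a hedged sketch, shown against a stand-in
copy of the file (run it against /etc/updatedb.conf for real, and note the
stock PRUNEPATHS contents here are an assumption):

```shell
# Add /var/lib/ceph to PRUNEPATHS so updatedb/locate skips OSD data.
CONF=/tmp/updatedb.conf
echo 'PRUNEPATHS="/tmp /var/spool"' > "$CONF"   # stand-in for the stock file
# Append only if it is not already listed (idempotent).
grep -q '/var/lib/ceph' "$CONF" || \
  sed -i 's|^PRUNEPATHS="\([^"]*\)"|PRUNEPATHS="\1 /var/lib/ceph"|' "$CONF"
cat "$CONF"
```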
Hi,
I am a new user of Ceph. I have installed a three-node cluster following the Ceph
documentation. I have added OSDs and the initial monitor.
But while adding additional monitors, I am receiving the error shown below.
user1@cephadmin:~/my-cluster$ ceph-deploy
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Sebastian, Jean;
This is what my ceph.conf looks like. It was auto-generated using ceph-deploy.
[global]
fsid = afa13fcd-f662-4778-8389-85047645d034
mon_initial_members = ceph-node1
mon_host = 10.0.1.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filesto
Hi,
If cinder-volume fails to connect but using the admin keyring works, it means
that Cinder is not configured properly.
Please also try to add the following:
[client.cinder]
keyring =
Same for Glance.
Btw: ceph.conf doesn't need to be owned by Cinder; just chmod it +r and keep root
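A minimal sketch of those sections (the keyring paths follow Ceph's usual
/etc/ceph/ceph.client.<name>.keyring convention and are assumptions, not
values from this thread):

```ini
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring
```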
I posted this to ceph-devel-owner before seeing that this is the correct place
to post.
My company is trying to evaluate virtualized hdfs clusters using ceph as a
drop-in replacement for staging and development
following http://ceph.com/docs/master/cephfs/hadoop/. We deploy clusters
with Ambari
Hi Kesten,
It's a little difficult to tell what the source of the problem is, but
looking at the gist you referenced, I don't see anything that would
indicate that Ceph is causing the issue. For instance,
hadoop-mapred-tasktracker-xxx-yyy-hdfs01.log looks like Hadoop daemons
are having problems co
I had some issues with OSD flapping after 2 days of recovery. It
appears to be related to swapping, even though I have plenty of RAM for
the number of OSDs I have. The cluster was completely unusable, and I
ended up rebooting all the nodes. It's been great ever since, but I'm
assuming it wil
On Mon, 17 Feb 2014 11:24:42 -0800 Craig Lewis wrote:
[kswapd going berserk]
>
> Any idea what happened? I'm assuming it will happen again if recovery
> takes long enough.
>
You're running into a well-known, but poorly rectified (if at all), kernel
problem; Ceph has little to do with it
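Kernel-side mitigations for this kind of kswapd thrashing usually take the
form of a sysctl fragment like the following (the values are illustrative
assumptions, not tuned recommendations from this thread):

```ini
# /etc/sysctl.conf fragment: prefer reclaiming page cache over
# swapping out OSD processes
vm.swappiness = 1
# keep a larger free-memory reserve so kswapd has headroom under
# heavy recovery I/O
vm.min_free_kbytes = 262144
```

Apply with `sysctl -p` and watch swap activity during the next recovery.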
Hi all,
I've been playing with Ceph across high latency high speed links, with a range
of results.
In general, Ceph MDS, monitors, and OSDs are solid across thousand kilometre
network links. Jitter is low, latency is predictable, and capacity of the
network is well beyond what the servers can