-----Original Message-----
From: Tregaron Bayly [mailto:tba...@bluehost.com]
Sent: Friday, July 11, 2014 1:53 PM
To: Tuite, John E.
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph mount not working anymore
"Ceph is short of Cephalopod, a class of mollusks
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Joshua McClintock
> Sent: Friday, July 11, 2014 1:44 PM
Hello Alfredo, isn't this what the 'ceph-release-1-0.el6.noarch' package is
for in my rpm -qa list? Here are the yum repo files I have in
/etc/yum.repos.d. I don't see any priorities in the ceph one, which is
where libcephfs1 comes from, I think. I tried to 'yum reinstall
ceph-release', but the fi
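A quick way to confirm which repository each installed package actually came from (a minimal sketch; the grep just trims the output to the relevant fields):

# show the source repository recorded for each installed package
yum info installed ceph libcephfs1 | grep -E '^(Name|From repo)'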
Thanks Sage! What happened prior to the upgrade was that I added an
erasure-coded pool, and all my OSDs began to crash. The EC profile didn't
seem to cause the crash, so I left it, but once I removed the pool, the
crashes stopped.
Do you guys want any of the core dumps, or is anything short
On Thu, 10 Jul 2014, Joshua McClintock wrote:
> { "rule_id": 1,
> "rule_name": "erasure-code",
> "ruleset": 1,
> "type": 3,
The presence of the erasure code CRUSH rules is what is preventing the
kernel client from mounting. Upgrade to a newer kernel (3.14 I
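If upgrading the kernel is not an option, a rough sketch of the other way out, removing the erasure-coded pool and its CRUSH rule (the rule name comes from the dump above; the pool name 'ecpool' is a placeholder for whichever pool uses ruleset 1):

# delete the erasure-coded pool first ('ecpool' is a placeholder name)
ceph --cluster us-west01 osd pool delete ecpool ecpool --yes-i-really-really-mean-it
# then drop the rule itself; per the note above it is the rule, not the
# pool, that makes the cluster require CRUSH_V2 from clients
ceph --cluster us-west01 osd crush rule rm erasure-code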
Joshua, it looks like you got Ceph from EPEL (that version has the '-2'
slapped on it). That is why you are seeing this
for ceph:
ceph-0.80.1-2.el6.x86_64
And this for others:
libcephfs1-0.80.1-0.el6.x86_64
Make sure that you do get Ceph from our repos. Newer versions of
ceph-deploy fix this b
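For anyone hitting the same mix, a rough sketch of pinning the ceph.com repo above EPEL by hand (this is approximately what newer ceph-deploy sets up for you; the sed assumes the stock /etc/yum.repos.d/ceph.repo layout from the ceph-release package):

# make yum honor per-repo priorities
yum install -y yum-plugin-priorities
# give the [ceph] repo a higher priority (lower number) than EPEL
sed -i '/^\[ceph\]/a priority=1' /etc/yum.repos.d/ceph.repo
# EPEL's 0.80.1-2 sorts newer than ceph.com's 0.80.1-0, so a downgrade
# may be needed to actually switch builds
yum clean all && yum downgrade ceph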
[root@chefwks01 ~]# ceph --cluster us-west01 osd crush dump
{ "devices": [
{ "id": 0,
"name": "osd.0"},
{ "id": 1,
"name": "osd.1"},
{ "id": 2,
"name": "osd.2"},
{ "id": 3,
"name": "osd.3"},
{ "id": 4,
That is CEPH_FEATURE_CRUSH_V2. Can you attach the output of
ceph osd crush dump
Thanks!
sage
On Thu, 10 Jul 2014, Joshua McClintock wrote:
> Yes, I changed some of the mount options on my osds (xfs mount options), but
> I think this may be the answer from dmesg; sorta looks like a version
> mismatch:
Yes, I changed some of the mount options on my osds (xfs mount options), but
I think this may be the answer from dmesg; sorta looks like a version
mismatch:
libceph: loaded (mon/osd proto 15/24)
ceph: loaded (mds proto 32)
libceph: mon0 192.168.0.14:6789 feature set mismatch, my 4a042aca <
server
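The two hex values on that dmesg line are feature bitmasks, and the useful part is which bits the server advertises that the client kernel lacks, i.e. (server & ~client). The server's mask is cut off in the excerpt above, so the value below is only an assumed example:

# client mask 0x4a042aca is from the dmesg line above; the server mask
# 0x104a042aca is an assumption for illustration, since it is truncated
printf 'missing: %x\n' $(( 0x104a042aca & ~0x4a042aca ))
# prints: missing: 1000000000  (bit 36, CEPH_FEATURE_CRUSH_V2)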
Have you made any other changes after the upgrade? (Like adjusting
tunables, or creating EC pools?)
See if there is anything in 'dmesg' output.
sage
On Thu, 10 Jul 2014, Joshua McClintock wrote:
> I upgraded my cluster to 0.80.1-2 (CentOS). My mount command just freezes
> and outputs an error
I upgraded my cluster to 0.80.1-2 (CentOS). My mount command just freezes
and outputs an error:
mount.ceph 192.168.0.14,192.168.0.15,192.168.0.16:/ /us-west01 -o
name=chefwks01,secret=`ceph-authtool -p -n client.admin
/etc/ceph/us-west01.client.admin.keyring`
mount error 5 = Input/output error
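As an aside, the same mount can be done without exposing the key in the process list by using the standard secretfile= option instead of an inline secret (a sketch; the secret file path is illustrative):

# write the bare key to a root-only file once
ceph-authtool -p -n client.admin /etc/ceph/us-west01.client.admin.keyring > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
# then mount referencing the file
mount -t ceph 192.168.0.14,192.168.0.15,192.168.0.16:/ /us-west01 -o name=chefwks01,secretfile=/etc/ceph/admin.secret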