I have a small Ceph 0.72.2 cluster built with ceph-deploy, running on
Ubuntu 12.04. The cluster is used as primary storage for my home OpenStack
sandbox.

I'm running into an issue I haven't seen before, and I've had a heck of a
time searching for similar reports, as "None" doesn't exactly make a good
keyword.


On one node, when I run any ceph command that interacts with the cluster, I
get the appropriate output, but "None" is prepended to it.

root@os2:/etc/ceph# ceph health
None
HEALTH_OK


root@os2:/etc/ceph# ceph
None
ceph>


Again, this only happens on one of the four Ceph nodes. I've verified that
conf files, keys, permissions, versions, etc. match on all nodes, and there
are no connectivity issues. In fact, the cluster is still healthy and
working great, with one exception: cinder-volume also runs on this node,
and since "None" is also getting prepended to JSON-formatted output,
cinder-volume errors out in _get_mon_addrs() when the JSON decoder chokes
on the response from ceph. (I'll probably throw a quick pre-decode band-aid
on that method to get Cinder back online until I can correct this.)

Here's my config, sans radosgw (although it hasn't changed recently):

[global]
fsid = 02a4abf4-3659-4525-bfe8-f1f5ea024030
mon_initial_members = fs1,os1,cortex,os2
mon_host = 10.10.3.8,10.10.3.10,10.10.3.7,10.10.3.20
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
public_network = 10.10.3.0/24
cluster_network = 10.10.150.0/24


I've tried everything I can think of, hoping someone here can point out
what I'm missing.

Thanks
zeb
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com