Indeed that was the problem!
In case anyone else ever runs into the same situation, please keep in
mind that no matter what you pass to the "ceph-deploy" command, at some
point it will use the output of "hostname -s" and try to connect to
that monitor to gather data.
If you have changed
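For anyone checking the same thing, a quick way to see whether the short
hostname still matches the monitor's ID and its admin socket (paths below
are the defaults, adjust if yours differ):

# hostname -s
# ceph mon dump          # lists the mon IDs the cluster actually knows
# ls /var/run/ceph/      # admin sockets are named ceph-mon.<mon-id>.asok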
Could it be a problem that I have changed the hostname after the mon
creation?
What I mean is that
# hostname -s
ovhctrl
# ceph daemon mon.$(hostname -s) quorum_status
admin_socket: exception getting command descriptions: [Errno 2] No such
file or directory
But if I do it as "nefelus-cont
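(In case it helps anyone reading along: when the mon ID no longer matches
"hostname -s", the daemon can still be reached under its original ID, or
directly through its socket file; the ID below is only a placeholder.)

# ceph daemon mon.<original-mon-id> quorum_status
# ceph --admin-daemon /var/run/ceph/ceph-mon.<original-mon-id>.asok quorum_status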
Hi,
it looks like you haven't run the ceph-deploy command with the same user name,
and maybe not from the same working directory. This could explain your problem.
Make sure the other daemons have a mgr cap authorisation. You can find details
on this ML about MGR caps being incorrect for OSDs an
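For reference, the existing caps can be checked and, if the mgr cap is
missing, adjusted roughly like this (osd.0 is just an example ID; the
profile-based caps below are the usual Luminous defaults):

# ceph auth get osd.0
# ceph auth caps osd.0 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *'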
I am still trying to figure out what the problem is here...
Initially the cluster was updated ok...
# ceph health detail
HEALTH_WARN noout flag(s) set; all OSDs are running luminous or later
but require_osd_release < luminous; no active mgr
noout flag(s) set
all OSDs are running luminous or later
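For what it's worth, the first two warnings usually clear with the commands
below (only set require-osd-release once every OSD really is on Luminous);
the "no active mgr" part needs a ceph-mgr daemon to be deployed:

# ceph osd require-osd-release luminous
# ceph osd unset noout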
OK...now this is getting crazy...
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs:
Where has everything gone??
What's happening here?
G.
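(A likely explanation, if I am not mistaken: in Luminous the usage and PG
statistics are reported through ceph-mgr, so with no active mgr they show
up as zeros even though the data is still there. Something like the
following should confirm whether a mgr is running:)

# ceph -s | grep mgr
# ceph mgr dump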
Indeed John,
you are right! I have updated "ceph-deploy" (which was installed via
"pip", which is why it wasn't updated with the rest of the ceph packages)
but now it complains that keys are missing
$ ceph-deploy mgr create controller
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/user/.
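In case it is useful, a newer ceph-deploy can usually recreate the missing
bootstrap keyrings (including bootstrap-mgr) by gathering them again from a
monitor; I simply reuse "controller" from the command above, so replace it
with one of your monitor hosts if that is not where a mon runs:

$ ceph-deploy gatherkeys controller
$ ceph-deploy mgr create controller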
On Wed, Feb 28, 2018 at 5:21 PM, Georgios Dimitrakakis wrote:
> All,
>
> I have updated my test ceph cluster from Jewel (10.2.10) to Luminous
> (12.2.4) using CentOS packages.
>
> I have updated all packages, restarted all services in the proper order
> but I get a warning that the Manager Daemo
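For completeness, if ceph-deploy keeps failing, the mgr can also be brought
up by hand; a rough sketch, reusing "controller" from the ceph-deploy command
quoted earlier as the mgr ID (adjust names and ownership to your setup):

# mkdir -p /var/lib/ceph/mgr/ceph-controller
# ceph auth get-or-create mgr.controller mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-controller/keyring
# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-controller
# systemctl start ceph-mgr@controller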