Hi Kevin

Looks like the problem comes from the mgr user itself then. In Luminous the
usage and PG statistics shown by ceph -s are reported by ceph-mgr, so a broken
mgr user or key can leave everything at 0 kB and "unknown" even though the
cluster itself may be fine.

Can you get me the output of 
- ceph auth list 
- cat /etc/ceph/ceph.conf on your mgr node
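
For reference, on a Luminous cluster deployed with ceph-deploy, the relevant
entries in ceph auth list usually look roughly like this (keys redacted;
mgr.controller3 is taken from your ceph -s output, and the caps shown are the
usual defaults, so treat this as a sketch rather than the exact expected
output):

  client.admin
          key: <redacted>
          caps: [mds] allow *
          caps: [mgr] allow *
          caps: [mon] allow *
          caps: [osd] allow *
  mgr.controller3
          key: <redacted>
          caps: [mds] allow *
          caps: [mon] allow profile mgr
          caps: [osd] allow *

If the mgr.controller3 entry is missing, or is missing its mon/osd caps, that
would match the mgr not being able to pull OSD stats.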

Regards

JC

While moving. Excuse unintended typos.

> On Dec 20, 2017, at 18:40, kevin parrikar <kevin.parker...@gmail.com> wrote:
> 
> Thanks JC,
> I tried 
> ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr 
> 'allow *'
> 
> but the status is still the same; also, mgr.log is being flooded with the
> errors below.
> 
> 2017-12-21 02:39:10.622834 7fb40a22b700  0 Cannot get stat of OSD 140
> 2017-12-21 02:39:10.622835 7fb40a22b700  0 Cannot get stat of OSD 141
> Not sure what's wrong in my setup.
> 
> Regards,
> Kevin
> 
> 
>> On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez <jelo...@redhat.com> 
>> wrote:
>> Hi,
>> 
>> make sure the client.admin user has an MGR cap using ceph auth list. At some
>> point there was a glitch in the upgrade process that did not add the MGR cap
>> to the client.admin user.
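>> 
>> If the cap turns out to be missing, it can be added back with something along
>> these lines (note that ceph auth caps replaces the whole cap set for the user,
>> so every cap you want to keep has to be listed again):
>> 
>>     ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'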
>> 
>> JC
>> 
>> 
>>> On Dec 20, 2017, at 10:02, kevin parrikar <kevin.parker...@gmail.com> wrote:
>>> 
>>> Hi all,
>>> I have upgraded the cluster from Hammer to Jewel, and then to Luminous.
>>> 
>>> I am able to upload/download Glance images, but ceph -s shows 0 kB used and
>>> 0 kB available, and probably because of that cinder create is failing.
>>> 
>>> 
>>> ceph -s
>>>   cluster:
>>>     id:     06c5c906-fc43-499f-8a6f-6c8e21807acf
>>>     health: HEALTH_WARN
>>>             Reduced data availability: 6176 pgs inactive
>>>             Degraded data redundancy: 6176 pgs unclean
>>> 
>>>   services:
>>>     mon: 3 daemons, quorum controller3,controller2,controller1
>>>     mgr: controller3(active)
>>>     osd: 71 osds: 71 up, 71 in
>>>     rgw: 1 daemon active
>>> 
>>>   data:
>>>     pools:   4 pools, 6176 pgs
>>>     objects: 0 objects, 0 bytes
>>>     usage:   0 kB used, 0 kB / 0 kB avail
>>>     pgs:     100.000% pgs unknown
>>>              6176 unknown
>>> 
>>> 
>>> I deployed ceph-mgr using ceph-deploy gather-keys && ceph-deploy mgr create.
>>> It was successful, but for some reason ceph -s is not showing the correct
>>> values.
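>>> To be precise, the deploy step was of the form below, with <mon-node> and
>>> <mgr-node> standing in for the actual hostnames:
>>> 
>>>     ceph-deploy gather-keys <mon-node>
>>>     ceph-deploy mgr create <mgr-node>
>>> 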
>>> Can someone help me here, please?
>>> 
>>> Regards,
>>> Kevin
>> 
> 
