But FreeNAS is based on FreeBSD.
On Sun, Dec 18, 2016 at 00:40, ZHONG wrote:
> Thank you for your reply.
>
> On Dec 17, 2016, at 22:21, Jake Young wrote:
>
> FreeNAS running in KVM Linux hypervisor
>
>
> Regards,
> Corentin BONNETON
>
> On Nov 21, 2016, at 03:18, Bruno Silva wrote:
>
> I killed the structure
>
> On Sun, Nov 20, 2016 at 20:37, Corentin Bonneton wrote:
>
> Hello,
>
> Please send the output of this command:
>
> ceph pg 0.38 query
>
>
> --
Thanks for the attention. But the query returned
"down_osds_we_would_probe" and "peering_blocked_by" empty.
On Sun, Nov 20, 2016 at 23:18, Bruno Silva wrote:
> I killed the structure
>
> On Sun, Nov 20, 2016 at 20:37, Corentin Bonneton wrote:
I killed the structure
On Sun, Nov 20, 2016 at 20:37, Corentin Bonneton wrote:
> Hello,
>
> Please send the output of this command:
>
> ceph pg 0.38 query
>
>
> --
> Regards,
> Corentin BONNETON
>
> On Nov 19, 2016, at 23:54, Bruno Silva wrote:
>
> I d
I have a lot of pgs stuck and down+incomplete or incomplete, but the pg query
doesn't show where the failure is.
ceph health detail
HEALTH_WARN clock skew detected on mon.3; 3 pgs down; 6 pgs incomplete; 6
pgs stuck inactive; 6 pgs stuck unclean; 17 requests are blocked > 32 sec;
3 osds have slow requests
I don't know what I can do to solve this.
I tried force-creating the pg.
I tried deactivating the osd.
I added new disks.
And nothing changed this scenario.
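For reference, a rough sketch of what those attempts typically look like on the
command line, assuming a Hammer-era cluster with sysvinit; the pg id 0.3 and
osd id 5 are only placeholders taken from elsewhere in this thread:

# recreate a stuck pg as empty (this gives up whatever data the pg held)
ceph pg force_create_pg 0.3

# mark a suspect osd out of the data distribution, then stop its daemon
# (the stop command is run on the host that carries osd.5)
ceph osd out 5
sudo service ceph stop osd.5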
# ceph health detail
HEALTH_WARN 1 pgs down; 6 pgs incomplete; 6 pgs stuck inactive; 70 pgs
stuck unclean; 7 requests are blocked > 32 sec; 3 osds have slow requests
And finally it works.
Thanks.
Now I need to look at other errors. My cluster is very problematic.
On Sat, Nov 19, 2016 at 19:12, Bruno Silva wrote:
> I did that and it didn't work; in the end I put an osd with id 5 into production.
>
>
> On Sat, Nov 19, 2016 at 17:46, Paweł Sad
> d. If not, you can
> also use 'ceph osd lost ID', but an OSD with that ID must exist in the crushmap
> (and this is probably not the case here).
>
> On 19.11.2016 13:46, Bruno Silva wrote:
> > Version: Hammer
> > On my cluster a pg is saying:
> >
Version: Hammer
On my cluster a pg is saying:
"down_osds_we_would_probe": [
5
],
But this osd was removed. How can I solve this?
Reading the ceph-users mailing list, they say this could be the reason my
cluster is stopped.
How can I solve this?
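The approach usually suggested for this situation is to make the removed id
exist again and then declare it lost, roughly like the sketch below. The id 5
and host=node1 are placeholders (node1 stands in for an existing host bucket),
and marking an osd lost permanently gives up any data that only lived on it:

# allocate the lowest free osd id and print it; this only returns 5
# if 5 is currently the lowest free id
ceph osd create

# give the id a zero-weight crush entry so it is present in the crushmap
ceph osd crush add osd.5 0 host=node1

# tell the cluster the data on osd.5 is gone for good
ceph osd lost 5 --yes-i-really-mean-it

# the pg should then stop waiting to probe osd.5
ceph pg 0.3 query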
So, I did it now. And removed another one.
ceph health detail
HEALTH_WARN 1 pgs down; 6 pgs incomplete; 6 pgs stuck inactive; 6 pgs stuck
unclean; 3 requests are blocked > 32 sec; 2 osds have slow requests
pg 0.3 is stuck inactive for 249715.738300, current state incomplete, last
acting [1,4,6]
pg
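When pgs stay incomplete like this, the next step is usually to query one of
them and look at the recovery_state section for the fields mentioned earlier in
the thread, for example (pg 0.3 is taken from the output above):

ceph pg 0.3 query | grep -A 5 '"down_osds_we_would_probe"'
ceph pg 0.3 query | grep -A 5 '"peering_blocked_by"'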
he cluster
> pain. Post your crush map and the experts here may be able to advise,
> but with a cluster of this size you may have issues getting it back to
> a healthy state if 1 osd is causing problems...
> On Fri, Nov 18, 2016 at 10:51 PM, Bruno Sil
I have a Ceph cluster with 5 nodes. For some reason the sync went down and now
I don't know what I can do to restore it.
# ceph -s
cluster 338bc0a5-c2f7-4c0a-9b35-25c7afee50c6
health HEALTH_WARN
1 pgs down
6 pgs incomplete
6 pgs stuck inactive
6 p
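For a cluster stuck like this, the usual first round of information to gather
(and to post when asking for help) is something like the following; the
crushmap.bin and crushmap.txt file names are just placeholders:

ceph health detail
ceph osd tree
ceph pg dump_stuck inactive

# decompile the crush map, as requested further up in the thread
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt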