Hi Greg,
We use EC 4+1 on a 5-node cluster in production deployments with filestore,
and it does recovery and peering when one OSD goes down. After a few minutes,
another OSD on the node where the faulty OSD lives takes over its PGs
temporarily and all PGs go to active+clean. Cluster also do
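For reference, a 4+1 erasure-code profile like that can be created along
these lines (the profile/pool names, PG count and failure domain below are
only examples, not our production values):

    # create an erasure-code profile with k=4 data and m=1 coding chunks
    # (on Luminous and later the option is spelled crush-failure-domain=host)
    ceph osd erasure-code-profile set ec-4-1 k=4 m=1 ruleset-failure-domain=host
    # create a pool that uses the profile; 128 PGs is only an example
    ceph osd pool create ecpool 128 128 erasure ec-4-1
    # verify what the profile ended up containing
    ceph osd erasure-code-profile get ec-4-1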
I guess this is because you are always using the same root tree.
On 23 January 2017 10:50:16 CET, Sascha Spreitzer wrote:
>Hi all
>
>I noticed that Ceph rebalances the whole CRUSH map when I add OSDs
>that should not affect any of my CRUSH rulesets.
>
>Is there a way to add OSDs to the CRUSH
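If the idea is that the new OSDs should only serve rules that select from a
different root, one way to keep the existing rulesets away from them is to
add the new disks under their own root (the bucket and OSD names below are
made up for the example):

    # create a separate root and a host bucket underneath it
    ceph osd crush add-bucket ssd-root root
    ceph osd crush add-bucket node6-ssd host
    ceph osd crush move node6-ssd root=ssd-root
    # place the new OSD under that root with its weight
    ceph osd crush add osd.42 1.0 host=node6-ssd
    # rules built from ssd-root will then only map to these OSDs
    ceph osd crush rule create-simple ssd-rule ssd-root host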
Hi all,
I'm a newcomer to Ceph. I deployed two Ceph clusters, one of which is used
as a mirror cluster. When I created an image, I found that the primary
image is stuck in 'up+stopped' status and the non-primary image's status is
'up+syncing'. I'm really not sure if this is an OK status and I rea
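For anyone checking the same thing, the mirroring state of an image and of
the whole pool can be queried on either cluster roughly like this (the pool
and image names are only examples):

    # status of a single mirrored image
    rbd mirror image status rbd/myimage
    # summary for the whole pool, including rbd-mirror daemon health
    rbd mirror pool status rbd --verbose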
I was just testing an upgrade of some monitors in a test cluster from
hammer (0.94.7) to jewel (10.2.5). After upgrading each of the first two
monitors, I stopped and restarted a single OSD to cause changes in the
maps. The same error messages showed up in ceph -w. I haven't dug into it
much but just
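For reference, a few checks that are handy around each mon restart during a
rolling upgrade (the mon names here are placeholders):

    # ask each monitor which version it is actually running
    ceph tell mon.a version
    ceph tell mon.b version
    ceph tell mon.c version
    # confirm quorum is healthy and which mons are in it
    ceph quorum_status --format json-pretty
    # watch cluster log messages while restarting an OSD
    ceph -w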
Try a newer kernel, like 4.8.
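On Xenial, assuming the Ubuntu HWE kernel stack is acceptable in your
environment, something like this should get you onto a 4.8-series kernel:

    # install the hardware-enablement kernel meta-package on 16.04
    sudo apt-get update
    sudo apt-get install linux-generic-hwe-16.04
    # reboot into the new kernel and confirm the running version
    sudo reboot
    uname -r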
2017-01-24 0:37 GMT+08:00 Matthew Vernon :
> Hi,
>
> We have a 9-node ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu
> Xenial). We're seeing both machines freezing (nothing in logs on the
> machine, which is entirely unresponsive to anything except the power
>
On 23/01/17 16:40, Tu Holmes wrote:
> While I know this seems a silly question, are your monitoring nodes
> spec'd the same?
Oh, sorry, I should have said that. All 9 machines have OSDs on them (1 per
disk); additionally, 3 of the nodes are also mons and 3 (a different 3)
are rgws.
One of the freezing
While I know this seems a silly question, are your monitoring nodes spec'd
the same?
//Tu
On Mon, Jan 23, 2017 at 8:38 AM Matthew Vernon wrote:
> Hi,
>
> We have a 9-node ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu
> Xenial). We're seeing both machines freezing (nothing in logs on the
Hi,
We have a 9-node ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu
Xenial). We're seeing machines both freezing (nothing in the logs on the
machine, which is entirely unresponsive to anything except the power
button) and suffering soft lockups.
Has anyone seen similar? Googling hasn't found a
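One rough way to get more information out of a box that soft-locks is to
make the kernel react to the lockup and to ship console output off the
machine; a sketch (the sysctls are standard, the netconsole IP/MAC/interface
values are placeholders):

    # panic on soft lockups / hung tasks so kdump (if configured) grabs a vmcore
    sudo sysctl kernel.softlockup_panic=1
    sudo sysctl kernel.hung_task_panic=1
    # forward kernel messages to a remote host over UDP via netconsole
    sudo modprobe netconsole netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55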
Hello Wido and Shinobu,
On 20/01/2017 19:54, Shinobu Kinjo wrote:
> What does `ceph -s` say?
HEALTH_OK; this was not the cause, thanks though.
> On Sat, Jan 21, 2017 at 3:39 AM, Wido den Hollander wrote:
>>
>>> On 20 January 2017 at 17:17, Kai Storbeck wrote:
>>>
>>> My graphs of several coun
Hi all
I noticed that Ceph rebalances the whole CRUSH map when I add OSDs
that should not affect any of my CRUSH rulesets.
Is there a way to add OSDs to the CRUSH map without having the cluster
change all the OSD mappings (rebalancing)?
Or am I doing something terribly wrong?
How does this
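One common pattern for bringing in new OSDs without an immediate, cluster-wide
data move is to add them at CRUSH weight 0 and ramp them up deliberately; a
sketch (osd.42, node6 and the weights are just examples):

    # pause data movement while several OSDs are added at once
    ceph osd set norebalance
    # add the new OSD to the CRUSH map with weight 0 so no PGs map to it yet
    ceph osd crush add osd.42 0 host=node6
    # (start the new OSD daemons, then allow rebalancing again)
    ceph osd unset norebalance
    # raise the weight in steps, letting the cluster settle in between
    ceph osd crush reweight osd.42 0.5
    ceph osd crush reweight osd.42 1.0

Setting 'osd crush initial weight = 0' in ceph.conf achieves the weight-0 part
automatically for OSDs that add themselves to the CRUSH map on start.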