Thanks,
I'll keep that in mind. I appreciate the assistance.
Everything looks good this morning.
cluster df3f96d8-3889-4baa-8b27-cc2839141425
health HEALTH_OK
monmap e7: 3 mons at {Monitors}
election epoch 118, quorum 0,1,2 nodeB,nodeC,nodeD
osdmap e5246: 18 osds: 18 up, 18 in
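A summary like the above comes from the standard status commands; for example, on any node with admin credentials:

# full cluster summary (health, monmap, osdmap, pgmap)
ceph -s

# just the health line, or an expanded view of each warning
ceph health
ceph health detail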
Hello,
On Thu, 1 Sep 2016 16:24:28 +0200 Ishmael Tsoaela wrote:
> I did configure the following during my initial setup:
>
> osd pool default size = 3
>
Ah yes, so not this.
(though the default "rbd" pool that's initially created tended to ignore
that parameter and would default to 3 in an
I did configure the following during my initial setup:
osd pool default size = 3
root@nodeC:/mnt/vmimages# ceph osd dump | grep "replicated size"
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 217 flags hashpspool
stripe_width 0
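For reference, the replication settings of a single pool can also be read and changed directly, rather than grepping the full dump; using the pool name "rbd" from the output above:

# current replica count and minimum replicas required for I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# change the replica count if it ever needs adjusting
ceph osd pool set rbd size 3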
Hello,
On Thu, 1 Sep 2016 14:00:53 +0200 Ishmael Tsoaela wrote:
> more questions, and I hope you don't mind:
>
>
>
> My understanding is that if I have 3 hosts with 5 osds each, and 1 host
> goes down, Ceph should not replicate to the osds that are down.
>
How could it replicate to something that is down?
more questions, and I hope you don't mind:
My understanding is that if I have 3 hosts with 5 osds each, and 1 host
goes down, Ceph should not replicate to the osds that are down.
When the host comes back up, only then will the replication commence, right?
If only 1 osd out of 5 comes up, then only data meant for that osd will be replicated?
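One way to see what actually happens in that scenario is to watch the cluster while the host is down; a minimal sketch, not specific to this cluster:

# live stream of health and recovery events
ceph -w

# which OSDs are up/down/in/out, grouped by host
ceph osd tree

# summary of PG states (active, degraded, remapped, backfilling, ...)
ceph pg stat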
Thank you again.
I will add 3 more osds today and leave the cluster untouched, maybe over the weekend.
On Thu, Sep 1, 2016 at 1:16 PM, Christian Balzer wrote:
>
> Hello,
>
> On Thu, 1 Sep 2016 11:20:33 +0200 Ishmael Tsoaela wrote:
>
>> Thanks for the response
>>
>>
>>
>> > You really will want to spend more time reading documentation and this ML,
>> > as well as using google to (re-)search things.
Hello,
On Thu, 1 Sep 2016 11:20:33 +0200 Ishmael Tsoaela wrote:
> Thanks for the response
>
>
>
> > You really will want to spend more time reading documentation and this ML,
> > as well as using google to (re-)search things.
>
>
> I did do some reading on the errors but cannot understand why they do
> not clear even after so long.
Thanks for the response
> You really will want to spend more time reading documentation and this ML,
> as well as using google to (re-)search things.
I did do some reading on the errors but cannot understand why they do
not clear even after so long.
> In your previous mail you already mention
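For digging into why PGs sit in these states, the detailed health output and the stuck-PG queries are the usual starting points; <pgid> below is a placeholder for one of the PG ids they report:

# every PG contributing to the warning, with its current state
ceph health detail

# PGs stuck unclean for longer than the default threshold
ceph pg dump_stuck unclean

# full detail on a single problematic PG
ceph pg <pgid> query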
Hello,
On Thu, 1 Sep 2016 10:18:39 +0200 Ishmael Tsoaela wrote:
> Hi All,
>
> Can someone please decipher these errors for me? After all nodes in my
> cluster rebooted on Monday, the warning has not gone away.
>
You really will want to spend more time reading documentation and this ML,
as well as using google to (re-)search things.
Hi All,
Can someone please decipher these errors for me? After all nodes in my
cluster rebooted on Monday, the warning has not gone away.
Will the warning ever clear?
cluster df3f96d8-3889-4baa-8b27-cc2839141425
health HEALTH_WARN
2 pgs backfill_toofull
532 pgs backfill
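The backfill_toofull state in particular means the backfill target OSDs are above the backfill-full threshold (osd_backfill_full_ratio, 0.85 by default in Jewel-era releases). Assuming the OSDs genuinely have headroom, one way to check utilisation and temporarily raise the threshold:

# per-OSD utilisation and weight
ceph osd df

# raise the backfill-full ratio at runtime (not persisted across restarts)
ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'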