From: Anton Dmitriev [mailto:t...@enumnet.ru]
Sent: Wednesday, May 10, 2017 10:14 AM
To: Piotr Nowosielski <piotr.nowosiel...@allegrogroup.com>;
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] All OSD fails after few requests to RGW
>>> >> ... "filestore merge and
>>> >> split".
>>> >>
>>> >> Some explanation:
>>> >> The OSD, after reaching a certain number of files in the directory
>>> >> (it depends on 'filestore merge threshold' and 'filestore split
>>> >> multiple'), splits it into subdirectories.
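
(For readers of the archive: a minimal sketch of the two options as they
would appear in ceph.conf; the parameter names are the standard filestore
options, and the values below are, as far as I recall, the long-standing
defaults, shown only for illustration:)

    [osd]
    # a filestore subdirectory is split once it holds more than about
    #   filestore_split_multiple * abs(filestore_merge_threshold) * 16
    # files; with these defaults that is 2 * 10 * 16 = 320 files
    filestore split multiple = 2
    filestore merge threshold = 10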

When I created the cluster, I made a mistake in the configuration and set
the split parameter to 32 and merge to 40, so 32*40*16 = 20480 files per
folder. After that ...
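
(Plugging those values into the same formula, the misconfiguration would
look like this in ceph.conf - a sketch, not the original file:)

    [osd]
    filestore split multiple = 32
    filestore merge threshold = 40
    # split point: 32 * 40 * 16 = 20480 files per subdirectory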
>> ... not been migrated.
>> crushmap settings? Weight of OSD?
>>
>> One thing is certain - you will not find any information about the split
>> process in the logs ...
>>
>> pn

... Infrastruktury 5
Grupa Allegro sp. z o.o.
Tel: +48 512 08 55 92

-----Original Message-----
From: Anton Dmitriev [mailto:t...@enumnet.ru]
Sent: Wednesday, May 10, 2017 9:19 AM
To: Piotr Nowosielski <piotr.nowosiel...@allegrogroup.com>;
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] All OSD fails after few requests to RGW

How did you solve it? Did you set new split/merge thresholds, and manually
apply them by ceph-objectstore-tool --data-path
/var/lib/ceph/osd/ceph-${osd_num} --journal-path
/var/lib/ceph/osd/ceph-${osd_num}/journal ...?
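
(A sketch of how such an offline re-split can be applied with
ceph-objectstore-tool's apply-layout-settings operation; the pool name is
taken from the messages below, and the OSD must be stopped first - treat
this as an illustration, not the exact command used in this thread:)

    systemctl stop ceph-osd@${osd_num}
    ceph-objectstore-tool \
      --data-path /var/lib/ceph/osd/ceph-${osd_num} \
      --journal-path /var/lib/ceph/osd/ceph-${osd_num}/journal \
      --op apply-layout-settings \
      --pool default.rgw.buckets.data
    systemctl start ceph-osd@${osd_num}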
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Anton Dmitriev
Sent: Wednesday, May 10, 2017 8:14 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] All OSD fails after few requests to RGW

Hi!

I increased pg_num and pgp_num for pool default.rgw.buckets.data from
2048 to 4096, and it seems the situation became a bit better: the cluster
now dies after 20-30 PUTs, not after the first one. Could someone please
give me some recommendations on how to rescue the cluster?
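
(The increase itself would be done with the standard pool commands; a
sketch, assuming the pool named above:)

    ceph osd pool set default.rgw.buckets.data pg_num 4096
    ceph osd pool set default.rgw.buckets.data pgp_num 4096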

On 27.04.2017 09:59, Anton Dmitriev wrote:
Cluster was going well for a long time, but during the previous week OSDs
started to fail.
We use the cluster as image storage for OpenNebula with a small load, and
as object storage with a high load.
Sometimes the disks of some OSDs are utilized at 100%; iostat shows
avgqu-sz over 1000 while reading or writing ...
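
(Figures like these come from iostat's extended device statistics, e.g.:)

    # -x prints extended per-device stats, refreshed every second;
    # avgqu-sz is the average request queue length, %util the utilization
    iostat -x 1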