…this part of Ceph a little better.
Kind Regards,
Maarten van Ingen
…'s the odd part.
Kind Regards,
Maarten van Ingen
…it's done. We are keeping osd_max_backfills at 1 for now; setting it higher would of course make it finish much faster, but it would also mean a bigger impact on cluster performance.
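For reference, a minimal sketch of how that throttle can be checked and adjusted at runtime (the daemon id and values below are only examples, not a recommendation):

    # show the value a running OSD is currently using (osd.0 is an example id)
    ceph config show osd.0 osd_max_backfills

    # keep backfill throttled cluster-wide via the config database
    ceph config set osd osd_max_backfills 1

    # or push a new value to the running daemons directly
    ceph tell 'osd.*' injectargs '--osd_max_backfills 2'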
Kind Regards,
Maarten van Ingen
…and limiting the data movement to the configured 1%.
So it is safe to assume I can set the number to 4096 and the total amount of misplaced PGs will stay around 1%.
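If it helps, the gradual ramp-up can roughly be watched like this (assuming the pending targets show up in the pool detail output on Nautilus; the pool name is a placeholder):

    # pgp_num vs. pgp_num_target shows how far the mgr still has to go
    ceph osd pool ls detail | grep <poolname>

    # overall misplaced ratio, which should hover around the configured limit
    ceph -s | grep misplaced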
Kind Regards,
Maarten van Ingen
…for 1% misplaced objects, to limit this as well. If that’s true, could I just set pgp_num to 4096 directly and let Ceph limit the data movement by itself?
We are running a fully updated Nautilus cluster.
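As far as I understand it (please correct me if I am wrong), on Nautilus the mgr applies a pgp_num change gradually and each step is bounded by the mgr option target_max_misplaced_ratio (default 5%), so it would look roughly like this (pool name is a placeholder, 1% is just our intended limit):

    # cap how many objects the mgr will allow to be misplaced per step
    ceph config set mgr target_max_misplaced_ratio 0.01

    # set the final target; the mgr then ramps pgp_num up in small steps
    ceph osd pool set <poolname> pgp_num 4096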
Kind Regards,
Maarten van Ingen
…do is make these the same. Increasing it in one go means 50% of the data being moved: 500 TiB of user data, or 1.5 PiB of raw storage.
Can we increase it by, say, 200 at a time until we hit 4096, to limit the amount of data being moved in a single go? Or is this not advisable?
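In case it is useful, a rough sketch of what such a stepwise increase could look like (purely hypothetical; the pool name and starting value are placeholders, and the loop simply waits for the misplaced objects to drain before the next bump):

    pool=<poolname>      # placeholder pool name
    start=2248           # example only: current pgp_num + 200
    for pgp in $(seq "$start" 200 4096) 4096; do
        ceph osd pool set "$pool" pgp_num "$pgp"
        sleep 30         # give peering a moment before checking
        # wait until ceph -s no longer reports misplaced objects
        while ceph -s | grep -q misplaced; do
            sleep 60
        done
    done

The trailing 4096 only makes sure the loop ends exactly at the target even if the 200-steps do not land on it (in that case the last set is a no-op).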
Kind Regards,
Maarten van Ingen
…data in the cluster -- I fear it might be disruptive for several weeks at best, and at worst you may hit that pg log OOM bug).
>> But this bug would not hit with a manual increase?
Cheers, Dan
> On 02/07/2022 1:15 PM Maarten van Ingen wrote:
>
>
> Hi Dan,
>
> Her…
Hi Robert,
On 07.02.22 at 13:15, Maarten van Ingen wrote:
> As it's just a few pools affected, doing a manual increase would be an
> option for me as well, if recommended.
>
> As you can see, one pool is basically lacking PGs while the others are mostly
> increasi…
…higher target_bytes compared to the current usage.
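For completeness, the hints the autoscaler uses for that can be set per pool on Nautilus, roughly like this (pool name and sizes are only examples):

    # tell the autoscaler how large we expect the pool to become
    ceph osd pool set <poolname> target_size_bytes 100T

    # or as a fraction of the total cluster capacity
    ceph osd pool set <poolname> target_size_ratio 0.2

    # review the resulting recommendations
    ceph osd pool autoscale-status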
From: Dan van der Ster
Sent: Monday, 7 February 2022 12:53
To: Maarten van Ingen; ceph-users
Subject: Re: [ceph-users] Advice on enabling autoscaler
Dear Maarten,
For a cluster that size, I w…
…just to enable the autoscaler. We will enable it per pool to limit the number of affected pools.
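A minimal sketch of enabling it per pool rather than globally (pool names are placeholders; starting in warn mode first is only a cautious assumption on my part):

    # only report what the autoscaler would change, without acting on it
    ceph osd pool set <poolname> pg_autoscale_mode warn

    # let the autoscaler actually adjust pg_num for this pool
    ceph osd pool set <poolname> pg_autoscale_mode on

    # check its view of all pools
    ceph osd pool autoscale-status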
Kind Regards,
Maarten van Ingen
…(much).
Maarten van Ingen
…output:

  cluster:
    id:     <…
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon01,mon02,mon03
    mgr: mon01(active), standbys: mon02, mon03
    mds: cephfs-2/2/2 up {0=mon03=up:active,1=mon01=up:active}, 1 up:standby
    osd: 502 osds: 502 up, 502 in

  data:
    pools:   18 pools, 8192 pgs
    obj…