Hi,
I'm trying to estimate the possible impact when large PGs are
split. Here's one example of such a PG:
PG_STAT  OBJECTS  BYTES         OMAP_BYTES*  OMAP_KEYS*  LOG   DISK_LOG  UP
86.3ff   277708   414403098409  0            0           3092  3092      [187,166,122,226,171,234,17
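For context, one way to see which PGs in a pool are the biggest candidates for a split is to sort the per-PG stats by object count. A minimal sketch, assuming a placeholder pool name "data" and the column layout of recent `ceph pg ls-by-pool` output (check the header on your release before trusting the column numbers):
```
# Print PG id, object count and bytes for the ten largest PGs of pool "data".
# $2 (OBJECTS) and $6 (BYTES) match the header of recent releases; the pool
# name and the tail length are placeholders.
ceph pg ls-by-pool data | awk 'NR>1 {print $1, $2, $6}' | sort -k2 -n | tail -10
```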
Hi Eugene!
I have a case where a PG has millions of objects, like this:
```
root@host# ./show_osd_pool_pg_usage.sh | less | head
id       used_mbytes   used_objects   omap_used_mbytes   omap_used_keys
--       -----------   ------------   ----------------   --------------
17.c91   1213.24827
```
Hi,
Can you please explain what S3 operations you perform on the RGW?
I tried the 2nd script with "make bucket" and "put object", and got:
2024-04-09T13:54:00.608+ 7f93b3987640 20 Lua INFO: bucket operation logs:
2024-04-09T13:54:00.608+ 7f93b3987640 20 Lua INFO: Name: fish
2024-04-09T13
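For reference, log lines of that shape are what RGWDebugLog() produces; it writes to the RGW debug log at level 20, which is why a high debug_rgw setting is needed to see them. A minimal sketch of installing such a script, assuming a hypothetical file name and log text (the actual "2nd script" from the thread is not reproduced here):
```
# Hypothetical Lua script that logs the bucket name of each request.
cat > bucket_log.lua <<'EOF'
-- only log when the request actually refers to a bucket
if Request.Bucket then
  RGWDebugLog("bucket operation logs:")
  RGWDebugLog("Name: " .. Request.Bucket.Name)
end
EOF

# Attach the script to the postRequest context of the RGW.
radosgw-admin script put --infile=bucket_log.lua --context=postRequest
```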
I did end up writing a unit test to see what we calculated here, as well as
adding a bunch of debug logging (haven't created a PR yet, but probably
will). The total memory was set to (19858056 * 1024 * 0.7) (total memory
in bytes * the autotune target ratio) = 14234254540. What ended up getting
lo
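As a quick sanity check, that number can be reproduced on the command line; 19858056 is the starting memory figure in KiB quoted above and 0.7 is cephadm's autotune_memory_target_ratio default:
```
# KiB -> bytes, then apply the 0.7 autotune target ratio.
# bc prints 14234254540.8, which matches the 14234254540 quoted above.
echo "19858056 * 1024 * 0.7" | bc
```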
Hi Adam
Let me just finish tucking in a devilish tyke here and I'll get to it first
thing.
On Tue, 9 Apr 2024 at 18:09, Adam King wrote:
> I did end up writing a unit test to see what we calculated here, as well
> as adding a bunch of debug logging (haven't created a PR yet, but probably
> will).
Hi Adam
It seems like the mds_cache_memory_limit, both the value set globally through
cephadm and what the hosts' mds daemons are running with, is approx. 4 GB:
root@my-ceph01:/# ceph config get mds mds_cache_memory_limit
4294967296
The same if I query the individual mds daemons running on my-ceph01, or any of
the other mds daemons.
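For completeness, the effective value can also be read from an individual running daemon, which is a quick way to spot a per-daemon override; the daemon name below is a placeholder:
```
# Value stored in the config database for the mds class
ceph config get mds mds_cache_memory_limit

# Effective value as one running daemon sees it (daemon name is hypothetical)
ceph config show mds.cephfs.my-ceph01.abcdef mds_cache_memory_limit

# 4294967296 bytes = 4 * 1024^3, i.e. the 4 GiB default
```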
Hi,
Thank you for any time you can spare.
We are tracking some very slow pwrite64() calls to a ceph filesystem -
20965 11:04:24.049186 <... pwrite64 resumed>) = 65536 <4.489594>
20966 11:04:24.069765 <... pwrite64 resumed>) = 65536 <4.508859>
20967 11:04:24.090354 <... pwrite64 resumed>) = 65536 <4.510256>
B
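For anyone who wants to reproduce that trace format (timestamp, resumed call, return value, and the time spent in the syscall in angle brackets), an invocation along these lines produces it; the PID is a placeholder:
```
# -f                 follow threads/child processes
# -tt                absolute timestamps with microseconds (11:04:24.049186)
# -T                 time spent inside each syscall, shown in <> (<4.489594>)
# -e trace=pwrite64  restrict the trace to pwrite64 calls
strace -f -tt -T -e trace=pwrite64 -p 12345
```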
Hi
We are getting a lot of odd warnings from the alert module by email,
alerts which never seem to show up on "ceph -s".
Here's a somewhat funny one:
Date: Tue, 09 Apr 2024 16:48:06 -
"
--- New ---
[WARN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive, 1 pg
peering
pg
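One thing worth checking when the mailed alerts and "ceph -s" disagree is how often the alerts module samples cluster health; a short peering episode can be caught by one poll and already be gone when you look. A sketch of the relevant commands (the 900-second value is only an illustration, not a recommendation from this thread):
```
# What the cluster reports right now
ceph health detail

# How often the alerts module re-evaluates health and sends mail (in seconds)
ceph config get mgr mgr/alerts/interval

# Illustration only: poll less often so very short-lived states are less
# likely to generate a mail
ceph config set mgr mgr/alerts/interval 900
```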
The same experiment with the mds daemons pulling 4GB instead of the 16GB,
and me fixing the starting total memory (I accidentally used the
memory_available_kb instead of memory_total_kb the first time) gives us:
DEBUG    cephadm.autotune:autotune.py:35 Autotuning OSD memory w
Hi,
I appreciate your message, it really sounds tough (9 months,
really?!). But thanks for the reassurance :-)
They don’t have any other options so we’ll have to start that process
anyway, probably tomorrow. We’ll see how it goes…
Quoting Konstantin Shalygin:
Hi Eugene!
I have a case,
Hi everyone,
At the next Ceph User + Dev Monthly Meetup (Wednesday, April 17, 7:00 –
8:00 am PT), we'll go over the results of the Ceph Users Feedback Survey
that we did last month. If you missed taking the survey, feel free to go
over the questions attached in the email below. We look forward to
On 4/8/24 12:32, Erich Weiler wrote:
Ah, I see. Yes, we are already running version 18.2.1 on the server side (we
just installed this cluster a few weeks ago from scratch). So I guess if the
fix has already been backported to that version, then we still have a problem.
Does that mean it coul
Does that mean it could be the locker order bug
(https://tracker.ceph.com/issues/62123) as Xiubo suggested?
I have raised a PR to fix the lock order issue; if possible, please
give it a try and see whether it resolves this issue.
Thank you! Yeah, this issue is happening every couple days now. It
On 4/10/24 11:48, Erich Weiler wrote:
Does that mean it could be the locker order bug
(https://tracker.ceph.com/issues/62123) as Xiubo suggested?
I have raised a PR to fix the lock order issue; if possible, please
give it a try and see whether it resolves this issue.
Thank you! Yeah, this issue i
On Tue, 9 Apr 2024 at 10:39, Eugen Block wrote:
> I'm trying to estimate the possible impact when large PGs are
> split. Here's one example of such a PG:
>
> PG_STAT  OBJECTS  BYTES         OMAP_BYTES*  OMAP_KEYS*  LOG  DISK_LOG  UP
> 86.3ff   277708   414403098409  0            0
Thank you, Janne.
I believe the default 5% target_max_misplaced_ratio would work as
well; we've had good experience with that in the past, without the
autoscaler. I just haven't dealt with such large PGs. I've been
warning them for two years (when the PGs were only almost half this
size) a
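For reference, the ratio mentioned here is an mgr option that limits how much data may be misplaced at once, so it effectively throttles how quickly a pg_num increase is rolled out. A sketch of checking and tightening it (0.01 is only an illustration):
```
# Default is 0.05, i.e. at most 5% of objects misplaced at any given time
ceph config get mgr target_max_misplaced_ratio

# Illustration only: allow fewer misplaced objects to slow the split down
ceph config set mgr target_max_misplaced_ratio 0.01
```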