On 5/16/23 21:55, Gregory Farnum wrote:
On Fri, May 12, 2023 at 5:28 AM Frank Schilder wrote:
Dear Xiubo and others.
I have never heard about that option until now. How do I check that and how do
I disable it if necessary?
I'm in meetings pretty much all day and will try to send some more i
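For reference, if the option in question is the kernel client's async dirops toggle (an assumption here), it surfaces as the nowsync/wsync mount options on recent kernels; a rough sketch for checking and disabling it:
# show the active options of the CephFS kernel mount;
# 'nowsync' means async dirops are enabled, no 'nowsync' (or 'wsync') means they are not
grep ceph /proc/mounts
# disable async dirops by remounting with the synchronous behaviour
# (monitor address, credentials and mount point are placeholders)
umount /mnt/cephfs
mount -t ceph <mon-host>:/ /mnt/cephfs -o name=client.foo,secretfile=/etc/ceph/foo.secret,wsync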
On 5/11/23 20:12, Frank Schilder wrote:
Dear Xiubo,
please see also my previous e-mail about the async dirop config.
I have a bit more log output from dmesg on the file server here:
https://pastebin.com/9Y0EPgDD .
There is one bug in kclient and it forgets to skip the memories for
'snap_re
Hi Yixin,
I support experimentation for sure, but if we want to consider a feature
for inclusion, we need design proposal(s) and review, of course. If you're
interested in feedback on your current ideas, you could consider coming to
the "refactoring" meeting on a Wednesday. I think these ideas w
Hi folks,
I created a feature request ticket to call for bucket-level redirect_zone
(https://tracker.ceph.com/issues/61199), which is basically an extension of
zone-level redirect_zone. I found it helpful for implementing CopyObject with
(x-copy-source) in multisite environments where bucket cont
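For context, the existing zone-level knob is configured roughly like this (a sketch; zone names are placeholders and the --redirect-zone flag is assumed to behave as described in the multisite docs):
# redirect requests for objects a zone does not hold to another zone
radosgw-admin zone modify --rgw-zone=archive --redirect-zone=primary
radosgw-admin period update --commit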
Hi
Upgraded from Pacific 16.2.5 to 17.2.6 on May 8th
However, Grafana fails to start due to a bad folder path:
:/tmp# journalctl -u
ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201 -n 25
-- Logs begin at Sun 2023-05-14 20:05:52 UTC, end at Tue 2023-05-16 19:07:51
UTC. --
May 16 19
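A common first step here, as a sketch (assuming the bad path comes from a stale cephadm-generated config that a redeploy would regenerate; the daemon name is whichever one ceph orch ps reports for grafana):
# regenerate the grafana unit and config files, then check the daemon again
ceph orch daemon redeploy grafana.<host>
ceph orch ps | grep grafana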
Did an extra test with shutting down an OSD host and forcing a recovery. Using
only the iops setting I got 500 objects a second, but also using the
bytes_per_usec setting I got 1200 objects a second!
Maybe this performance issue should also be investigated.
Best regards
Thanks for the input! By changing this value we indeed increased the recovery
speed from 20 objects per second to 500!
Now something strange:
1. We needed to change the class for our drives manually to ssd.
2. The setting "osd_mclock_max_capacity_iops_ssd" was set to 0. With osd bench
described in t
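For reference, the override from point 2 can be inspected and cleared roughly like this (a sketch; osd.0 is a placeholder, and it is assumed the OSD re-benchmarks itself and stores a fresh value after a restart):
# see whether a capacity value is pinned for this OSD
ceph config show osd.0 osd_mclock_max_capacity_iops_ssd
# drop the bogus override so it can be re-measured
ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd
ceph orch daemon restart osd.0
# or re-run the measurement by hand to see what the drive can do
ceph tell osd.0 bench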
Hi Sake,
We are experiencing the same. I set “osd_mclock_cost_per_byte_usec_hdd” to 0.1
(default is 2.6) and get about 15 times the backfill speed, without significantly
affecting client IO. This parameter seems to be calculated wrongly; from the
description, 5e-3 should be a reasonable value for HDD (correspo
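A minimal sketch of applying that change cluster-wide (whether this option still exists, and its default, depend on the exact release, so treat the value as an example rather than a recommendation):
# lower the modelled cost per byte for HDD OSDs
ceph config set osd osd_mclock_cost_per_byte_usec_hdd 0.1
# confirm what an OSD actually picked up
ceph config show osd.0 osd_mclock_cost_per_byte_usec_hdd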
Hi users,
Correction: The User + Dev Monthly Meeting is happening *this week* on
*Thursday,
May 18th* *@* *14:00 UTC*. Apologies for the confusion.
See below for updated meeting details.
Thanks,
Laura Flores
Meeting link: https://meet.jit.si/ceph-user-dev-monthly
Time conversions:
UTC:
Hello everyone,
Join us on May 24th at 17:00 UTC for a long overdue Ceph Tech Talk! This month,
Yuval Lifshitz will give an RGW Lua Scripting Code Walkthrough.
https://ceph.io/en/community/tech-talks/
You can also see Yuval's previous presentation at Ceph Month 2021, From Open
Source to Open E
Hi Ceph Users,
The User + Dev Monthly Meeting is coming up next week on *Thursday, May
17th* *@* *14:00 UTC* (time conversions below). See meeting details at the
bottom of this email.
Please add any topics you'd like to discuss to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
Just to add:
high_client_ops: around 8-13 objects per second
high_recovery_ops: around 17-25 objects per second
Both observed with "watch -n 1 -c ceph status"
Best regards
Hi,
The config shows "mclock_scheduler" and I already switched to the
high_recovery_ops, this does increase the recovery ops, but only a little.
You mention there is a fix in 17.2.6+, but we're running 17.2.6 (this
cluster was created on this version). Any more ideas?
Best regards
Hi,
Might be a dumb question …
I'm wondering how I can set those config variables in some but not all RGW
processes?
I'm on a cephadm 17.2.6. On 3 nodes I have RGWs. The ones on 8080 are behind
haproxy for users; the ones on 8081 I'd like to use for sync only.
# ceph orch ps | grep rgw
rgw.max.maxvm
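One way to do this is to scope the options to the exact daemon entities rather than to all of client.rgw; a sketch (the daemon name below is made up, the config entity is assumed to be the orch daemon name prefixed with client., and rgw_run_sync_thread is only a placeholder for whichever option should differ between the two groups):
# list the daemon names first
ceph orch ps | grep rgw
# then target a single daemon
ceph config set client.rgw.max.maxvm.abcdef rgw_run_sync_thread false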
Hello,
This is a known issue in Quincy, which uses the mClock scheduler. There is a
fix for this which should be available in 17.2.6+ releases.
You can confirm the active scheduler type on any osd using:
ceph config show osd.0 osd_op_queue
If the active scheduler is 'mclock_scheduler', you can try
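As a sketch of that step (the profile names also appear elsewhere in this thread):
# temporarily prioritise recovery/backfill over client traffic
ceph config set osd osd_mclock_profile high_recovery_ops
# remove the override again once recovery has finished
ceph config rm osd osd_mclock_profile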
On Fri, May 12, 2023 at 5:28 AM Frank Schilder wrote:
>
> Dear Xiubo and others.
>
> >> I have never heard about that option until now. How do I check that and
> >> how do I disable it if necessary?
> >> I'm in meetings pretty much all day and will try to send some more info
> >> later.
> >
> >
On 5/16/23 17:44, Frank Schilder wrote:
Hi Xiubo,
forgot to include these, the inodes I tried to dump and which caused a crash are
ceph tell "mds.ceph-10" dump inode 2199322355147 <-- original file/folder
causing trouble
ceph tell "mds.ceph-10" dump inode 2199322727209 <-- copy also causing t
On 5/16/23 00:33, Frank Schilder wrote:
Dear Xiubo,
I uploaded the cache dump, the MDS log and the dmesg log containing the
snaptrace dump to
ceph-post-file: 763955a3-7d37-408a-bbe4-a95dc687cd3f
Okay, thanks.
Sorry, I forgot to add user and description this time.
A question about troubl
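For next time, the uploader and a description can be attached at upload time; a sketch (user string and file names are placeholders):
ceph-post-file -u "your-name" -d "MDS cache dump, MDS log and dmesg snaptrace dump" cachedump.txt mds.log dmesg.txt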
We noticed extremely slow performance when remapping is necessary. We didn't do
anything special other than assigning the correct device_class (to ssd). When
checking ceph status, we notice the number of objects recovering is around
17-25 (with watch -n 1 -c ceph status).
How can we increase
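For reference, the manual class assignment described above is typically done like this (a sketch; OSD ids are placeholders):
# an existing class has to be removed before a new one can be set
ceph osd crush rm-device-class osd.0 osd.1
ceph osd crush set-device-class ssd osd.0 osd.1
# verify in the CLASS column
ceph osd tree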
Hi Mark!
Thank you very much for this message, acknowledging the problem publicly is the
beginning of fixing it ❤️
> On 11 May 2023, at 17:38, Mark Nelson wrote:
>
> Hi Everyone,
>
> This email was originally posted to d...@ceph.io, but Marc mentioned that he
> thought this would be useful t
Hi Xiubo,
forgot to include these, the inodes I tried to dump and which caused a crash are
ceph tell "mds.ceph-10" dump inode 2199322355147 <-- original file/folder
causing trouble
ceph tell "mds.ceph-10" dump inode 2199322727209 <-- copy also causing trouble
(after taking snapshot??)
Other fo
Thanks, they upgraded to 15.2.17 a few months ago, upgrading further
is currently not possible.
See ceph trash purge schedule * commands to check for this.
If you mean 'rbd trash purge schedule' commands, there are no
schedules defined, but lots of images seem to be okay. On a different
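For completeness, the checks referred to above look roughly like this (a sketch; the pool name is a placeholder and the status subcommand is assumed to be available in this release):
# list configured purge schedules and their status
rbd trash purge schedule ls
rbd trash purge schedule status
# see what is actually sitting in a pool's trash
rbd trash ls <pool>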
On 5/16/23 09:47, Eugen Block wrote:
I'm still looking into these things myself but I'd appreciate anyone
chiming in here.
IIRC the configuration of the trash purge schedule has changed in one of
the Ceph releases (not sure which one). Have they recently upgraded to
a new(er) release? Do t
Good morning,
I would be grateful if anybody could shed some light on this; I can't
reproduce it in my lab clusters, so I was hoping for input from the community.
A customer has 2 clusters with rbd mirroring (snapshots) enabled; it
seems to work fine, they have regular checks and the images on the
remo
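The usual checks on both sides look roughly like this (a sketch; pool and image names are placeholders):
rbd mirror pool status <pool> --verbose
rbd mirror image status <pool>/<image>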
Hi Eugen,
Yes, sure, no problem to share it. I attach it to this email (as it might
clutter the discussion if inline).
If somebody on the list has some clue about the LRC plugin, I'm still
interested in understanding what I'm doing wrong!
Cheers,
Michel
On 04/05/2023 at 15:07, Eugen Block wrote
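For comparison, a minimal profile along the lines of the documented "simple" LRC configuration looks like this (the k/m/l values are arbitrary examples, not a recommendation):
ceph osd erasure-code-profile set lrc_demo plugin=lrc k=4 m=2 l=3 crush-failure-domain=host
ceph osd erasure-code-profile get lrc_demo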