Forgot the link to the schedule
https://ceph2022.sched.com/
On Mon, Feb 7, 2022 at 4:31 PM Mike Perez wrote:
>
> We can’t wait to gather the Ceph community again in person, and we’re
> sure you feel the same way. We had hoped to do this April 5-7, but
> unfortunately, given the current COVID-19
We can’t wait to gather the Ceph community again in person, and we’re
sure you feel the same way. We had hoped to do this April 5-7, but
unfortunately, given the current COVID-19 pandemic wave, we feel that
this may not be the right time to gather in person. We want to ensure
a safe experience onsi
On 2/7/22 12:34 PM, Alexander E. Patrakov wrote:
Mon, Feb 7, 2022 at 17:30, Robert Sander wrote:
And keep in mind that when PGs are increased, you may also need to
increase the number of OSDs, as one OSD should carry a max of around 200
PGs. But I do not know if that is still the case with current Ceph versions.
Mon, Feb 7, 2022 at 17:30, Robert Sander wrote:
> And keep in mind that when PGs are increased, you may also need to
> increase the number of OSDs, as one OSD should carry a max of around 200
> PGs. But I do not know if that is still the case with current Ceph versions.
This is just the default limit
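For reference, the per-OSD PG count and the defaults behind the "around 200 PGs per OSD" guidance can be checked directly. A minimal sketch, assuming a Nautilus-or-later cluster using the centralized config database; the options shown are the standard mon settings, not anything specific to this cluster:

# PGS column shows how many PGs each OSD currently carries
ceph osd df tree

# Defaults that back the per-OSD PG guidance (autoscaler target and warning limit)
ceph config get mon mon_target_pg_per_osd
ceph config get mon mon_max_pg_per_osd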
Hi,
I am migrating from filestore to bluestore (the workflow is: drain the OSD
and reformat it with bluestore).
Now I have two OSDs which crash at the same time with the following
error. Restarting the OSDs works for some time until they crash
again.
-40> 2022-02-07 16:28:20.489 7f550723a700 20
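A rough sketch of one drain-and-reformat flow, plus pulling the full crash context, assuming a Nautilus-or-later cluster with the crash module enabled; osd.12 and /dev/sdX are placeholders:

# Full backtrace/context for the crashing OSDs
ceph crash ls
ceph crash info <crash-id>

# Drain and reformat one OSD (placeholder id 12, placeholder device /dev/sdX)
ceph osd out 12
while ! ceph osd safe-to-destroy osd.12; do sleep 60; done
systemctl stop ceph-osd@12
ceph osd destroy 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12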
Hello Ceph-Users!
On 22/12/2021 00:38, Stefan Schueffler wrote:
The other problem, regarding the OSD scrub errors, is this:
ceph health detail shows "PG_DAMAGED: Possible data damage: x pgs
inconsistent."
Every now and then new PGs become inconsistent. All inconsistent PGs
belong to the buc
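A short sketch for digging into such inconsistencies; the pool name and PG ID are placeholders, and a repair should only be issued once the root cause is understood:

rados list-inconsistent-pg <pool-name>
rados list-inconsistent-obj <pg-id> --format=json-pretty
ceph pg repair <pg-id>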
> On 02/07/2022 1:51 PM Maarten van Ingen wrote:
> One more thing -- how many PGs do you have per OSD right now for the nvme and
> hdd roots?
> Can you share the output of `ceph osd df tree` ?
>
> >> This is only 1347 lines of text, you sure you want that :-) On a summary
> >> for HDD we have b
Hi Dan,
--
Hi,
OK, you don't need to set 'warn' mode -- the autoscale status already has the
info we need.
One more thing -- how many PGs do you have per OSD right now for the nvme and
hdd roots?
Can you share the output of `ceph osd df tree` ?
>> This is only 1347 lines of text, you sure yo
Hi Robert,
On 07.02.22 at 13:15, Maarten van Ingen wrote:
> As it's just a few pools affected, doing a manual increase would be an
> option for me as well, if recommended.
>
> As you can see, one pool is basically lacking PGs while the others are mostly
> increasing due to the much higher tar
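A minimal sketch of such a manual increase on a single pool, with a placeholder pool name and PG count; on Nautilus the change is applied gradually and pgp_num follows automatically:

ceph osd pool set <pool-name> pg_autoscale_mode off
ceph osd pool set <pool-name> pg_num 1024
ceph osd pool get <pool-name> pg_num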
Hi,
OK, you don't need to set 'warn' mode -- the autoscale status already has the
info we need.
One more thing -- how many PGs do you have per OSD right now for the nvme and
hdd roots?
Can you share the output of `ceph osd df tree` ?
Generally, the autoscaler is trying to increase your pools s
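The autoscaler bases those planned increases on the pool targets, which can be set explicitly; a sketch with a placeholder pool name and an assumed 100T expected size:

ceph osd pool set <pool-name> target_size_bytes 100T
# or express the expectation as a fraction of the cluster instead:
# ceph osd pool set <pool-name> target_size_ratio 0.2
ceph osd pool autoscale-status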
On 07.02.22 at 13:15, Maarten van Ingen wrote:
As it's just a few pools affected, doing a manual increase would be an option
for me as well, if recommended.
As you can see, one pool is basically lacking PGs while the others are mostly
increasing due to the much higher target_bytes compared t
Hi Dan,
Here's the output. I removed pool names on purpose.
SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
19      100.0T       3.0   11098T        0.0270                                 1.0   256                 off
104.5G  1024G        3.0
Dear Maarten,
For a cluster that size, I would not immediately enable the autoscaler but
first enable it in "warn" mode to sanity check what it would plan to do:
# ceph osd pool set <pool> pg_autoscale_mode warn
Please share the output of "ceph osd pool autoscale-status" so we can help
guide what y
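A sketch for putting every existing pool into warn mode at once, plus the default for pools created later; this only changes the autoscaler mode, not any pg_num:

for pool in $(ceph osd pool ls); do
    ceph osd pool set "$pool" pg_autoscale_mode warn
done
ceph config set global osd_pool_default_pg_autoscale_mode warn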
Hey Marc,
Some more information went to the "Ceph Performance very bad even in
Memory?!" topic.
Greetings
On Mon, Feb 7, 2022 at 11:48 AM Marc wrote:
>
> >
> > I gave up on this topic.. Ceph does not properly support it, even though it
> > seems really promising.
> >
> > Tested a ping on 4
Hi,
We are about to enable the PG autoscaler on Ceph. Currently we are running the
latest point release of Nautilus with BlueStore and LVM. The current status of the
autoscaler is that it is turned off on all pools and the module is enabled.
To make sure we do not kill anything, performance and/or
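Before changing anything, the current state can be confirmed with read-only commands; a sketch with a placeholder pool name:

ceph mgr module ls
ceph osd pool autoscale-status
ceph osd pool get <pool-name> pg_autoscale_mode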
>
> I gave up on this topic.. Ceph does not properly support it, even though it
> seems really promising.
>
> Tested a ping on 40gbit with RDMA UD, which took 6us both ways.
>
> RDMA RD I didn't get running (maybe my cards need some special
> treatment..).
>
> tcp-ping at MTU 9000 took 2
Hey,
I gave up on this topic.. Ceph does not properly support it, even though it
seems really promising.
Tested a ping on 40gbit with RDMA UD, which took 6us both ways.
RDMA RD I didn't get running (maybe my cards need some special treatment..).
tcp-ping at MTU 9000 took 20us
tcp-ping at MTU
Did you add the configuration directly to the conf file?
I have seen in other people's posts that Ceph needs to be recompiled after adding RDMA.
I'm also going to try RDMA mode now, but haven't found any more info.
sascha a. wrote on Tue, Feb 1, 2022 at 20:31:
> Hey,
>
> I recently found this RDMA feature of Ceph, which I'm curr
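For what it's worth, a minimal ceph.conf sketch for the RDMA messenger, assuming the installed packages were built with RDMA support and a Mellanox device named mlx5_0 (both are assumptions, not something confirmed in this thread):

[global]
# switch the async messenger from TCP to RDMA
ms_type = async+rdma
# RDMA device to bind (host specific; assumed name)
ms_async_rdma_device_name = mlx5_0
# alternatively, limit RDMA to the cluster network only:
# ms_cluster_type = async+rdma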