Hi Team,
*Problem:*
Create scheduled snapshots of a CephFS subvolume.
*Expected Result:*
The scheduled snapshots should be created at the given scheduled time.
*Actual Result:*
The scheduled snapshots are not created until we create a manual
backup.
*Description:*
*Ceph versi
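For reference, the schedule was created roughly as follows (fs and
subvolume names are placeholders for our actual ones):

  # resolve the real path of the subvolume
  ceph fs subvolume getpath <fs_name> <subvol_name>
  # schedule a snapshot of that path every hour
  ceph fs snap-schedule add /volumes/_nogroup/<subvol_name>/<uuid> 1h
  # verify the schedule is registered and active
  ceph fs snap-schedule status /volumes/_nogroup/<subvol_name>/<uuid>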
Ramin,
I think you're still going to experience what Casey described.
If your intent is to completely isolate bucket metadata/data in one
zonegroup from another, then I believe you need multiple independent
realms, each with its own endpoint.
For instance:
Ceph Cluster A
Realm1/zonegroup1/zone1
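A rough sketch of standing up a second, fully independent realm (names
and endpoints below are placeholders, not a tested recipe):

  radosgw-admin realm create --rgw-realm=realm2
  radosgw-admin zonegroup create --rgw-zonegroup=zonegroup2 \
      --rgw-realm=realm2 --endpoints=http://rgw-b:8080 --master
  radosgw-admin zone create --rgw-zonegroup=zonegroup2 --rgw-zone=zone2 \
      --endpoints=http://rgw-b:8080 --master
  radosgw-admin period update --rgw-realm=realm2 --commit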
Hi Chris / Gregory,
Did you get a chance to investigate this issue?
Thanks and Regards
Sandip Divekar
From: Sandip Divekar
Sent: Thursday, May 25, 2023 11:16 PM
To: Chris Palmer; ceph-users@ceph.io
Cc: d...@ceph.io; Gavin Lucas; Joseph Fernandes; Simon Crosland
Subject: RE: [ceph-user
Hi!
I noticed the same: the snapshot scheduler seemed to do nothing, but after
a manager failover the creation of snapshots started to work (including the
retention rules).
Best regards,
Sake
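For anyone else hitting this, a mgr failover is just the following (on
older releases you need to pass the active mgr's name explicitly):

  # see which mgr is currently active
  ceph mgr stat
  # fail it so a standby takes over and the snap_schedule module restarts
  ceph mgr fail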
Hi,
I'm watching a cluster finish a bunch of backfilling, and I noticed that
quite often PGs end up with zero misplaced objects, even though they are
still backfilling.
Right now the cluster is down to 6 backfilling PGs:
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 268 pgs
    objects:
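For reference, the per-PG counts I'm quoting come from something like:

  # list only the PGs that are currently backfilling, with their
  # object, degraded and misplaced counts
  ceph pg ls backfilling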
An MDS-wide lock is acquired before the cache dump is done.
After the dump is complete, the lock is released.
So, the MDS freezing temporarily during the cache dump is expected.
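For context, this is the dump being discussed, triggered via the admin
socket (daemon name and output path are placeholders); expect the MDS to
stall while it runs:

  ceph daemon mds.<name> dump cache /tmp/mds-cache-dump.txt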
On Fri, May 26, 2023 at 12:51 PM Emmanuel Jaep wrote:
> Hi Milind,
>
> I finally managed to dump the cache and find
On 29/05/2023 20.55, Anthony D'Atri wrote:
> Check the uptime for the OSDs in question
I restarted all my OSDs within the past 10 days or so. Maybe OSD
restarts are somehow breaking these stats?
>
>> On May 29, 2023, at 6:44 AM, Hector Martin wrote:
>>
>> Hi,
>>
>> I'm watching a cluster finish
So the fragmentation score calculation was indeed improved recently; see
https://github.com/ceph/ceph/pull/49885
And yes, one can see some fragmentation in the allocations for the first two
OSDs. It doesn't look as dramatic as the fragmentation scores suggest, though.
Additionally you might want to collect f
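For the record, the score and the raw allocator state can be pulled from
the admin socket roughly like this (OSD id is a placeholder; availability
of the commands depends on the release):

  # fragmentation score for the block allocator (0 = none, 1 = fully fragmented)
  ceph daemon osd.<id> bluestore allocator score block
  # full free-extent dump, if you want to inspect fragmentation in detail
  ceph daemon osd.<id> bluestore allocator dump block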
Hi Stefan,
given that allocation probes include every allocation (including short
4K ones), your stats do look pretty high.
You omitted the historic probes, though, so it's hard to tell whether
there is a negative trend in them.
As I mentioned in my reply to Hector, one might want to make further
in
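(The historic probes end up in the OSD log periodically, so something
along these lines, with your actual log path, should show the trend:)

  grep 'allocation stats probe' /var/log/ceph/ceph-osd.0.log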
Hi,
I just restarted one of our MDS servers. I can see some "progress" in the
logs, as below:
mds.beacon.icadmin006 Sending beacon up:replay seq 461
mds.beacon.icadmin006 received beacon reply up:replay seq 461 rtt 0
How can I tell how long the sequence is (i.e. when the node will finish
replaying)?
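One thing that may help, depending on your release: the MDS admin socket
status output can include a replay_status section showing the journal read
position versus the write position, e.g.:

  ceph daemon mds.icadmin006 status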
Hi,
Sorry for poking this old thread, but does this issue still persist in
the 6.3 kernels?
Cheers, Dan
--
Clyso GmbH | https://www.clyso.com
On Wed, Dec 7, 2022 at 3:42 AM William Edwards wrote:
>
> > On 7 Dec 2022, at 11:59, Stefan Kooman wrote the following:
>
Hi Dan,
We also experienced very high network usage and memory pressure with our
machine-learning workload. This patch [1] (currently being tested; it may
be merged in 6.5) may fix it. See [2] for more about my experiments with
this issue.
[1]:
https://lkml.kernel.org/ceph-devel/20230515012044.98096-1
On 5/29/23 20:25, Dan van der Ster wrote:
> Hi,
> Sorry for poking this old thread, but does this issue still persist in
> the 6.3 kernels?
We are running a mail cluster setup with the 6.3.1 kernel and it's not
giving us any performance issues. We have not upgraded our shared
webhosting platform to th