Hi!
I noticed the same: the snapshot scheduler seemed to do nothing, but after
a manager failover the creation of snapshots started to work (including the
retention rules).
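In case it helps others hitting this: the failover can be forced with a single
command (a sketch; without an argument it fails over the currently active manager):
ceph mgr fail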
Best regards,
Sake
From: Lokendra Rathour
Sent: Monday, May 29, 2023 10:11:54 AM
Just a user opinion, but maybe add the following to the options?
For option 1:
* Clear instructions on how to remove all traces of the failed installation (if
you can automate it, you can write a manual), or provide instructions to start a
cleanup script.
* Don't allow another deployment of Ce
Thanks, will keep an eye out for this version. Will report back to this thread
about these options and the recovery time/number of objects per second for
recovery.
Again, thank you all for the information and answers!
If I glance at the commits to the quincy branch, shouldn't the mentioned
configuration options be included in 17.2.7?
The requested command output:
[ceph: root@mgrhost1 /]# ceph version
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
[ceph: root@mgrhost1 /]# ceph config s
I'm on 17.2.6, but the option "osd_mclock_max_sequential_bandwidth_hdd" isn't
available when I try to set it via "ceph config set osd.0
osd_mclock_max_sequential_bandwidth_hdd 500Mi".
I need to use large numbers for hdd, because it looks like the mclock scheduler
isn't using the device class ov
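If the option is present in the running version (i.e. the quincy backport
mentioned above has landed), checking for it and setting it could look like this
(a sketch; osd.0 is illustrative):
ceph config ls | grep osd_mclock_max_sequential_bandwidth   # only listed if this build knows the option
ceph config set osd.0 osd_mclock_max_sequential_bandwidth_hdd 500Mi
ceph config get osd.0 osd_mclock_max_sequential_bandwidth_hdd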
Did an extra test by shutting down an OSD host and forcing a recovery. Using only
the iops setting I got 500 objects a second, but when also using the
bytes_per_usec setting, I got 1200 objects a second!
Maybe this performance issue should also be investigated.
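If it helps anyone reproduce this, the two settings could be applied along these
lines (a sketch; it assumes the "iops setting" means osd_mclock_max_capacity_iops_ssd
and the "bytes_per_usec setting" means osd_mclock_cost_per_byte_usec_ssd, and the
values are purely illustrative, not recommendations):
ceph config set osd osd_mclock_max_capacity_iops_ssd 21500   # illustrative value
ceph config set osd osd_mclock_cost_per_byte_usec_ssd 0.01   # illustrative value
ceph config show osd.0 | grep osd_mclock                     # check what a running OSD picked up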
Best regards
Thanks for the input! By changing this value we indeed increased the recovery
speed from 20 objects per second to 500!
Now something strange:
1. We needed to manually change the device class for our drives to ssd.
2. The setting "osd_mclock_max_capacity_iops_ssd" was set to 0. With the osd bench
described in t
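For reference, the manual device-class change and a quick capacity measurement
could look roughly like this (a sketch; osd.0 is illustrative and the bench call
is the plain default, not necessarily the exact invocation from the docs):
ceph osd crush rm-device-class osd.0                            # clear the auto-detected class
ceph osd crush set-device-class ssd osd.0                       # set it to ssd
ceph tell osd.0 bench                                           # rough IOPS/bandwidth measurement
ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 15000    # illustrative value based on the bench result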
Just to add:
high_client_ops: around 8-13 objects per second
high_recovery_ops: around 17-25 objects per second
Both observed with "watch -n 1 -c ceph status"
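For anyone wanting to reproduce the comparison, switching between the two
profiles is just (a sketch; applied to all OSDs at once):
ceph config set osd osd_mclock_profile high_recovery_ops   # favour recovery/backfill
watch -n 1 -c ceph status                                  # note the recovering objects per second
ceph config set osd osd_mclock_profile high_client_ops     # favour client traffic again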
Best regards
Hi,
The config shows "mclock_scheduler" and I already switched to high_recovery_ops;
this does increase the recovery ops, but only a little.
You mention there is a fix in 17.2.6+, but we're running 17.2.6 (this cluster
was created on this version). Any more ideas?
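For completeness, what the OSDs are actually running with can be verified roughly
like this (a sketch; osd.0 is illustrative):
ceph config show osd.0 | grep -E 'osd_op_queue|osd_mclock_profile'   # expect mclock_scheduler / high_recovery_ops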
Best regards
We noticed extremely slow performance when remapping is necessary. We didn't do
anything special other than assigning the correct device_class (to ssd). When
checking ceph status, we notice the number of objects recovering is around
17-25 (with watch -n 1 -c ceph status).
How can we increase
From: Sake Paulusma
Sent: Monday, February 13, 2023 6:52:45 PM
To: Gregory Farnum
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Health warning - POOL_TARGET_SIZE_BYTES_OVERCOMMITED
Hey Greg,
I'm just analyzing this issue and it isn't strange that the total cluster size is
half the tot
us, the cluster size
From: Gregory Farnum
Sent: Monday, February 13, 2023 5:32:18 PM
To: Sake Paulusma
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Health warning - POOL_TARGET_SIZE_BYTES_OVERCOMMITED
On Mon, Feb 13, 2023 at 4:16 AM Sake Paulusma wrote:
The RATIO for cephfs.application-acc.data shouldn't be over 1.0; I believe this
triggered the error.
All weekend I was thinking about this issue, but couldn't find an option to
correct this.
But minutes after posting I found a blog about the autoscaler
(https://ceph.io/en/news/blog/2022/autosc
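For anyone else hitting this warning, the knobs involved can be inspected and
adjusted along these lines (a sketch; the pool name is the one from this thread
and the values are purely illustrative):
ceph osd pool autoscale-status                                         # shows SIZE, TARGET SIZE and RATIO per pool
ceph osd pool set cephfs.application-acc.data target_size_ratio 0.8    # illustrative value
ceph osd pool set cephfs.application-acc.data target_size_bytes 0      # or clear an explicit byte target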
Hello,
I configured a stretched cluster on two datacenters. It's working fine, except
this weekend the Raw Capacity exceeded 50% and the error
POOL_TARGET_SIZE_BYTES_OVERCOMMITED showed up.
The command "ceph df" is showing the correct cluster size, but "ceph osd pool
autoscale-status" is showi
The instructions work great; the monitor is added to the monmap now.
I asked about the Tiebreaker because there is a special command to replace the
current one. But this manual intervention is probably still needed to first set
the correct location. Will report back later when I replace the curr
That isn't a great solution indeed, but I'll try it. Would this also be
necessary to replace the Tiebreaker?
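For reference, the commands meant here are roughly these (a sketch based on the
stretch mode documentation; the monitor name and datacenter bucket are
illustrative, and the exact argument form of set_new_tiebreaker is worth
checking with --help on your version):
ceph mon set_location mon5 datacenter=dc3   # first give the new monitor its crush location
ceph mon set_new_tiebreaker mon5            # then promote it to tiebreaker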
From: Adam King
Sent: Friday, December 2, 2022 2:48:19 PM
To: Sake Paulusma
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] How to
I successfully set up a stretched cluster, except the CRUSH rule mentioned in the
docs wasn't correct. The parameters "min_size" and "max_size" should be
removed, or else the rule can't be imported.
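For anyone following along, a rule that imports cleanly looks roughly like this
(a sketch; the rule name, id and the "default" root are illustrative, and the
min_size/max_size lines from the docs are simply dropped):
rule stretch_rule {
    id 1
    type replicated
    step take default
    step choose firstn 0 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
Edit this into the decompiled crush map and re-inject it, e.g. with
"crushtool -c crushmap.txt -o crushmap.bin" followed by
"ceph osd setcrushmap -i crushmap.bin".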
Second, there should be a mention that setting the monitor crush location takes
some time and kn
Hi
I noticed that cephadm would update the grafana-frontend-api-url with version
17.2.3, but it looks broken with version 17.2.5. It isn't a big deal to update
the URL myself, but it's quite irritating to do since in the past it corrected
itself.
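For reference, the manual update can be done with something like this (a sketch;
the URL is a placeholder for your own Grafana endpoint):
ceph dashboard get-grafana-frontend-api-url                                    # what cephadm configured
ceph dashboard set-grafana-frontend-api-url https://grafana.example.com:3000   # point it back at the right host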
Best regards,
Sake
I fixed the issue by removing the blank/unlabeled disk. It is still a bug,
so hopefully it can get fixed for someone else who can't easily remove a disk :)
October 24, 2022 5:50:20 PM
To: Sake Paulusma
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Failed to probe daemons or devices
Hello Sake,
Could you share the output of vgs / lvs commands?
Also, I would suggest you open a tracker issue [1]
Thanks!
[1]
https://tracker.ceph.com/projects/cep
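In case it's useful for others following along, the requested information can be
collected on the affected host with something like this (a sketch; run as root
on the OSD host):
vgs                            # LVM volume groups
lvs                            # LVM logical volumes (the ceph OSD LVs should show up here)
cephadm ceph-volume lvm list   # what ceph-volume itself reports for the OSDs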
Last Friday I upgraded the Ceph cluster from 17.2.3 to 17.2.5 with "ceph orch
upgrade start --image
localcontainerregistry.local.com:5000/ceph/ceph:v17.2.5-20221017". After
some time, an hour or so, I got a health warning: CEPHADM_REFRESH_FAILED: failed
to probe daemons or devices. I'm using only C
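For anyone debugging the same warning, the details behind it can usually be
pulled up like this (a sketch):
ceph health detail              # shows which host or daemon the probe failed on
ceph log last cephadm           # recent cephadm messages from the cluster log
ceph orch device ls --refresh   # force a fresh device scan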
Another shot, company mail server did something special...
I deployed a small cluster for testing/deploying CephFS with cephadm. I was
wondering if it's possible to balance the active and standby daemons across hosts.
The service configuration:
service_type: mds
service_id: test-fs
service_name: mds
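For illustration, a placement-based spec could look roughly like this (a sketch;
the host names are placeholders). Note that cephadm only decides where the MDS
daemons run; which of them become active is controlled by the file system, e.g.
with "ceph fs set test-fs max_mds 2".
service_type: mds
service_id: test-fs
placement:
  hosts:
    - host1
    - host2
    - host3
It can be applied with "ceph orch apply -i mds.yaml".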