Hello Igor,
> You didn't reset the counters every hour, did you? So having the average
> subop_w_latency growing that way means the current values were much higher
> than before.
Bummer, I didn't. I've updated the gather script to reset the stats, wait 10
minutes and then gather the perf data, each hour. It's running si
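The message is cut off above; for reference, a minimal sketch of such a gather
loop (run on each OSD host; socket paths and output locations are illustrative
and differ under cephadm) could look like this:

  # reset all OSD perf counters on this host
  for sock in /var/run/ceph/ceph-osd.*.asok; do
      ceph daemon "$sock" perf reset all
  done
  sleep 600    # wait 10 minutes
  # dump the counters accumulated since the reset
  for sock in /var/run/ceph/ceph-osd.*.asok; do
      ceph daemon "$sock" perf dump > /tmp/$(basename "$sock" .asok).perf.json
  done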
Out of curiosity, what are the umask and directory permissions in your case?
Could you add a host to the cluster for a further try?
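For reference, a quick way to check both on the deployer host (the /tmp/var
path is the one used in this test setup):

  umask                        # umask of the deploying user
  ls -ld /tmp/var /tmp/var/*   # ownership/permissions of the bootstrap data dirs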
On Tue, 9 May 2023 at 14:59, Eugen Block wrote:
> Hi,
>
> I just retried without the single-host option and it worked. Also
> everything under /tmp/var belongs to root in my case. Unfortunately, I
>
The umask here is 027. The PR should fix the problem above; no further fix is
needed, just wait for the point release.
On Wed, 10 May 2023 at 05:52, Adam King wrote:
> What's the umask for the "deployer" user? We saw an instance of someone
> hitting something like this, but for them it seemed to only happen when
> they had c
Thanks!
An upgrade from 16.2.12 on Ubuntu 20.04 LTS went smoothly.
/Z
On Wed, 10 May 2023 at 00:45, Yuri Weinstein wrote:
> We're happy to announce the 13th backport release in the Pacific series.
>
> https://ceph.io/en/news/blog/2023/v16-2-13-pacific-released/
>
> Notable Changes
> --
Thank you, Igor. I will try to see how to collect the perf values. I'm not sure
about restarting all OSDs as it's a production cluster; is there a less
invasive way?
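For what it's worth, individual OSD counters can be read and reset through the
admin socket without restarting anything; a minimal example for one OSD (run on
the host where osd.0 lives):

  ceph daemon osd.0 perf dump         # read the current counter values
  ceph daemon osd.0 perf reset all    # optionally zero them to start a fresh window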
/Z
On Tue, 9 May 2023 at 23:58, Igor Fedotov wrote:
> Hi Zakhar,
>
> Let's leave questions regarding cache usage/tuning to a differen
On 5/9/23 16:23, Frank Schilder wrote:
Dear Xiubo,
both issues will cause problems, the one reported in the subject
(https://tracker.ceph.com/issues/57244) and the potential follow-up on MDS
restart
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LYY7TBK63XPR6X6TD7372I2YEPJO
which I think was merged too late* (as in the patch wouldn't be in 17.2.6)
On Tue, May 9, 2023 at 5:52 PM Adam King wrote:
> What's the umask for the "deployer" user? We saw an instance of someone
> hitting something like this, but for them it seemed to only happen when
> they had changed the um
What's the umask for the "deployer" user? We saw an instance of someone
hitting something like this, but for them it seemed to only happen when
they had changed the umask to 027. We had patched in
https://github.com/ceph/ceph/pull/50736 to address it, which I don't think
was merged too late for the
We're happy to announce the 13th backport release in the Pacific series.
https://ceph.io/en/news/blog/2023/v16-2-13-pacific-released/
Notable Changes
---
* CEPHFS: Rename the `mds_max_retries_on_remount_failure` option to
`client_max_retries_on_remount_failure` and move it from mds
Hi Zakhar,
Let's leave questions regarding cache usage/tuning to a different topic
for now and concentrate on the performance drop.
Could you please do the same experiment I asked of Nikola once your
cluster reaches the "bad performance" state (Nikola, could you please use
this improved scenario
Hi Yuval,
Just a follow up on this.
An issue I've just resolved is getting scripts into the cephadm shell. As
it turns out (I didn't know this), the host file system is mounted into the
cephadm shell at /rootfs/.
So I've been editing /tmp/preRequest.lua on my host and then running:
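The command itself is cut off above; assuming the standard RGW Lua mechanism
(radosgw-admin script put), it would look something like this, with the host
file reachable under /rootfs inside the shell (the context name may differ by
release):

  cephadm shell
  # inside the container, the host's /tmp/preRequest.lua is visible as /rootfs/tmp/preRequest.lua
  radosgw-admin script put --infile=/rootfs/tmp/preRequest.lua --context=prerequest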
The East and West clusters have been upgraded to Quincy, 17.2.6.
We are still seeing replication failures. Deep-diving into the logs, I found the
following interesting items.
What is the best way to continue to troubleshoot this?
What is the curl attempting to fetch, but failing to obtain?
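A couple of generic next steps for narrowing down multisite replication
problems (nothing here is specific to this setup):

  radosgw-admin sync status                         # overall metadata/data sync state
  radosgw-admin sync error list                     # recorded sync errors, often names the failing shard/object
  radosgw-admin bucket sync status --bucket=<name>  # per-bucket view if only some buckets lag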
Because Pacific has performance issues.
>
> Curious, why not go to Pacific? You can upgrade up to 2 major releases
> in a go.
>
>
> The upgrade process to pacific is here:
> https://docs.ceph.com/en/latest/releases/pacific/#upgrading-non-cephadm-
> clusters
> The upgrade to Octopus is here:
> ht
Curious, why not go to Pacific? You can upgrade up to 2 major releases in a
go.
The upgrade process to pacific is here:
https://docs.ceph.com/en/latest/releases/pacific/#upgrading-non-cephadm-clusters
The upgrade to Octopus is here:
https://docs.ceph.com/en/latest/releases/octopus/#upgrading-from-
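Before planning the jump, it is also worth confirming what every daemon and
client currently runs; a quick check:

  ceph versions    # which release each daemon type is actually running
  ceph features    # which releases the connected clients report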
Folks,
I am trying to install Ceph on a 10-node cluster and am planning to use
cephadm. My question is: if I add new nodes to this cluster next year,
what Docker image version will cephadm use to add the new nodes?
Is there any local registry option; can I create one to copy the images locally? How
does cep
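To the image-version question: cephadm pins the container image in the cluster
configuration, so nodes added later get that same image until you run an
upgrade. A sketch of how to inspect it and point it at a local registry (the
registry name below is illustrative):

  ceph config dump | grep container_image    # the image the cluster is currently pinned to
  # point the cluster at a local mirror so new nodes pull from it:
  ceph config set global container_image registry.local:5000/ceph/ceph:v17.2.6
  # a fresh bootstrap can be pointed at the mirror as well:
  cephadm --image registry.local:5000/ceph/ceph:v17.2.6 bootstrap --mon-ip <ip>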
Dear Xiubo,
both issues will cause problems, the one reported in the subject
(https://tracker.ceph.com/issues/57244) and the potential follow-up on MDS
restart
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LYY7TBK63XPR6X6TD7372I2YEPJO2L6F).
Either one will cause compute jobs
When you say cache device, do you mean a Ceph cache pool as a tier to a rep-2
pool? If so, you might want to reconsider: cache pools are deprecated and will
be removed from Ceph at some point.
If you have the funds to buy new drives, you could just as well deploy a BeeGFS (or
something else) on these
Dear Dan,
I'm one of the users for whom this is an on-off experience. I had a period
where everything worked fine, only for it to get bad again; see my reply from
October 25, 2022 to the dev thread "Ceph Leadership Team meeting 2022-09-14". Over the
last few days I had a similar experience. For 1 day,
>
> Hi, I want to upgrade my old Ceph cluster + Radosgw from v14 to v15. But
> I'm not using cephadm and I'm not sure how to limit errors as much as
> possible during the upgrade process?
Maybe check the changelog, check the upgrade notes, and continuously monitor the
mailing list?
I have to do the
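For reference, the general non-cephadm upgrade order is the one from the
release notes; a rough sketch (v15 = Octopus):

  ceph osd set noout                      # optional: avoid rebalancing while OSDs restart
  # upgrade packages and restart daemons in order: mons, then mgrs, then OSDs, then MDS/RGW
  ceph versions                           # confirm everything is on the new release
  ceph osd require-osd-release octopus    # finalize once all OSDs run v15
  ceph osd unset noout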
And I just tried with Docker as well; that works too.
Quoting Eugen Block:
Hi,
I just retried without the single-host option and it worked. Also
everything under /tmp/var belongs to root in my case. Unfortunately,
I can't use the curl-based cephadm but the contents are identical, I
compare
Hi,
I just retried without the single-host option and it worked. Also
everything under /tmp/var belongs to root in my case. Unfortunately, I
can't use the curl-based cephadm but the contents are identical, I
compared. Not sure what it could be at the moment.
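For context, the two bootstrap invocations being compared are presumably along
these lines (the mon IP is a placeholder):

  cephadm bootstrap --mon-ip <ip>                          # this variant worked
  cephadm bootstrap --mon-ip <ip> --single-host-defaults   # the variant under suspicion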
Quoting Ben:
Hi, It is uo