Are the scrubs eventually reported as "scrub ok" in the OSD logs? How
long do the scrubs take? Do you see updated timestamps in the 'ceph pg
dump' output (column DEEP_SCRUB_STAMP)?
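A quick way to check both (a rough sketch; log paths and JSON field names can differ between releases and deployments):

# completed scrubs in the OSD logs (assumes file logging; cephadm setups may log to journald instead)
grep "scrub ok" /var/log/ceph/ceph-osd.*.log | tail
# per-PG scrub timestamps; adjust the field names if your release reports them differently
ceph pg dump pgs -f json 2>/dev/null | \
  jq -r '.pg_stats[] | [.pgid, .last_scrub_stamp, .last_deep_scrub_stamp] | @tsv' | head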
Quoting thymus_03fumb...@icloud.com:
I recently switched from 16.2.x to 18.2.x and migrated to cephadm,
sin
On 07/03/2024 08:52, Torkil Svensgaard wrote:
Hi
I tried to do offline read optimization[1] this morning but I am now
unable to map the RBDs in the pool.
I did this prior to running the pg-upmap-primary commands suggested by
the optimizer, as suggested by the latest documentation[2]:
"
c
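For reference, the documented offline read optimization workflow looks roughly like this (a sketch; file names and the pool name are placeholders):

ceph osd getmap -o om
osdmaptool om --read out.txt --read-pool <pool>
# out.txt then contains the suggested 'ceph osd pg-upmap-primary ...' commands;
# note that pg-upmap-primary requires 'ceph osd set-require-min-compat-client reef',
# which clients that do not report Reef feature bits will not satisfy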
Hello Ceph Users
Since we are running a big S3 cluster we would like to externalize the
RGW daemons that do async tasks, like:
* Garbage collection
* Lifecycle policies
* Calculating and updating quotas
Would it be possible to do this in the configuration? Which config values
would I need to
On 13/02/2024 13:31, Torkil Svensgaard wrote:
Hi
Cephadm Reef 18.2.0.
We would like to remove our cluster_network without stopping the cluster
and without having to route between the networks.
global advanced cluster_network 192.168.100.0/24
*
global
Hi,
Yes. You need to turn off the gc and lc threads in the config for your
current (client-side) RGWs (rough sketch below).
Then set up your 'async tasks' RGW without client traffic. No special
configuration is needed, unless you want to tune the gc/lc settings.
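A minimal sketch of what that could look like; the config section name is a placeholder for your client-facing RGW instances, and the option names should be double-checked against your release:

# client-facing RGWs: disable the async maintenance threads
ceph config set client.rgw.frontend rgw_enable_gc_threads false
ceph config set client.rgw.frontend rgw_enable_lc_threads false
ceph config set client.rgw.frontend rgw_enable_quota_threads false
# the dedicated 'async tasks' RGW keeps the defaults (threads enabled)
# and simply receives no client traffic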
k
Sent from my iPhone
> On 7 Mar 2024, at 13:09, Marc Singer wrote:
>
Hi,
I somehow missed your message; thanks for your effort to raise this issue.
Ansgar
On Tue, 16 Jan 2024 at 10:05, Eugen Block wrote:
>
> Hi,
>
> I don't really have an answer, I just wanted to mention that I created
> a tracker issue [1] because I believe there's a bug in the LRC plugin.
The slack workspace is bridged to our also-published irc channels. I
don't think we've done anything to enable xmpp (and two protocols is
enough work to keep alive!).
-Greg
On Wed, Mar 6, 2024 at 9:07 AM Marc wrote:
>
> Is it possible to access this also with xmpp?
>
> >
> > At the very bottom of
Hi,
TL;DR
Failure domain considered is data center. Cluster in stretch mode [1].
- What is the minimum number of monitor nodes (apart from the tie breaker)
needed per failure domain?
- What is the minimum number of storage nodes needed per failure domain?
- Are device classes supported with str
On Thu, Mar 7, 2024 at 9:09 AM Stefan Kooman wrote:
>
> Hi,
>
> TL;DR
>
> Failure domain considered is data center. Cluster in stretch mode [1].
>
> - What is the minimum number of monitor nodes (apart from the tie breaker)
> needed per failure domain?
You need at least two monitors per site. This is
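As a concrete (hypothetical) sketch of that minimum layout, with two monitors per data center plus the tie breaker in a third location; the mon names and the CRUSH rule name are placeholders:

ceph mon set_location mon1 datacenter=dc1
ceph mon set_location mon2 datacenter=dc1
ceph mon set_location mon3 datacenter=dc2
ceph mon set_location mon4 datacenter=dc2
ceph mon set_location tiebreaker datacenter=dc3
ceph mon enable_stretch_mode tiebreaker stretch_rule datacenter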
Hello everybody,
I'm encountering strange behavior on an infrastructure (it's pre-production,
but it's very ugly). After a "drain" on a monitor (and a manager), the MGRs
all crash on startup:
Mar 07 17:06:47 pprod-mon1 ceph-mgr[564045]: mgr ms_dispatch2 standby
mgrmap(e 1310) v1
Mar 07 17:06:47 pprod-mo
What is this IRC access then? Is there some web client that can be used? Is
this ceph.io down? I can't load the website or even get a ping response.
>
> The slack workspace is bridged to our also-published irc channels. I
> don't think we've done anything to enable xmpp (and two protocols is
> enough work to keep ali
I took the wrong line =>
https://github.com/ceph/ceph/blob/v17.2.6/src/mon/MonClient.cc#L822
On Thu, 7 Mar 2024 at 18:21, David C. wrote:
>
> Hello everybody,
>
> I'm encountering strange behavior on an infrastructure (it's
> pre-production but it's very ugly). After a "drain" on monitor (a
Ok, got it:
[root@pprod-admin:/var/lib/ceph/]# ceph mon dump -f json-pretty | egrep "name|weigh"
dumped monmap epoch 14
"min_mon_release_name": "quincy",
"name": "pprod-mon2",
"weight": 10,
"name": "pprod-mon3",
"weight": 10,
"name":
I’m curious how the weights might have been changed. I’ve never
touched a mon weight myself, do you know how that happened?
Zitat von "David C." :
Ok, got it:
[root@pprod-admin:/var/lib/ceph/]# ceph mon dump -f json-pretty | egrep "name|weigh"
dumped monmap epoch 14
"min_mon_release_name
On 07-03-2024 18:16, Gregory Farnum wrote:
On Thu, Mar 7, 2024 at 9:09 AM Stefan Kooman wrote:
Hi,
TL;DR
Failure domain considered is data center. Cluster in stretch mode [1].
- What is the minimum number of monitor nodes (apart from the tie breaker)
needed per failure domain?
You need at lea
Is there anything we can do to narrow down the policy issue here? Any of the
Principal, Action, Resource, or Condition matches could be failing. You might
try replacing each with a wildcard, one at a time, until you see the policy
take effect (rough sketch below).
On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote:
>
> Hi
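For example, a test iteration that wildcards only the Principal while leaving the rest untouched could look like this (bucket name, actions and resources are illustrative placeholders, not the policy from the original report):

cat > policy-test.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
EOF
s3cmd setpolicy policy-test.json s3://mybucket
# if access now works, restore the real Principal and wildcard the next element instead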
I think heartbeats will fail over to the public network if the private network
doesn't work -- it may not have always done that.
>> Hi
>> Cephadm Reef 18.2.0.
>> We would like to remove our cluster_network without stopping the cluster and
>> without having to route between the networks.
>> global
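If that holds, removing the setting would mostly be a config change plus a rolling restart of the OSDs; a rough sketch, not a tested procedure:

ceph config rm global cluster_network
# restart OSD daemons (e.g. one host at a time) so they stop binding to the
# old cluster network; with cephadm, per daemon:
ceph orch daemon restart osd.<id>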
Some monitors have existed for many years (weight 10); others were added
later (weight 0).
=> https://github.com/ceph/ceph/commit/2d113dedf851995e000d3cce136b69bfa94b6fe0
On Thursday, 7 March 2024, Eugen Block wrote:
> I’m curious how the weights might have been changed. I’ve never touched a
> mon
Hi everyone,
Ceph Days are coming to New York City again this year, co-hosted by
Bloomberg Engineering and Clyso!
We're planning a full day of Ceph content, well timed to learn about the
latest and greatest Squid release.
https://ceph.io/en/community/events/2024/ceph-days-nyc/
We're opening the
Oh, dude! You opened my eyes! I thought (it is written this way in the
documentation) that all commands need to be executed inside the cephadm shell.
That is why I always ran 'cephadm shell' first, dropping into the container
environment, and only then everything else.
Where can I read about proper usage of cephadm t
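For the record, both styles work; a quick sketch:

# run a single command through the containerized tools, no interactive shell needed
cephadm shell -- ceph -s
# or drop into the container environment first and work from there
cephadm shell
ceph -s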
good news
When a reshard occurs, IO is blocked. Why has this serious problem not been
solved yet?
We ran into this issue last week when upgrading to Quincy. We asked ourselves
the same question: how did the weight change? We did not even know that was
a thing.
We checked our other clusters and we have some where all the mons have a weight
of 10, and there it is not an issue. So
Thanks! That's very interesting to know!
Zitat von "David C." :
Some monitors have existed for many years (weight 10); others were added
later (weight 0).
=> https://github.com/ceph/ceph/commit/2d113dedf851995e000d3cce136b69bfa94b6fe0
On Thursday, 7 March 2024, Eugen Block wrote:
I’m curious h
Hi All,
The subject pretty much says it all: I need to use cephfs-shell and it's not
installed on my Ceph node, and I can't seem to locate which package contains
it - help please. :-)
Cheers
Dulux-Oz
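One way to track it down (a sketch; exact package names vary by distro and Ceph release):

# RPM-based distros
dnf provides '*/cephfs-shell'
# Debian/Ubuntu (requires apt-file)
apt-file search cephfs-shell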