Of course it's possible. You can either change this rule by extracting
the crushmap, decompiling it, editing the "take" section, compiling it,
and injecting it back into the cluster. Or you can simply create a new
rule with the class hdd specified and set this new rule for your pools. So
the first ap
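For the first approach, the standard round trip looks like this (a sketch;
the file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin        # extract the binary crushmap
crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
# edit crushmap.txt: change the rule's "step take default" to "step take default class hdd"
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject it back into the cluster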
Hello team,
I am creating a new cluster that will be deployed using cephadm. I will use
192.168.1.0/24 as the public network and 10.10.90.0/24 as the internal network
for OSD and MON communication. I would like to know whether this command is
correct, as it is my first time using cephadm: sudo cephadm bootstrap --mon-ip
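For what it's worth, a bootstrap along those lines might look like this (a
sketch; 192.168.1.10 is a placeholder mon IP on your public network):

sudo cephadm bootstrap --mon-ip 192.168.1.10 --cluster-network 10.10.90.0/24

The --cluster-network flag puts OSD replication and heartbeat traffic on
10.10.90.0/24; note that MONs communicate over the public network derived
from the mon IP.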
Hi Laimis,
I apologize for not paying attention to the Reddit link/discussion in your
previous message. Forget about osd_scrub_chunk_max. It's very unlikely to
explain why scrubbing is so slow that it barely progresses (if at all) for many
v19.2 users.
Given the number of reports and recent
I know this is not a good test, but when I dd to an RBD image like this:
[@ ~]# dd if=/dev/zero of=/dev/rbd0 bs=1M count=100 status=progress
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.399629 s, 262 MB/s
There is a big cache difference doing this via the tcmu-runn
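For a comparison less skewed by caching, a direct-I/O variant of the same test
(a sketch, assuming /dev/rbd0 is mapped as above):

[@ ~]# dd if=/dev/zero of=/dev/rbd0 bs=1M count=100 oflag=direct status=progress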
There aren’t Nautilus packages for those releases, AFAICT?
https://discourse.ubuntu.com/t/supported-ceph-versions/18799
They seem to have jumped over both Luminous and Mimic to Octopus. Upstream
tends to advise not updating Ceph more than two major releases in one step, so
the OP’s question
On 2024-11-27 16:54, Pardhiv Karri wrote:
Hi,
I am in a tricky situation. Our current OSD nodes (running Luminous) are on
the latest Dell servers, which only support Ubuntu 20.04.
What do you mean “only support Ubuntu 20.04”? Just upgrade to 22.04 and
then to 24.04.
--
Sarunas Burdulis
Dart
Hi,
I am in a tricky situation. Our current OSD nodes (running Luminous) are on
the latest Dell servers, which only support Ubuntu 20.04. The Luminous
packages were installed on 16.04, so the packages are still Xenial; I later
upgraded the OS to 20.04 and added OSDs to the cluster. Now, I am tryin
In your situation the JJ Balancer might help.
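For reference, the JJ balancer is a third-party script
(https://github.com/TheJJ/ceph-balancer) that emits upmap commands; a typical
run looks roughly like this (a sketch, check the repo's README for the current
flags):

./placementoptimizer.py -v balance --max-pg-moves 10 | tee /tmp/balance-upmaps
bash /tmp/balance-upmaps   # apply the generated pg-upmap-items commands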
>
> On 2024-11-27 17:53, Anthony D'Atri wrote:
>>> Hi,
>>> My Ceph cluster is out-of-balance. The number of PGs per OSD ranges from
>>> about 50 to 100. This is far from balanced.
>> Do you have multiple CRUSH roots or device classes
We use rclone here exclusively. (We previously used mc.)
On 2024-11-15 22:45, Orange, Gregory (Pawsey, Kensington WA) wrote:
We have a lingering fondness for MinIO's mc client, and previously
recommended it to users of our RGW clusters. In certain uses, however,
performance was much poorer t
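For anyone making the same switch, a minimal rclone remote for RGW looks like
this (a sketch; the endpoint and credentials are placeholders):

[rgw]
type = s3
provider = Ceph
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
endpoint = https://rgw.example.com

Then, for example: rclone ls rgw:mybucket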
So the balancer is working as expected, and it is normal that it cannot
balance any further?
Any other suggestions here?
On 2024-11-27 18:05, Anthony D'Atri wrote:
In your situation the JJ Balancer might help.
On 2024-11-27 17:53, Anthony D'Atri wrote:
Hi,
My Ceph cluster is out-of-balance
Hi,
is it possible to set/change the following rule, which is already in use, so that it only uses hdd?
{
    "rule_id": 1,
    "rule_name": "ec32",
    "type": 3,
    "steps": [
        {
            "op": "set_chooseleaf_tries",
            "num": 5
        },
        {
            "op": "set_choose_tries",
            "
On 2024-11-27 17:53, Anthony D'Atri wrote:
Hi,
My Ceph cluster is out-of-balance. The number of PGs per OSD ranges
from about 50 to 100. This is far from balanced.
Do you have multiple CRUSH roots or device classes? Are all OSDs the
same weight?
Yes, I have 2 CRUSH roots
>
> Hi,
>
> My Ceph cluster is out-of-balance. The number of PGs per OSD ranges from
> about 50 to 100. This is far from balanced.
Do you have multiple CRUSH roots or device classes? Are all OSDs the same
weight?
> My disk sizes range from 1.6 TB to 2.4 TB.
Ah. The numb
Do you have osd_scrub_begin_hour / osd_scrub_end_hour set? Constraining times
when scrubs can run can result in them piling up.
Are you saying that an individual PG may take 20+ elapsed days to perform a
deep scrub?
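A quick way to check (a sketch using the standard CLI):

ceph config get osd osd_scrub_begin_hour
ceph config get osd osd_scrub_end_hour

As far as I know, the defaults of 0 and 0 allow scrubbing all day; other
values confine scrubs to that time window.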
> Might be the result of osd_scrub_chunk_max now being 15 instead of 25
> p
Hi,
My Ceph cluster is out-of-balance. The number of PGs per OSD ranges
from about 50 to 100. This is far from balanced.
Today I enabled the balancer module, but unfortunately it doesn't
want to balance:
"Unable to find further optimization, or pool(s) pg_num is decrea
Failed: Clients can not be defined until a HA configuration has been defined
(>2 gateways)
Who cares when I am testing? I am fully aware I only entered one.
FYI, even though I don't think it's related to our problems, since we're
running v18...
Michel
Forwarded Message
Subject: [ceph-users] Re: Squid: deep scrub issues
Date: Wed, 27 Nov 2024 17:15:32 +0100 (CET)
From: Frédéric Nass
To: Laimis Juzeliūnas
Cc: ce
What is the best location to get tcmu-runner RPMs? (They do not seem to be
available for EL9.)
create ceph-gw-1 10.172.19.21 skipchecks=true
returns:
The first gateway defined must be the local machine
I can only put the full domain name here, not just the hostname; it does not
seem to match the
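As far as I recall, ceph-iscsi expects the gateway name to match what the node
itself resolves as its FQDN, so it can help to compare against what the host
reports (a sketch; the exact check may differ by version):

python3 -c 'import socket; print(socket.getfqdn())'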
Hi Laimis,
Might be the result of osd_scrub_chunk_max now being 15 instead of 25
previously. See [1] and [2].
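To check the active value, or restore the previous default (a sketch with the
standard CLI):

ceph config get osd osd_scrub_chunk_max
ceph config set osd osd_scrub_chunk_max 25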
Cheers,
Frédéric.
[1] https://tracker.ceph.com/issues/68057
[2]
https://github.com/ceph/ceph/pull/59791/commits/0841603023ba53923a986f2fb96ab7105630c9d3
- On 26 Nov 24, at 23:36, L
Does it make sense to try and see if I can connect a macOS client to an RBD
device, or is this never going to be a stable, supported environment? Are people
doing this?
Hi,
As far as your previous email is concerned, the MDS could not find the session
for the client(s) in the sessionmap. This is a bit odd because normally
there would always be a session, but it's fine since it's trying to close a
session which is already closed, so it just ignores it and moves ahe
- On 27 Nov 24, at 10:19, Igor Fedotov wrote:
> Hi Istvan,
> first of all, let me note that we don't know why BlueStore is out of
> space on John's cluster.
> It's just an unconfirmed hypothesis from Frédéric that it's caused by high
> fragmentation and BlueFS's inability to use
On 27/11/24 13:48, Marc wrote:
> How should I rewrite this to ceph.conf
>
> ceph config set mon mon_warn_on_insecure_global_id_reclaim false
> ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
The way to do it would be
ceph config set mon mon_warn_on_insecure_global_id_recl
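For reference, the ceph.conf equivalent would be the same options under the
[mon] section (a sketch):

[mon]
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false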
How should I rewrite this in ceph.conf?
ceph config set mon mon_warn_on_insecure_global_id_reclaim false
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
Hi,
Right now the cluster has been recovering for the last two weeks, and it seems
it will keep doing so for another week or so.
Meanwhile a new Quincy update came out, which fixes some of the things for us,
but we would need to upgrade to AlmaLinux 9.
Has anyone done maintenance or upgrades of nodes
>
> Don't laugh. I am experimenting with Ceph in an enthusiast,
Everyone has a smile on their face when working with ceph! ;)
>
> Seriously, I think that, with just a little bit of polishing and
> automation, Ceph could be deployed in the small-office/home-office
> setting. Don't laugh. Thi
Istvan,
Unfortunately there is no such formula.
It depends entirely on the allocation/release pattern the disk has seen,
which in turn depends on how clients performed object writes/removals.
My general observation is that the issue tends to happen on small drives
and/or at very high space uti
Hi Istvan,
First of all, let me note that we don't know why BlueStore is
out of space on John's cluster.
It's just an unconfirmed hypothesis from Frédéric that it's caused by
high fragmentation and BlueFS's inability to use chunks smaller than
64K. In fact, the fragmentation issue is fix
Yep!
But better to try with a single OSD first.
On 26.11.2024 20:48, John Jasen wrote:
Let me see if I have the approach right-ish:
Scrounge some more disk for the servers with full/down OSDs.
Partition the new disks into LVs, one for each downed OSD.
Attach each as an LVM new-db to the downed OSDs.
Restar
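For a single OSD, that would look roughly like this (a sketch; the OSD id,
fsid, and VG/LV names are placeholders, and the OSD must be stopped first):

ceph-volume lvm new-db --osd-id 12 --osd-fsid <osd-fsid> --target vg_newdb/db-osd12

After attaching the new DB device, restart the OSD and check
ceph-volume lvm list to confirm the association.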