[ceph-users] Re: EC pool only for hdd

2024-11-27 Thread Eugen Block
Of course it's possible. You can either change this rule by extracting the crushmap, decompiling it, editing the "take" section, compiling it, and injecting it back into the cluster. Or you can simply create a new rule with the class hdd specified and set this new rule for your pools. So the first ap
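A rough sketch of both approaches (profile, rule, and pool names are placeholders, not taken from the thread):
    # approach 1: edit the CRUSH map by hand
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt, change the rule's "step take default" to "step take default class hdd"
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # approach 2: create a new hdd-only EC rule and switch the pool to it
    ceph osd erasure-code-profile set ec32-hdd k=3 m=2 crush-device-class=hdd
    ceph osd crush rule create-erasure ec32-hdd ec32-hdd
    ceph osd pool set <pool> crush_rule ec32-hdd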

[ceph-users] internal communication network

2024-11-27 Thread Michel Niyoyita
Hello team, I am creating a new cluster which will be deployed using cephadm. I will use 192.168.1.0/24 as the public network and 10.10.90.0/24 as the internal network for OSD and mon communication. I would like to know if this command is helpful, as it is my first time using cephadm: sudo cephadm bootstrap --mon-ip
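A minimal sketch of such a bootstrap, assuming the mon IP (a placeholder here) sits in the public network:
    sudo cephadm bootstrap --mon-ip 192.168.1.10 --cluster-network 10.10.90.0/24
    # the cluster network can also be set or changed after bootstrap:
    ceph config set global cluster_network 10.10.90.0/24
Note that mons always talk on the public network; the cluster network only carries OSD replication and heartbeat traffic.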

[ceph-users] Re: Squid: deep scrub issues

2024-11-27 Thread Frédéric Nass
Hi Laimis, I apologize for not paying attention to the Reddit link/discussion in your previous message. Forget about osd_scrub_chunk_max. It's very unlikely to explain why scrubbing is so slow that it doesn't progress (if at all) for many v19.2 users. Given the number of testimonies and recent

[ceph-users] testing with tcmu-runner vs rbd map

2024-11-27 Thread Marc
I know this is not a good test, but when I dd to an rbd image like this [@ ~]# dd if=/dev/zero of=/dev/rbd0 bs=1M count=100 status=progress 100+0 records in 100+0 records out 104857600 bytes (105 MB, 100 MiB) copied, 0.399629 s, 262 MB/s There is a big cache difference doing this via the tcmu-runn
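For comparison, a sketch of the same test with the page cache bypassed, assuming /dev/rbd0 is the mapped image:
    dd if=/dev/zero of=/dev/rbd0 bs=1M count=100 oflag=direct status=progress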

[ceph-users] Re: Ceph Nautilus packages for ubuntu 20.04

2024-11-27 Thread Anthony D'Atri
There aren’t Nautilus packages for those releases, AFAICT? https://discourse.ubuntu.com/t/supported-ceph-versions/18799 They seem to have jumped over both Luminous and Mimic to Octopus. Upstream tends to advise not updating Ceph more than two major releases in one step, so the OP’s question

[ceph-users] Re: Ceph Nautilus packages for ubuntu 20.04

2024-11-27 Thread Sarunas Burdulis
On 2024-11-27 16:54, Pardhiv Karri wrote: Hi, I am in a tricky situation. Our current OSD nodes (luminous version) are on the latest Dell servers, which only support Ubuntu 20.04. What do you mean “only support Ubuntu 20.04”? Just upgrade to 22.04 and then to 24.04. -- Sarunas Burdulis Dart

[ceph-users] Ceph Nautilus packages for ubuntu 20.04

2024-11-27 Thread Pardhiv Karri
Hi, I am in a tricky situation. Our current OSD nodes (luminous version) are on the latest Dell servers, which only support Ubuntu 20.04. The Luminous packages were installed on 16.04, so the packages are still Xenial; I later upgraded the OS to 20.04 and added OSDs to the cluster. Now, I am tryin

[ceph-users] Re: EC pool only for hdd

2024-11-27 Thread Anthony D'Atri

[ceph-users] Re: Balancer: Unable to find further optimization

2024-11-27 Thread Anthony D'Atri
In your situation the JJ Balancer might help. > > On 2024-11-27 17:53, Anthony D'Atri wrote: >>> Hi, >>> My Ceph cluster is out-of-balance. The amount of PG's per OSD ranges from >>> about 50 up to 100 PG's per OSD. This is far from balanced. >> Do you have multiple CRUSH roots or device classes

[ceph-users] Re: ceph cluster planning size / disks

2024-11-27 Thread Matthew Darwin
We use rclone here exclusively. (Previously we used mc.) On 2024-11-15 22:45, Orange, Gregory (Pawsey, Kensington WA) wrote: We have a lingering fondness for Minio's mc client, and previously recommended it to users of our RGW clusters. In certain uses, however, performance was much poorer t
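A minimal sketch of an rclone remote for an RGW endpoint (endpoint, keys, and bucket are placeholders):
    rclone config create rgw s3 provider Ceph endpoint https://rgw.example.com access_key_id <ACCESS_KEY> secret_access_key <SECRET_KEY>
    rclone ls rgw:mybucket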

[ceph-users] Re: Balancer: Unable to find further optimization

2024-11-27 Thread sinan
So the balancer is working as expected; is it normal that it cannot balance any further? Any other suggestions here? On 2024-11-27 18:05, Anthony D'Atri wrote: In your situation the JJ Balancer might help. On 2024-11-27 17:53, Anthony D'Atri wrote: Hi, My Ceph cluster is out-of-balance

[ceph-users] EC pool only for hdd

2024-11-27 Thread Rok Jaklič
Hi, is it possible to set/change following already used rule to only use hdd? { "rule_id": 1, "rule_name": "ec32", "type": 3, "steps": [ { "op": "set_chooseleaf_tries", "num": 5 }, { "op": "set_choose_tries", "
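For reference, a decompiled hdd-restricted EC rule looks roughly like this (rule id, name, and failure domain are placeholders):
    rule ec32-hdd {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
    }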

[ceph-users] Re: Balancer: Unable to find further optimization

2024-11-27 Thread sinan
On 2024-11-27 17:53, Anthony D'Atri wrote: Hi, My Ceph cluster is out-of-balance. The amount of PG's per OSD ranges from about 50 up to 100 PG's per OSD. This is far from balanced. Do you have multiple CRUSH roots or device classes? Are all OSDs the same weight? Yes, I have 2 CRUSH roots

[ceph-users] Re: Balancer: Unable to find further optimization

2024-11-27 Thread Anthony D'Atri
> > Hi, > > My Ceph cluster is out-of-balance. The amount of PG's per OSD ranges from > about 50 up to 100 PG's per OSD. This is far from balanced. Do you have multiple CRUSH roots or device classes? Are all OSDs the same weight? > My disk sizes differs from 1.6T up to 2.4T. Ah. The numb

[ceph-users] Re: Squid: deep scrub issues

2024-11-27 Thread Anthony D'Atri
Do you have osd_scrub_begin_hour / osd_scrub_end_hour set? Constraining times when scrubs can run can result in them piling up. Are you saying that an individual PG may take 20+ elapsed days to perform a deep scrub? > Might be the result of osd_scrub_chunk_max now being 15 instead of 25 > p
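A quick way to check whether a scrub time window is in effect and what chunk size is currently configured, as a sketch:
    ceph config get osd osd_scrub_begin_hour
    ceph config get osd osd_scrub_end_hour
    ceph config get osd osd_scrub_chunk_max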

[ceph-users] Balancer: Unable to find further optimization

2024-11-27 Thread sinan
Hi, My Ceph cluster is out-of-balance. The amount of PG's per OSD ranges from about 50 up to 100 PG's per OSD. This is far from balanced. Today, I have enabled the balancer module. But unfortunately, it doesn't want to balance: "Unable to find further optimization, or pool(s) pg_num is decrea
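One commonly suggested knob for this message, sketched here on the assumption that the upmap balancer mode is in use:
    ceph balancer status
    ceph osd df tree                                        # PG count and %use per OSD
    ceph config set mgr mgr/balancer/upmap_max_deviation 1  # default is 5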

[ceph-users] iscsi-ceph

2024-11-27 Thread Marc
Failed: Clients can not be defined until a HA configuration has been defined (>2 gateways) Who cares when I am testing; I am fully aware I only entered 1.

[ceph-users] Fwd: Re: Squid: deep scrub issues

2024-11-27 Thread Michel Jouvin
FYI, even though I think it is not related to our problems since we are running v18... Michel Forwarded message Subject: [ceph-users] Re: Squid: deep scrub issues Date: Wed, 27 Nov 2024 17:15:32 +0100 (CET) From: Frédéric Nass To: Laimis Juzeliūnas Cc: ce

[ceph-users] iscsi testing

2024-11-27 Thread Marc
What is the best location to get tcmu-runner rpms from? (They do not seem to be available in el9.) create ceph-gw-1 10.172.19.21 skipchecks=true returns: The first gateway defined must be the local machine. I can only put the full domain name here, not even just the hostname; it does not seem to match the
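A sketch of a quick check, based on Marc's observation that the gateway name apparently has to match the local FQDN:
    hostname -f   # the name passed to "create" should match this
    # then inside gwcli, under /iscsi-targets/<iqn>/gateways:
    #   create ceph-gw-1.example.com 10.172.19.21 skipchecks=true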

[ceph-users] Re: Squid: deep scrub issues

2024-11-27 Thread Frédéric Nass
Hi Laimis, Might be the result of osd_scrub_chunk_max now being 15 instead of 25 previously. See [1] and [2]. Cheers, Frédéric. [1] https://tracker.ceph.com/issues/68057 [2] https://github.com/ceph/ceph/pull/59791/commits/0841603023ba53923a986f2fb96ab7105630c9d3 - Le 26 Nov 24, à 23:36, L

[ceph-users] macos rbd client

2024-11-27 Thread Marc
Does it make sense to try and see if I can connect a macOS client to an rbd device, or is this never going to be a stable, supported environment? Are people doing this?

[ceph-users] Re: CephFS 16.2.10 problem

2024-11-27 Thread Dhairya Parmar
Hi, As far as your previous email is concerned, the MDS could not find the session for the client(s) in the sessionmap. This is a bit weird because normally there would always be a session, but it's fine since it's trying to close a session that is already closed, so it's just ignoring it and moving ahe

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Frédéric Nass
- On 27 Nov 24, at 10:19, Igor Fedotov wrote: > Hi Istvan, > first of all let me make a remark that we don't know why BlueStore is out of > space at John's cluster. > It's just an unconfirmed hypothesis from Frederic that it's caused by high > fragmentation and BlueFS's inability to use

[ceph-users] Re: config set -> ceph.conf

2024-11-27 Thread Gregory Orange
On 27/11/24 13:48, Marc wrote: > How should I rewrite this to ceph.conf > > ceph config set mon mon_warn_on_insecure_global_id_reclaim false > ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false The way to do it would be ceph config set mon mon_warn_on_insecure_global_id_recl

[ceph-users] config set -> ceph.conf

2024-11-27 Thread Marc
How should I rewrite this to ceph.conf? ceph config set mon mon_warn_on_insecure_global_id_reclaim false ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
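For reference, the equivalent ceph.conf stanza would look roughly like this:
    [mon]
        mon_warn_on_insecure_global_id_reclaim = false
        mon_warn_on_insecure_global_id_reclaim_allowed = false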

[ceph-users] Upgrade of OS and ceph during recovery

2024-11-27 Thread Rok Jaklič
Hi, right now the cluster has been doing recovery for the last two weeks, and it seems it will keep doing so for the next week or so. Meanwhile a new Quincy update came out, which fixes some of the things for us, but we would need to upgrade to AlmaLinux 9. Has anyone done maintenance or upgrades of nodes
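As a sketch, the flags commonly set before taking a node down for this kind of maintenance (whether to do so mid-recovery is exactly the question being asked):
    ceph osd set noout
    ceph osd set norebalance
    # ... upgrade/reboot the node ...
    ceph osd unset norebalance
    ceph osd unset noout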

[ceph-users] Re: CephFS empty files in a Frankenstein system

2024-11-27 Thread Marc
> > Don't laugh. I am experimenting with Ceph in an enthusiast, Everyone has a smile on their face when working with ceph! ;) > > Seriously, I think that, with just a little bit of polishing and > automation, Ceph could be deployed in the small-office/home-office > setting. Don't laugh. Thi

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
Istvan, Unfortunately there is no such formula. It completely depends on the allocation/release pattern that happened on the disk, which in turn depends on how clients performed object writes/removals. My general observation is that the issue tends to happen on small drives and/or very high space uti

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
Hi Istvan, first of all let me make a remark that we don't know why BlueStore is out of space on John's cluster. It's just an unconfirmed hypothesis from Frederic that it's caused by high fragmentation and BlueFS's inability to use chunks smaller than 64K. In fact, the fragmentation issue is fix
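A rough way to gauge fragmentation and BlueFS usage on an OSD that is still running (the osd id is a placeholder):
    ceph daemon osd.0 bluestore allocator score block   # closer to 1 = more fragmented
    ceph daemon osd.0 bluefs stats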

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
Yep! But better try with a single OSD first. On 26.11.2024 20:48, John Jasen wrote: Let me see if I have the approach right'ish: scrounge some more disk for the servers with full/down OSDs. partition the new disks into LVs for each downed OSD. Attach as a lvm new-db to the downed OSDs. Restar
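A sketch of that sequence for a single OSD (VG/LV names, OSD id, and fsid are placeholders):
    lvcreate -L 50G -n osd-12-db vg-newdisk
    ceph-volume lvm new-db --osd-id 12 --osd-fsid <osd-fsid> --target vg-newdisk/osd-12-db
    systemctl start ceph-osd@12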