[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Venky Shankar
+Patrick Donnelly On Tue, Mar 5, 2024 at 9:18 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/64721#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: > > smoke - in progress > rados - Radek, Laura? > q

[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Venky Shankar
Hi Laura, On Wed, Mar 6, 2024 at 4:53 AM Laura Flores wrote: > Here are the rados and smoke suite summaries. > > @Radoslaw Zarzynski , @Adam King > , @Nizamudeen A , mind having a look to ensure the > results from the rados suite look good to you? > > @Venky Shankar mind having a look at the

[ceph-users] Re: Ceph Cluster Config File Locations?

2024-03-05 Thread Eugen Block
Hi, I've checked, checked, and checked again that the individual config files all point towards the correct ip subnet for the monitors, and I cannot find any trace of the old subnet's ip address in any config file (that I can find). what are those "individual config files"? The ones under
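If the old subnet keeps turning up even though every config file looks right, one thing worth checking is the monmap, since a running mon takes the monitor addresses from there rather than from ceph.conf. A rough sketch, with the mon ID and paths as placeholders (stop the mon before extracting, or just use `ceph mon dump` if the cluster still has quorum):

    ceph mon dump                                      # monitor addresses the cluster currently knows
    ceph-mon -i <mon-id> --extract-monmap /tmp/monmap  # on the mon host, with that mon stopped
    monmaptool --print /tmp/monmap                     # inspect the extracted monmap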

[ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Konstantin Shalygin
Hi, I'm not aware of what SW is, but if this software works with the Prometheus metrics format - why not. Anyway, the exporters are open source; you can modify the existing code for your environment. k Sent from my iPhone > On 6 Mar 2024, at 07:58, Michael Worsham wrote: > > This looks interest
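For pulling the capacity numbers into something like SolarWinds, one minimal sketch is to scrape the mgr prometheus endpoint directly and compute the remaining percentage. This assumes the prometheus mgr module is enabled on its default port 9283; the host name and the metric names (ceph_cluster_total_bytes, ceph_cluster_total_used_bytes) should be checked against your own /metrics output:

    curl -s http://ceph-mgr-host:9283/metrics | awk '
      /^ceph_cluster_total_bytes[ {]/      { total = $NF }   # total raw capacity in bytes
      /^ceph_cluster_total_used_bytes[ {]/ { used  = $NF }   # raw bytes used
      END { if (total > 0) printf "remaining_percent %.1f\n", 100 * (total - used) / total }'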

[ceph-users] Re: How to build ceph without QAT?

2024-03-05 Thread Feng, Hualong
Hi Dongchuan, Could I know which version or which commit you are building, and your environment: system, CPU, kernel? ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo: this command should be OK without QAT. Thanks -Hualong > -Original Message- > From: 张东川 > Sent: Wednesday, March

[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Nizamudeen A
For dashboard, I see 1 failed, 2 dead and 2 passed jobs. The failed e2e is something we fixed a while ago. Not sure why it's broken again, but if it's recurring, we'll have a look. In any case it's not a blocker. On Wed, Mar 6, 2024 at 4:53 AM Laura Flores wrote: > Here are the rados and smoke

[ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Michael Worsham
This looks interesting, but instead of Prometheus, could the data be exported for SolarWinds? The intent is to have SW watch the available storage space allocated and then to alert when a certain threshold is reached (75% remaining for a warning; 95% remaining for a critical). -- Michael _

[ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Konstantin Shalygin
Hi, For RGW usage statistics you can use radosgw_usage_exporter [1] k [1] https://github.com/blemmenes/radosgw_usage_exporter Sent from my iPhone > On 6 Mar 2024, at 00:21, Michael Worsham wrote: > Is there an easy way to poll the ceph cluster buckets in a way to see how > much space is re

[ceph-users] How to build ceph without QAT?

2024-03-05 Thread 张东川
Hi guys, I tried both following commands. Neither of them worked. "./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_QAT=OFF -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF" "ARGS="-DWITH_QAT=OFF -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF" ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo" I still see erro
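Not an authoritative fix, but one way to see which QAT-related options cmake actually recorded is to inspect the cmake cache in the build directory and, if needed, re-run cmake there with the same flags (flag names are taken from the command above, not verified against the tree being built):

    cd build
    grep -i qat CMakeCache.txt                                  # what WITH_QAT* ended up set to
    cmake -DWITH_QAT=OFF -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF .  # reconfigure in place with QAT off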

[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Laura Flores
Here are the rados and smoke suite summaries. @Radoslaw Zarzynski , @Adam King , @Nizamudeen A , mind having a look to ensure the results from the rados suite look good to you? @Venky Shankar mind having a look at the smoke suite? There was a resurgence of https://tracker.ceph.com/issues/57206.

[ceph-users] Re: debian-reef_OLD?

2024-03-05 Thread Daniel Brown
Thank you! > On Mar 5, 2024, at 3:55 PM, Laura Flores wrote: > > Hi all, > > The issue should be fixed, but please let us know if anything is still > amiss. > > Thanks, > Laura > > On Tue, Mar 5, 2024 at 9:59 AM Reed Dier wrote: > >> Given that both the debian and rpm paths have been ap

[ceph-users] Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Michael Worsham
Is there an easy way to poll the ceph cluster buckets in a way to see how much space is remaining? And is it possible to see how much ceph cluster space is remaining overall? I am trying to extract the data from our Ceph cluster and put it into a format that our SolarWinds can understand in who
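For a quick CLI poll (as an alternative to the Prometheus exporters discussed elsewhere in the thread), something like the sketch below could feed whatever format SolarWinds ingests; it only uses standard tooling, though the exact JSON field names may differ between releases:

    # overall cluster capacity and usage, machine-readable
    ceph df --format json | jq '.stats'
    # per-bucket usage for RGW (sizes and object counts live under .usage in the output)
    radosgw-admin bucket stats --format json | jq '.[] | {bucket, usage}'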

[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Yuri Weinstein
Only suites below need approval: smoke - Radek, Laura? rados - Radek, Laura? We are also in the process of upgrading gibba and then LRC to the 18.2.2 RC. On Tue, Mar 5, 2024 at 7:47 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/64721#n

[ceph-users] Re: debian-reef_OLD?

2024-03-05 Thread Laura Flores
Hi all, The issue should be fixed, but please let us know if anything is still amiss. Thanks, Laura On Tue, Mar 5, 2024 at 9:59 AM Reed Dier wrote: > Given that both the debian and rpm paths have been appended with _OLD, and > this more recent post about 18.2.2 (hot-fix), it sounds like there

[ceph-users] Re: Number of pgs

2024-03-05 Thread Mark Nelson
There are both pros and cons to having more PGs. Here are a couple of considerations:

Pros:
1) Better data distribution prior to balancing (and maybe after)
2) Fewer objects/data per PG
3) Lower per-PG lock contention

Cons:
1) Higher PG log memory usage until you hit the osd target unless you

[ceph-users] Re: Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
Hi Anthony, Thank you very much for your input. It is a mixture of HDDs and a few NVMe drives. The sizes of the HDDs vary between 8-18 TB, and `ceph osd df` reports 23-25 pgs for the small drives and 50-55 for the bigger ones. Considering that the cluster is working fine, what would be the benefit

[ceph-users] Re: Number of pgs

2024-03-05 Thread Anthony D'Atri
If you only have one pool of significant size, then your PG ratio is around 40. IMHO too low. If you're using HDDs I personally might set to 8192; if using NVMe SSDs arguably 16384 -- assuming that your OSD sizes are more or less close to each other. `ceph osd df` will show toward the righ
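For reference, the ~40 figure follows from the numbers quoted in the thread (2048 PGs, 3 replicas, 153 OSDs), roughly:

    # PGs per OSD ≈ pg_num * replica count / number of OSDs
    echo $(( 2048 * 3 / 153 ))    # ≈ 40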

[ceph-users] Re: Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
Hi Anthony, I should have said, it’s replicated (3) Best, Nick Sent from my phone, apologies for any typos! From: Anthony D'Atri Sent: Tuesday, March 5, 2024 7:22:42 PM To: Nikolaos Dandoulakis Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Number of pgs Th

[ceph-users] Re: Number of pgs

2024-03-05 Thread Anthony D'Atri
Replicated or EC? > On Mar 5, 2024, at 14:09, Nikolaos Dandoulakis wrote: > > Hi all, > > Pretty sure not the first time you see a thread like this. > > Our cluster consists of 12 nodes/153 OSDs/1.2 PiB used, 708 TiB /1.9 PiB avail > > The data pool is 2048 pgs big exactly the same number as

[ceph-users] Number of pgs

2024-03-05 Thread Nikolaos Dandoulakis
Hi all, Pretty sure not the first time you see a thread like this. Our cluster consists of 12 nodes/153 OSDs/1.2 PiB used, 708 TiB / 1.9 PiB avail. The data pool is 2048 pgs big, exactly the same number as when the cluster started. We have no issues with the cluster, everything runs as expected an

[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Travis Nielsen
Looks great to me, Redo has tested this thoroughly. Thanks! Travis On Tue, Mar 5, 2024 at 8:48 AM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/64721#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: >

[ceph-users] Re: debian-reef_OLD?

2024-03-05 Thread Reed Dier
Given that both the debian and rpm paths have been appended with _OLD, and this more recent post about 18.2.2 (hot-fix), it sounds like there is some sort of issue with 18.2.1? https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LEYDHWAPZW7KOGH2OH4TOPVGAFMZPYYP/

[ceph-users] reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Yuri Weinstein
Details of this release are summarized here: https://tracker.ceph.com/issues/64721#note-1 Release Notes - TBD LRC upgrade - TBD Seeking approvals/reviews for: smoke - in progress rados - Radek, Laura? quincy-x - in progress Also need approval from Travis, Redouane for Prometheus fix testing. __

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Eugen Block
Thanks for chiming in, Adam. Quoting Adam King: There was a bug with this that was fixed by https://github.com/ceph/ceph/pull/52122 (which also specifically added an integration test for this case). It looks like it's missing a reef and quincy backport though, unfortunately. I'll try to open

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Adam King
There was a bug with this that was fixed by https://github.com/ceph/ceph/pull/52122 (which also specifically added an integration test for this case). It looks like it's missing a reef and quincy backport though unfortunately. I'll try to open one for both. On Tue, Mar 5, 2024 at 8:26 AM Eugen Blo

[ceph-users] Re: Help with deep scrub warnings (probably a bug ... set on pool for effect)

2024-03-05 Thread Peter Maloney
I had the same problem as you. The only solution that worked for me is to set it on the pools:

    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" scrub_max_interval "$smaxi"
        ceph osd pool set "$pool" scrub_min_interval "$smini"
        ceph osd pool set "$pool" d
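For completeness, a sketch of what that loop presumably looks like in full; the interval values are placeholders in seconds, not the poster's actual settings:

    smini=86400       # scrub_min_interval, e.g. 1 day
    smaxi=604800      # scrub_max_interval, e.g. 7 days
    dmaxi=1209600     # deep_scrub_interval, e.g. 14 days
    for pool in $(ceph osd pool ls); do
        ceph osd pool set "$pool" scrub_min_interval  "$smini"
        ceph osd pool set "$pool" scrub_max_interval  "$smaxi"
        ceph osd pool set "$pool" deep_scrub_interval "$dmaxi"
    done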

[ceph-users] Re: Help with deep scrub warnings

2024-03-05 Thread Nicola Mori
Hi Anthony, thanks for the tips. I reset all the values but osd_deep_scrub_interval to their defaults as reported at https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/ :

    # ceph config set osd osd_scrub_sleep 0.0
    # ceph config set osd osd_scrub_load_threshold 0.5
    # ceph config

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Eugen Block
It seems to be an issue with the service type (in this case "mon"), it's not entirely "broken", with the node-exporter it works:

quincy-1:~ # cat node-exporter.yaml
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
extra_entrypoint_args:
  - "--collector.t

[ceph-users] Re: Uninstall ceph rgw

2024-03-05 Thread Albert Shih
On 05/03/2024 at 11:54:34+0100, Robert Sander wrote: Hi, > On 3/5/24 11:05, Albert Shih wrote: > > > But I like to clean up and «erase» everything about rgw? Not only to try > to understand but also because I think I mixed up between realm and > zonegroup... > > Remove the service with

[ceph-users] Re: Help with deep scrub warnings

2024-03-05 Thread Anthony D'Atri
* Try applying the settings to global so that mons/mgrs get them.
* Set your shallow scrub settings back to the default. Shallow scrubs take very few resources.
* Set your randomize_ratio back to the default, you’re just bunching them up.
* Set the load threshold back to the default, I can’t ima
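A minimal sketch of the "set back to the default" steps in the list above: rather than guessing the default values, the overrides can simply be removed from the config store (the option names listed here are the usual scrub-related ones and may not match exactly what was changed):

    for opt in osd_scrub_sleep osd_scrub_load_threshold osd_scrub_min_interval \
               osd_scrub_max_interval osd_scrub_interval_randomize_ratio; do
        ceph config rm osd "$opt"
    done
    ceph config dump | grep scrub    # confirm only the intended overrides remain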

[ceph-users] Re: PGs with status active+clean+laggy

2024-03-05 Thread Robert Sander
Hi, On 3/5/24 13:05, ricardom...@soujmv.com wrote: I have a ceph quincy cluster with 5 nodes currently. But only 3 with SSDs. Do not mix HDDs and SSDs in the same pool. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel:
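For what it's worth, the usual way to keep HDDs and SSDs out of the same pool is a device-class CRUSH rule; a rough sketch, with pool and rule names as placeholders:

    # rules that each select only one device class
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd crush rule create-replicated replicated-hdd default host hdd
    # pin an existing pool to one of them
    ceph osd pool set <pool-name> crush_rule replicated-ssd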

[ceph-users] PGs with status active+clean+laggy

2024-03-05 Thread ricardomori
Dear community, I have a ceph quincy cluster with 5 nodes currently. But only 3 with SSDs. I have had many alerts from PGs with active-clean-laggy status. This has caused problems with slow writing. I wanted to know how to troubleshoot properly. I checked several things related to the network,

[ceph-users] Re: [RGW] Restrict a subuser to access only one specific bucket

2024-03-05 Thread Ondřej Kukla
Hello, As one solution you can create a bucket policy that would give the “subuser” permissions to access the bucket. Just keep in mind that the second user is not the bucket owner, so he will not be able to see the bucket in his bucket list, but when he accesses the bucket directly it will work
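A rough sketch of such a bucket policy, set by the bucket owner; the bucket name, user ARN and the tool used to apply it are placeholders, and RGW only supports a subset of the S3 policy language:

    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": ["arn:aws:iam:::user/second-user"] },
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"]
      }]
    }
    EOF
    s3cmd setpolicy policy.json s3://example-bucket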

[ceph-users] Re: Ceph storage project for virtualization

2024-03-05 Thread Eneko Lacunza
Hi Egoitz, I don't think it is a good idea, but I can't comment on whether that's possible because I don't know Ceph's inner workings well enough; maybe others can comment. This is what worries me: "Each NFS redundant service of each datacenter will be composed by two NFS gateways accessing to t

[ceph-users] Re: Uninstall ceph rgw

2024-03-05 Thread Robert Sander
On 3/5/24 11:05, Albert Shih wrote: But I like to clean up and «erase» everything about rgw? Not only to try to understand but also because I think I mixed up between realm and zonegroup... Remove the service with "ceph orch rm …" and then remove all the pools the rgw service has created.
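A sketch of those two steps; the service and pool names are placeholders (check `ceph orch ls rgw` and `ceph osd pool ls` for the real ones, and be certain nothing else uses the pools before deleting them):

    ceph orch ls rgw                                  # find the exact rgw service name
    ceph orch rm rgw.myrealm.myzone                   # remove the rgw service
    ceph osd pool ls | grep rgw                       # the pools rgw created (.rgw.root, *.rgw.*)
    ceph config set mon mon_allow_pool_delete true
    ceph osd pool rm .rgw.root .rgw.root --yes-i-really-really-mean-it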

[ceph-users] Re: Ceph storage project for virtualization

2024-03-05 Thread egoitz
Hi Eneko! I don't really have that data, but I was planning to have as master OSDs only the ones in the same datacenter as the hypervisor using the storage. The other datacenters would be just replicas. I assume you ask because replication is totally synchronous. Well, for doing step by step. I

[ceph-users] Re: Ceph storage project for virtualization

2024-03-05 Thread Eneko Lacunza
Hi Egoitz, What network latency is there between the datacenters? Cheers On 5/3/24 at 11:31, ego...@ramattack.net wrote: Hi! I have been reading some ebooks of Ceph and some doc and learning about it. The goal of all this is to create a rock solid storage for virtual machines. After all th

[ceph-users] Ceph storage project for virtualization

2024-03-05 Thread egoitz
Hi! I have been reading some ebooks of Ceph and some doc and learning about it. The goal of all this is to create a rock solid storage for virtual machines. After all the learning I have not been able to answer this question by myself, so I was wondering if perhaps you could clarify m

[ceph-users] Uninstall ceph rgw

2024-03-05 Thread Albert Shih
Hi everyone, I'm currently trying to understand how to deploy rgw, so I tested a few things, but now I'm not sure what is installed and what is not. First I tried to install according to https://docs.ceph.com/en/quincy/cephadm/services/rgw/ then I saw on that page there are https://docs.ceph.com/e

[ceph-users] Ceph Cluster Config File Locations?

2024-03-05 Thread duluxoz
Hi All, I don't know how it's happened (bad backup/restore, bad config file somewhere, I don't know) but my (DEV) Ceph Cluster is in a very bad state, and I'm looking for pointers/help in getting it back running (unfortunately, a complete rebuild/restore is *not* an option). This is on Ceph Ree

[ceph-users] Re: debian-reef_OLD?

2024-03-05 Thread Christian Rohmann
On 04.03.24 22:24, Daniel Brown wrote: debian-reef/ now appears to be: debian-reef_OLD/ Could this have been some sort of "release script" just messing up the renaming / symlinking to the most recent stable? Regards Christian

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Eugen Block
Oh, you're right. I just checked on Quincy as well and it failed with the same error message. For Pacific it still works. I'll check for existing tracker issues. Quoting Robert Sander: Hi, On 3/5/24 08:57, Eugen Block wrote: extra_entrypoint_args:   - '--mon-rocksdb-options=write_buf

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Robert Sander
Hi, On 3/5/24 08:57, Eugen Block wrote:

extra_entrypoint_args:
  - '--mon-rocksdb-options=write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true,bottommost_compression=kLZ4HCCompression,max_background_jobs=4,max_subcompactions=2'

When I try this on

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Zakhar Kirpichenko
Well, that option could be included in new mon configs generated during mon upgrades. But it isn't being used, a minimal config is written instead. I.e. it seems that the configuration option is useless for all intents and purposes, as it doesn't seem to be taken into account at any stage of a mon'

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Eugen Block
Hi,

> I also added it to the cluster config with "ceph config set mon mon_rocksdb_options", but it seems that this option doesn't have any effect at all.

That's because it's an option that has to be present *during* mon startup, not *after* startup, when it can read the config store. Zita

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Zakhar Kirpichenko
Hi Eugen, It is correct that I manually added the configuration, but not to the unit.run but rather to each mon's config (i.e. /var/lib/ceph/FSID/mon.*/config). I also added it to the cluster config with "ceph config set mon mon_rocksdb_options", but it seems that this option doesn't have any effe