[ceph-users] Re: ceph-crash NOT reporting crashes due to wrong permissions on /var/lib/ceph/crash/posted (Debian / Ubuntu packages)

2024-03-04 Thread Eneko Lacunza
Hi, El 2/3/24 a las 18:00, Tyler Stachecki escribió: On 23.02.24 16:18, Christian Rohmann wrote: I just noticed issues with ceph-crash using the Debian /Ubuntu packages (package: ceph-base): While the /var/lib/ceph/crash/posted folder is created by the package install, it's not properly chowne
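
For reference, the commonly posted workaround until the packaging is fixed is simply to correct the ownership and restart the crash agent (a sketch, assuming a package-based install where the daemon runs as the ceph user and the unit is named ceph-crash.service):

  chown ceph:ceph /var/lib/ceph/crash /var/lib/ceph/crash/posted
  systemctl restart ceph-crash.service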

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Frank Schilder
Hi all, coming late to the party but I want to chip in as well with some experience. The problem of tail latencies of individual OSDs is a real pain for any redundant storage system. However, there is an elegant way to deal with this when using large replication factors. The idea is to u

[ceph-users] OSDs not balanced

2024-03-04 Thread Ml Ml
Hello, i wonder why my autobalancer is not working here: root@ceph01:~# ceph -s cluster: id: 5436dd5d-83d4-4dc8-a93b-60ab5db145df health: HEALTH_ERR 1 backfillfull osd(s) 1 full osd(s) 1 nearfull osd(s) 4 pool(s) full => osd.17 was to
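
For anyone hitting the same HEALTH_ERR, the usual first checks look something like this (output omitted):

  ceph osd df tree        # per-OSD size, utilization and PG count
  ceph balancer status    # whether the balancer module is on and in which mode
  ceph health detail      # which OSDs are full / backfillfull / nearfull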

[ceph-users] Re: OSDs not balanced

2024-03-04 Thread Janne Johansson
On Mon, 4 Mar 2024 at 11:30, Ml Ml wrote: > > Hello, > > i wonder why my autobalancer is not working here: I think the short answer is "because you have so wildly varying sizes both for drives and hosts". If your drive sizes span from 0.5 to 9.5, there will naturally be skewed data, and it is no

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Marc
> > Fast write enabled would mean that the primary OSD sends #size copies to the > entire active set (including itself) in parallel and sends an ACK to the > client as soon as min_size ACKs have been received from the peers (including > itself). In this way, one can tolerate (size-min_size) slow(e
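
For readers following along, size and min_size are ordinary per-pool settings today; a quick way to inspect what the proposal refers to (pool name "mypool" is only an example):

  ceph osd pool get mypool size       # replicas the primary writes to
  ceph osd pool get mypool min_size   # replicas that must be available before the PG serves I/O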

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Maged Mokhtar
On 04/03/2024 13:35, Marc wrote: Fast write enabled would mean that the primary OSD sends #size copies to the entire active set (including itself) in parallel and sends an ACK to the client as soon as min_size ACKs have been received from the peers (including itself). In this way, one can toler

[ceph-users] [Quincy] cannot configure dashboard to listen on all ports

2024-03-04 Thread wodel youchi
Hi, ceph dashboard fails to listen on all IPs. log_channel(cluster) log [ERR] : Unhandled exception from module 'dashboard' while running on mgr.controllera: OSError("No socket could be created -- (('0.0.0.0', 8443): [Errno -2] Name or service not known) -- (('::', 8443, 0, 0): ceph version 17.2
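
The bind address and ports for the dashboard are normally controlled by these mgr settings; whether they resolve the specific resolver error above depends on the environment, so treat this as a sketch only:

  ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
  ceph config set mgr mgr/dashboard/ssl_server_port 8443
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard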

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Frank Schilder
>>> Fast write enabled would mean that the primary OSD sends #size copies to the >>> entire active set (including itself) in parallel and sends an ACK to the >>> client as soon as min_size ACKs have been received from the peers (including >>> itself). In this way, one can tolerate (size-min_size) s

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Maged Mokhtar
On 04/03/2024 15:37, Frank Schilder wrote: Fast write enabled would mean that the primary OSD sends #size copies to the entire active set (including itself) in parallel and sends an ACK to the client as soon as min_size ACKs have been received from the peers (including itself). In this way, one

[ceph-users] v16.2.15 Pacific released

2024-03-04 Thread Yuri Weinstein
We're happy to announce the 15th, and expected to be the last, backport release in the Pacific series. https://ceph.io/en/news/blog/2024/v16-2-15-pacific-released/ Notable Changes --- * `ceph config dump --format ` output will display the localized option names instead of their nor
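
For cephadm-managed clusters the release is typically picked up via the orchestrator (package-based installs instead upgrade via apt/yum and restart daemons in the documented order):

  ceph orch upgrade start --ceph-version 16.2.15
  ceph orch upgrade status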

[ceph-users] Re: v16.2.15 Pacific released

2024-03-04 Thread Zakhar Kirpichenko
This is great news! Many thanks! /Z On Mon, 4 Mar 2024 at 17:25, Yuri Weinstein wrote: > We're happy to announce the 15th, and expected to be the last, > backport release in the Pacific series. > > https://ceph.io/en/news/blog/2024/v16-2-15-pacific-released/ > > Notable Changes > --

[ceph-users] Re: OSDs not balanced

2024-03-04 Thread Anthony D'Atri
> I think the short answer is "because you have so wildly varying sizes > both for drives and hosts". Arguably OP's OSDs *are* balanced in that their PGs are roughly in line with their sizes, but indeed the size disparity is problematic in some ways. Notably, the 500GB OSD should just be remov
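
A rough sketch of retiring a small OSD once the cluster is no longer full/backfillfull (osd.17 is used here only as an example id):

  ceph osd out osd.17
  # wait until all PGs are active+clean again (watch ceph -s)
  ceph osd purge osd.17 --yes-i-really-mean-it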

[ceph-users] Re: Performance improvement suggestion

2024-03-04 Thread Mark Nelson
On 3/4/24 08:40, Maged Mokhtar wrote: On 04/03/2024 15:37, Frank Schilder wrote: Fast write enabled would mean that the primary OSD sends #size copies to the entire active set (including itself) in parallel and sends an ACK to the client as soon as min_size ACKs have been received from the pe

[ceph-users] Re: OSDs not balanced

2024-03-04 Thread Cedric
Does the balancer have any pools enabled? "ceph balancer pool ls" Actually I am wondering whether the balancer does anything when no pools are added. On Mon, Mar 4, 2024, 11:30 Ml Ml wrote: > Hello, > > i wonder why my autobalancer is not working here: > > root@ceph01:~# ceph -s > cluster: > id:

[ceph-users] Re: OSDs not balanced

2024-03-04 Thread Joshua Baergen
The balancer will operate on all pools unless otherwise specified. Josh On Mon, Mar 4, 2024 at 1:12 PM Cedric wrote: > > Does the balancer have any pools enabled? "ceph balancer pool ls" > > Actually I am wondering whether the balancer does anything when no pools are > added. > > > > On Mon, Mar 4, 2024, 1
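
The relevant commands, for reference (pool name is an example):

  ceph balancer status          # mode and whether the module is active
  ceph balancer pool ls         # empty output means the balancer considers all pools
  ceph balancer pool add mypool # optionally restrict balancing to named pools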

[ceph-users] debian-reef_OLD?

2024-03-04 Thread Daniel Brown
I likely missed an announcement, and if so, please forgive me. I'm seeing failures when running apt on a cluster of Ubuntu machines; it looks like a directory has changed on https://download.ceph.com/ Was: debian-reef/ Now appears to be: debian-reef_OLD/ Was reef pulled?
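
For anyone checking locally, the repo line and a quick probe of the directory look roughly like this ("jammy" and the file name are examples; adjust to your setup):

  cat /etc/apt/sources.list.d/ceph.list
  # deb https://download.ceph.com/debian-reef/ jammy main
  curl -sI https://download.ceph.com/debian-reef/ | head -n 1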

[ceph-users] [RGW] Restrict a subuser to access only one specific bucket

2024-03-04 Thread Huy Nguyen
Hi community, I have a user that owns some buckets. I want to create a subuser that has permission to access only one bucket. What can I do to achieve this? Thanks
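
One commonly suggested approach is an S3 bucket policy naming the subuser as principal. A rough sketch only: the user, subuser and bucket names below are made up, and the exact principal ARN format should be checked against the RGW bucket-policy documentation for your release.

  # policy.json:
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/parentuser:subuser1"]},
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }

  s3cmd setpolicy policy.json s3://mybucket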

[ceph-users] Upgraded 16.2.14 to 16.2.15

2024-03-04 Thread Zakhar Kirpichenko
Hi, I have upgraded my test and production cephadm-managed clusters from 16.2.14 to 16.2.15. The upgrade was smooth and completed without issues. There were a few things which I noticed after each upgrade: 1. RocksDB options, which I provided to each mon via their configuration files, got overwri

[ceph-users] Help with deep scrub warnings

2024-03-04 Thread Nicola Mori
Dear Ceph users, in order to reduce the deep scrub load on my cluster I set the deep scrub interval to 2 weeks, and tuned other parameters as follows: # ceph config get osd osd_deep_scrub_interval 1209600.00 # ceph config get osd osd_scrub_sleep 0.10 # ceph config get osd osd_scrub_loa
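
A frequent follow-up on this list: the "pgs not deep-scrubbed in time" warning is derived from osd_deep_scrub_interval together with mon_warn_pg_not_deep_scrubbed_ratio and is evaluated outside the OSDs, so the longer interval may need to be set globally rather than only under osd. A sketch of the usual checks:

  ceph config set global osd_deep_scrub_interval 1209600
  ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio   # default 0.75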

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-04 Thread Eugen Block
Hi, 1. RocksDB options, which I provided to each mon via their configuration files, got overwritten during mon redeployment and I had to re-add mon_rocksdb_options back. IIRC, you didn't use the extra_entrypoint_args for that option but added it directly to the container unit.run file. So it
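
A rough sketch of carrying that option in the mon service spec instead, so cephadm redeployments keep it (the placement label and RocksDB value are placeholders, and availability of extra_entrypoint_args should be verified for your cephadm release):

  # mon-spec.yaml
  service_type: mon
  service_name: mon
  placement:
    label: mon
  extra_entrypoint_args:
    - "--mon-rocksdb-options=write_buffer_size=33554432"

  ceph orch apply -i mon-spec.yaml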