[ceph-users] cephadm for Ubuntu 24.04

2024-07-10 Thread Stefan Kooman
Hi, Is it possible to only build "cephadm", so not the other ceph packages / daemons? Or can we think about a way to have cephadm packages built for all supported mainstream Linux releases during the supported lifetime of a Ceph release: i.e. Debian, Ubuntu LTS, CentOS Stream? I went ahead a
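
For reference, a minimal sketch of pulling only the standalone cephadm binary, following the curl-based path from the upstream install docs; the release number is a placeholder, and the optional add-repo/install steps assume packages actually exist for the target distro:

  CEPH_RELEASE=18.2.2   # placeholder release
  curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
  chmod +x cephadm
  # optional: let cephadm configure repos and install itself as a package
  ./cephadm add-repo --release reef
  ./cephadm install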

[ceph-users] Re: IT Consulting Firms with Ceph and Hashicorp Vault expertise?

2024-07-10 Thread Alvaro Soto
Oh lol, I do know an IT firm, see https://sentinel.la On Wed, Jul 10, 2024, 2:12 PM Alvaro Soto wrote: > Hi Michael, > I don't know IT firms, but I do know people hehe. You can ping @Araya > cc/ to this thread. > > Cheers! > > On Wed, Jul 10, 2024 at 10:55 AM Michael Worsham < > mwors...@data

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-10 Thread Casey Bodley
On Wed, Jul 10, 2024 at 6:23 PM Richard Bade wrote: > > Hi Casey, > Thanks for that info on the bilog. I'm in a similar situation with > large omap objects and we have also had to reshard buckets on > multisite, losing the index on the secondary. > We also now have a lot of buckets with sync disabl
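
A hedged sketch of inspecting a bucket's sync and bilog state before deciding anything; the bucket name is a placeholder:

  # per-bucket multisite sync state as seen from this zone
  radosgw-admin bucket sync status --bucket=problem-bucket
  # peek at the bucket index log entries holding the omap space
  radosgw-admin bilog list --bucket=problem-bucket --max-entries=10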

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-10 Thread Richard Bade
Hi Casey, Thanks for that info on the bilog. I'm in a similar situation with large omap objects and we have also had to reshard buckets on multisite, losing the index on the secondary. We also now have a lot of buckets with sync disabled so I wanted to check that it's always safe to trim the bilog on
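
For buckets where sync is disabled, a hedged sketch of the trim itself; the bucket name is a placeholder, and whether this is always safe is exactly the question being asked above:

  # confirm sync really is disabled / caught up for this bucket first
  radosgw-admin bucket sync status --bucket=problem-bucket
  # then trim its bucket index log
  radosgw-admin bilog trim --bucket=problem-bucket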

[ceph-users] High RAM usage for OSDs

2024-07-10 Thread Work Ceph
Hello guys, We are running Ceph Octopus on Ubuntu 18.04. We noticed that some OSDs are using more than 16 GiB of RAM. However, the option "osd_memory_target" is set to 4 GiB. The OSDs are SSDs and are 2 TiB in size each. Have you guys seen such behavior? Are we missing some other configuration or p
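
A few commands that may help narrow this down, with osd.0 as a placeholder id (ceph daemon has to be run on the host carrying that OSD):

  # the value the daemon is actually running with
  ceph config show osd.0 osd_memory_target
  # the OSD's own accounting of where its memory sits (bluestore caches, pglog, etc.)
  ceph daemon osd.0 dump_mempools
  # tcmalloc view, to spot freed-but-unreleased heap
  ceph tell osd.0 heap stats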

[ceph-users] use of db_slots in DriveGroup specification?

2024-07-10 Thread Robert Sander
Hi, what is the purpose of the db_slots attribute in a DriveGroup specification? My interpretation of the documentation is that I can define how many OSDs use one db device. https://docs.ceph.com/en/reef/cephadm/services/osd/#additional-options "db_slots - How many OSDs per DB device" The d
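
For illustration, a hypothetical OSD service spec using db_slots in the documented format; the host pattern, device filters and slot count are placeholders, and --dry-run lets the orchestrator preview the layout without creating anything:

  service_type: osd
  service_id: osd_with_shared_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    db_slots: 4      # intended: 4 OSDs per DB device

Saved as osd_spec.yml, it can be previewed with:

  ceph orch apply -i osd_spec.yml --dry-run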

[ceph-users] Re: IT Consulting Firms with Ceph and Hashicorp Vault expertise?

2024-07-10 Thread Alvaro Soto
Hi Michael, I don't know IT firms, but I do know people hehe. You can ping @Araya cc/ to this thread. Cheers! On Wed, Jul 10, 2024 at 10:55 AM Michael Worsham < mwors...@datadimensions.com> wrote: > I am in need of a list of IT consulting firms that can set up > high-availability Vault and also

[ceph-users] IT Consulting Firms with Ceph and Hashicorp Vault expertise?

2024-07-10 Thread Michael Worsham
I am in need of a list of IT consulting firms that can set up high-availability Vault and also configure the Ceph Object Gateway to use SSE-S3 with Vault. -- Michael
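
For context, a hedged sketch of the RGW side of SSE-S3 against Vault's transit engine, using option names from the radosgw documentation; the address and prefix are placeholders, the config target (client.rgw) may need to match the deployment's daemon naming, and the Vault/agent setup itself is out of scope here:

  ceph config set client.rgw rgw_crypt_sse_s3_backend vault
  ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit
  ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent
  ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://127.0.0.1:8100
  ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit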

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-10 Thread Neha Ojha
On Wed, Jul 10, 2024 at 6:58 AM Yuri Weinstein wrote: > We built a new branch with all the cherry-picks on top > (https://pad.ceph.com/p/release-cherry-pick-coordination). > > I am rerunning fs:upgrade: > > https://pulpito.ceph.com/yuriw-2024-07-10_13:47:23-fs:upgrade-reef-release-distro-default-

[ceph-users] [RGW][Lifecycle][Versioned Buckets][Reef] Although LC deletes non-current

2024-07-10 Thread Oguzhan Ozmen (BLOOMBERG/ 120 PARK)
This is similar to an old thread https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FXVEWDU6NYCGEY5QB6IGGQXTUEZAQKNY/ but I don't see any responses there, so I am opening this one. PROBLEM DESCRIPTION * Issue is seen on versioned buckets. * Using extended logging (debug level 5), we can
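
A hedged sketch of reproducing and observing a single bucket's lifecycle pass while the extended logging is enabled; the bucket name is a placeholder:

  # per-bucket lifecycle processing state (UNINITIAL / PROCESSING / COMPLETE)
  radosgw-admin lc list
  # run a lifecycle pass for just this bucket and watch the rgw log
  radosgw-admin lc process --bucket problem-bucket
  # compare object/version counts before and after the pass
  radosgw-admin bucket stats --bucket problem-bucket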

[ceph-users] Re: [EXTERN] ceph_fill_inode BAD symlink

2024-07-10 Thread Dietmar Rieder
Hi Alwin, On 7/10/24 15:55, Alwin Antreich wrote: Hi Dietmar, On Wed, 10 Jul 2024 at 11:22, Dietmar Rieder wrote: Hi, On 9 July 2024 13:37:58 CEST, Dietmar Rieder <dietmar.rie...@i-med.ac.at> wrote: >Hi, > >we noticed

[ceph-users] Re: [EXTERN] ceph_fill_inode BAD symlink

2024-07-10 Thread Dietmar Rieder
Hi, On 7/10/24 12:07, Loïc Tortay wrote: On 10/07/2024 11:20, Dietmar Rieder wrote: Hi, On 9 July 2024 13:37:58 CEST, Dietmar Rieder wrote: Hi, we noticed the following ceph errors in the kernel messages (dmesg -T): [Tue Jul  9 11:59:24 2024] ceph: ceph_fill_inode 10003683698.

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-10 Thread Yuri Weinstein
We built a new branch with all the cherry-picks on top (https://pad.ceph.com/p/release-cherry-pick-coordination). I am rerunning fs:upgrade: https://pulpito.ceph.com/yuriw-2024-07-10_13:47:23-fs:upgrade-reef-release-distro-default-smithi/ Venky, pls review it after it's done. Neha, do you want t

[ceph-users] Re: [EXTERN] ceph_fill_inode BAD symlink

2024-07-10 Thread Alwin Antreich
Hi Dietmar, On Wed, 10 Jul 2024 at 11:22, Dietmar Rieder wrote: > Hi, > > On 9 July 2024 13:37:58 CEST, Dietmar Rieder < > dietmar.rie...@i-med.ac.at> wrote: > >Hi, > > > >we noticed the following ceph errors in the kernel messages (dmesg -T): > > > >[Tue Jul 9 11:59:24 2024] ceph: ceph_fill

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-10 Thread Casey Bodley
On Tue, Jul 9, 2024 at 12:41 PM Szabo, Istvan (Agoda) wrote: > > Hi Casey, > > 1. > Regarding versioning, the user doesn't use versioning, if I'm not mistaken: > https://gist.githubusercontent.com/Badb0yBadb0y/d80c1bdb8609088970413969826d2b7d/raw/baee46865178fff454c224040525b55b54e27218/gistfile
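
Two quick checks that may help settle the versioning question, with placeholder bucket name and endpoint:

  # ask RGW itself whether the bucket is versioned
  aws --endpoint-url https://rgw.example.net s3api get-bucket-versioning --bucket problem-bucket
  # report objects-per-shard and flag any bucket over the warning threshold
  radosgw-admin bucket limit check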

[ceph-users] Re: [EXTERN] ceph_fill_inode BAD symlink

2024-07-10 Thread Dietmar Rieder
Hi, On 9 July 2024 13:37:58 CEST, Dietmar Rieder wrote: >Hi, > >we noticed the following ceph errors in the kernel messages (dmesg -T): > >[Tue Jul 9 11:59:24 2024] ceph: ceph_fill_inode 10003683698.fffe >BAD symlink size 0 > >Is this something that we should be worried about? >
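
A hedged sketch for mapping the reported inode back to a file, assuming the number in the kernel message is hex (as CephFS inode printing usually is); the mount point and filesystem name are placeholders:

  # convert the hex inode number to decimal
  printf '%d\n' 0x10003683698
  # search the mounted filesystem for that inode
  find /mnt/cephfs -inum "$(printf '%d' 0x10003683698)" 2>/dev/null
  # or ask the MDS for its view of the inode's metadata
  ceph tell mds.FSNAME:0 dump inode "$(printf '%d' 0x10003683698)"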

[ceph-users] Re: CephFS MDS crashing during replay with standby MDSes crashing afterwards

2024-07-10 Thread Ivan Clayson
Hi Tim, Alma8's active support ended in May this year and henceforth there are only security updates. But you make a good point and we are moving toward Alma9 very shortly! Whilst we're mentioning distributions, we've had quite a good experience with Alma (notwithstanding our current but unr

[ceph-users] Re: [EXTERN] Urgent help with degraded filesystem needed

2024-07-10 Thread Stefan Kooman
Hi, On 01-07-2024 10:34, Stefan Kooman wrote: Not that I know of. But changes in behavior of Ceph (daemons) and/or Ceph kernels would indeed be good to know about. I follow the ceph-kernel mailing list to see what is going on with the development of kernel CephFS. And there is a thread abo