[ceph-users] Re: use of db_slots in DriveGroup specification?

2024-07-11 Thread Eugen Block
Hi, apparently, db_slots is still not implemented. I just tried it on a test cluster with 18.2.2: # ceph orch apply -i osd-slots.yaml --dry-run Error EINVAL: Failed to validate OSD spec "osd-hdd-ssd.db_devices": Filtering for `db_slots` is not supported If it was, I would be interested as
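
For context, an OSD service spec exercising db_slots, the kind of file passed to 'ceph orch apply -i', might look roughly like the sketch below; the device filters and slot count are illustrative assumptions, not the actual osd-slots.yaml from the thread:

   service_type: osd
   service_id: osd-hdd-ssd
   placement:
     host_pattern: '*'
   spec:
     data_devices:
       rotational: 1
     db_devices:
       rotational: 0
     db_slots: 5   # rejected by current cephadm: "Filtering for `db_slots` is not supported"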

[ceph-users] Re: Use of db_slots in DriveGroup specification?

2024-07-11 Thread Robert Sander
Hi, On 7/11/24 09:01, Eugen Block wrote: apparently, db_slots is still not implemented. I just tried it on a test cluster with 18.2.2: I am thinking about a PR to correct the documentation. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinl

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Malte Stroem
Hello Stefan, have a look: https://docs.ceph.com/en/latest/cephadm/install/#curl-based-installation Just download cephadm. It will work on any distro. You do not need any Ceph package, such as ceph-common, to run cephadm. Best, Malte On 11.07.24 08:17, Stefan Kooman wrote: Hi, Is

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Stefan Kooman
On 11-07-2024 09:55, Malte Stroem wrote: Hello Stefan, have a look: https://docs.ceph.com/en/latest/cephadm/install/#curl-based-installation Yeah, I have read that part. Just download cephadm. It will work on any distro. curl --silent --remote-name --location https://download.ceph.com/r
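
For reference, the curl-based installation the docs page describes boils down to downloading the standalone cephadm binary and making it executable; a rough sketch (the release number and URL path are illustrative, following the pattern shown in the documentation):

   CEPH_RELEASE=18.2.2   # example release, adjust as needed
   curl --silent --remote-name --location \
       https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
   chmod +x cephadm
   ./cephadm --help      # the binary is self-contained, no ceph-common required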

[ceph-users] Changing ip addr

2024-07-11 Thread Albert Shih
Hi everyone, I just changed the subnet of my cluster. The cephfs part seems to be working well, but I get many errors like Jul 11 10:08:35 hostname ceph-*** ts=2024-07-11T08:08:35.364Z caller=refresh.go:99 level=error component="discovery manager notify" discovery=http config=config-0

[ceph-users] Re: Changing ip addr

2024-07-11 Thread Albert Shih
On 11/07/2024 at 10:27:09+0200, Albert Shih wrote: > Hi everyone, > > I just changed the subnet of my cluster. > > The cephfs part seems to be working well. > > But I get many errors like > > Jul 11 10:08:35 hostname ceph-*** ts=2024-07-11T08:08:35.364Z > caller=refresh.go:99 level=err

[ceph-users] Multi site sync details

2024-07-11 Thread Huseyin Cotuk
Hello Cephers, I am wondering whether it is possible to get the number of objects that are not synced yet in a multi-site radosgw configuration. "radosgw-admin sync status" gives the number of shards that are behind. Similarly, "radosgw-admin bucket sync status --bucket {bucket_name}" also gives
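
For reference, the two commands referred to above (the bucket name is a placeholder); both report progress in terms of shards that are behind rather than a per-object count, which is exactly the limitation the question is about:

   radosgw-admin sync status
   radosgw-admin bucket sync status --bucket <bucket_name>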

[ceph-users] Re: Changing ip addr

2024-07-11 Thread Eugen Block
Do you see it in 'ceph mgr services'? You might need to change the prometheus config as well and redeploy. Quoting Albert Shih: Hi everyone, I just changed the subnet of my cluster. The cephfs part seems to be working well. But I get many errors like Jul 11 10:08:35 hostname ceph-***
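
A rough sketch of the checks and follow-up Eugen suggests, assuming the default cephadm service name 'prometheus' (adjust to your deployment):

   ceph mgr services                 # URLs the mgr modules currently advertise
   ceph orch reconfig prometheus     # regenerate the prometheus config with the new addresses
   ceph orch redeploy prometheus     # or redeploy the service entirely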

[ceph-users] Re: Changing ip addr

2024-07-11 Thread Albert Shih
On 11/07/2024 at 08:34:21+, Eugen Block wrote: Hi, sorry, I missed sending my answer to the list. > Do you see it in 'ceph mgr services'? You might need to change the Yes I did: root@cthulhu1:/etc# ceph mgr services { "dashboard": "https://NEW_SUBNET.189.35:8443/", "prometheus": "http://NE

[ceph-users] Re: Changing ip addr

2024-07-11 Thread Eugen Block
And how about the prometheus.yml? /var/lib/ceph/{fsid}/prometheus.{node}/etc/prometheus/prometheus.yml It contains an IP address as well: alerting: alertmanagers: - scheme: http http_sd_configs: - url: http://{IP}:8765/sd/prometheus/sd-config?service=alertmanager I misread
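
For readability, the prometheus.yml fragment quoted above with its indentation restored (the {IP} placeholder is as written in the original message):

   alerting:
     alertmanagers:
       - scheme: http
         http_sd_configs:
           - url: http://{IP}:8765/sd/prometheus/sd-config?service=alertmanager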

[ceph-users] Re: Changing ip addr

2024-07-11 Thread Albert Shih
On 11/07/2024 at 09:00:02+, Eugen Block wrote: Hi, thanks, but ... nope. > And how about the prometheus.yml? > > /var/lib/ceph/{fsid}/prometheus.{node}/etc/prometheus/prometheus.yml > > It contains an IP address as well: > > alerting: > alertmanagers: > - scheme: http > http_s

[ceph-users] Re: Large omap in index pool even if properly sharded and not "OVER"

2024-07-11 Thread Szabo, Istvan (Agoda)
Hi Casey, regarding the multisite part: before we resharded the bucket I had already disabled sync and removed the bucket from it completely (disabled it, removed the pipe, everything step by step, and finally updated the period), so I'm fairly sure it is not syncing and I think we can focus on the master zo

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread John Mulligan
On Thursday, July 11, 2024 4:22:28 AM EDT Stefan Kooman wrote: > On 11-07-2024 09:55, Malte Stroem wrote: > > Hello Stefan, > > > > have a look: > > > > https://docs.ceph.com/en/latest/cephadm/install/#curl-based-installation > > Yeah, I have read that part. > > > Just download cephadm. It will

[ceph-users] [RFC][UADK integration][Acceleration of zlib compressor]

2024-07-11 Thread Rongqi Sun
Hi Ceph community, UADK is an open-source accelerator framework; its kernel support part is UACCE, which has been merged in the kernel for several years and aims to provide Shared Virtual Addressing (SVA) between accelerators and processes. UADK prov

[ceph-users] Repurposing some Dell R750s for Ceph

2024-07-11 Thread Drew Weaver
Hello, We would like to repurpose some Dell PowerEdge R750s for a Ceph cluster. Currently the servers have one H755N RAID controller for each 8 drives (2 total). I have been asking their technical support what needs to happen in order for us to just rip out those RAID controllers and cable the

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Frank Schilder
Hi Drew, as far as I know Dell's drive bays for RAID controllers are not the same as the drive bays for CPU-attached disks. In particular, I don't think they have that config for 3.5" drive bays, and your description sounds a lot like that's what you have. Are you trying to go from 16x2.5" HDD t

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread John Jasen
Retrofitting the guts of a Dell PE R7xx server is not straightforward. You could be looking at replacing the motherboard, the backplane, and so forth. You can probably convert the H755N card to present the drives to the OS, so you can use them for Ceph. This may be AHCI mode, pass-through mode,

[ceph-users] Re: AssumeRoleWithWebIdentity in RGW with Azure AD

2024-07-11 Thread Ryan Rempel
Thanks! I took a crack at it myself, and have some work-in-progress here: https://github.com/cmu-rgrempel/ceph/pull/1 Feel free to use any of that if you like it. It's working for me, but I've only tested it with Azure AD – I haven't tested the cases that it used to work for. (I believe it doe

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Stefan Kooman
On 11-07-2024 14:20, John Mulligan wrote: On Thursday, July 11, 2024 4:22:28 AM EDT Stefan Kooman wrote: On 11-07-2024 09:55, Malte Stroem wrote: Hello Stefan, have a look: https://docs.ceph.com/en/latest/cephadm/install/#curl-based-installation Yeah, I have read that part. Just download

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Robin H. Johnson
On Thu, Jul 11, 2024 at 01:16:22PM +, Drew Weaver wrote: > Hello, > > We would like to repurpose some Dell PowerEdge R750s for a Ceph cluster. > > Currently the servers have one H755N RAID controller for each 8 drives. (2 > total) The N variant of H755N specifically? So you have 16 NVME driv

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Konstantin Shalygin
> On 11 Jul 2024, at 15:20, John Mulligan wrote: > > I'll ask to have backport PRs get generated. I'm personally pretty clueless > as > to how to process backports. The how-to described in this doc [1] > Thanks, I hadn't found that one. Added backport for squid release [2], as far as I unde

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Anthony D'Atri
Agree with everything Robin wrote here. RAID HBAs FTL. Even in passthrough mode, it’s still an [absurdly expensive] point of failure, but a server in the rack is worth two on backorder. Moreover, I’m told that it is possible to retrofit with cables and possibly an AIC mux / expander. e.g. ht

[ceph-users] Help with Mirroring

2024-07-11 Thread Dave Hall
Hello. I would like to use mirroring to facilitate migrating from an existing Nautilus cluster to a new cluster running Reef. Right now I'm looking at RBD mirroring. I have studied the RBD Mirroring section of the documentation, but it is unclear to me which commands need to be issued on each cl

[ceph-users] July's User + Developer Monthly Meeting

2024-07-11 Thread Noah Lehman
Hi Ceph users, July's User + Developer Monthly meeting will be happening Wednesday, July 24th at 10AM EDT. We look forward to seeing you there! Event details: https://hubs.la/Q02GgVNb0 Best, Noah ___ ceph-users mailing list -- ceph-users@ceph.io To un

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Drew Weaver
Hi, I'm a bit confused by your question; the 'drive bays' or backplane are the same for an NVMe system, it's either a SATA/SAS/NVMe backplane or an NVMe backplane. I don't understand why you believe that my configuration has to be 3.5" as it isn't. It's a 16x2.5" chassis with two H755N controllers

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Drew Weaver
Hi, Isn’t the supported/recommended configuration to use an HBA if you have to but never use a RAID controller? The backplane is already NVMe as the drives installed in the system currently are already NVMe. Also I was looking through some diagrams of the R750 and it appears that if you order

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Drew Weaver
Hi, >I don't think the motherboard has enough PCIe lanes to natively connect all >the drives: the RAID controller effectively functioned as an expander, so you >needed fewer PCIe lanes on the motherboard. >As the quickest way forward: look for passthrough / single-disk / RAID0 >options, in that o

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread pe...@boku.net
I've replaced R640 drive backplanes (off eBay) to use U.2 NVMe instead of RAID. Yes, I had to replace the backplane in order to talk to NVMe, and that change also removes the RAID controller from the picture. peter On 7/11/24, 2:25 PM, "Drew Weaver" wrote: Hi, Isn't the supported/recommended configuration to use an HBA

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Tim Holloway
Just my €.02. There is, in fact, a cephadm package for the Raspberry Pi OS. If I read the synopsis correctly, it's for Ceph 16.2.11, which I think is the same release of Ceph Pacific that I'm presently running my own farm on. It appears to derive from Debian Bookworm. Since cephadm is mainly a prog

[ceph-users] Re: [RFC][UADK integration][Acceleration of zlib compressor]

2024-07-11 Thread Brad Hubbard
On Thu, Jul 11, 2024 at 10:42 PM Rongqi Sun wrote: > > Hi Ceph community, Hi Rongqi, Thanks for proposing this and for attending CDM to discuss it yesterday. I see we have received some good feedback in the PR and it's awaiting some suggested changes. I think this will be a useful and performant

[ceph-users] v19.1.0 Squid RC0 released

2024-07-11 Thread Yuri Weinstein
This is the first release candidate for Squid. Feature highlights: RGW: The User Accounts feature unlocks several new AWS-compatible IAM APIs for the self-service management of users, keys, groups, roles, policy and more. RADOS: BlueStore has been optimized for better performance in snapshot-i

[ceph-users] Re: Repurposing some Dell R750s for Ceph

2024-07-11 Thread Anthony D'Atri
> > Isn't the supported/recommended configuration to use an HBA if you have to > but never use a RAID controller? That may be something I added to the docs. My contempt for RAID HBAs knows no bounds ;) Ceph doesn't care. Passthrough should work fine, I've done that for tens of thousands

[ceph-users] Re: Help with Mirroring

2024-07-11 Thread Anthony D'Atri
> > I would like to use mirroring to facilitate migrating from an existing > Nautilus cluster to a new cluster running Reef. Right now I'm looking at > RBD mirroring. I have studied the RBD Mirroring section of the > documentation, but it is unclear to me which commands need to be issued on > ea
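
To sketch the usual division of labour for one-way RBD mirroring into the new cluster (journal-based mode, since the source is Nautilus; pool, site names and token path are placeholders, and whether the bootstrap-token workflow is available on the Nautilus side would need checking, since older releases add peers manually with 'rbd mirror pool peer add'):

   # on both clusters: enable mirroring for the pool
   rbd mirror pool enable <pool> pool
   # on the source cluster: create a bootstrap token
   rbd mirror pool peer bootstrap create --site-name old <pool> > token
   # on the destination cluster: import the token, then run an rbd-mirror daemon there
   rbd mirror pool peer bootstrap import --site-name new --direction rx-only <pool> token
   ceph orch apply rbd-mirror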

[ceph-users] Re: AssumeRoleWithWebIdentity in RGW with Azure AD

2024-07-11 Thread Pritha Srivastava
This is very helpful, I'll take a look at it. Thanks, Pritha On Thu, Jul 11, 2024 at 8:04 PM Ryan Rempel wrote: > Thanks! > > I took a crack at it myself, and have some work-in-progress here: > > https://github.com/cmu-rgrempel/ceph/pull/1 > > Feel free to use any of that if you like it. It's w

[ceph-users] Re: reef 18.2.3 QE validation status

2024-07-11 Thread Venky Shankar
Hi Yuri, On Wed, Jul 10, 2024 at 7:28 PM Yuri Weinstein wrote: > > We built a new branch with all the cherry-picks on top > (https://pad.ceph.com/p/release-cherry-pick-coordination). > > I am rerunning fs:upgrade: > https://pulpito.ceph.com/yuriw-2024-07-10_13:47:23-fs:upgrade-reef-release-distro

[ceph-users] Re: Help with Mirroring

2024-07-11 Thread Eugen Block
Hi, just one question coming to mind: if you intend to migrate the images separately, is it really necessary to set up mirroring? You could just 'rbd export' on the source cluster and 'rbd import' on the destination cluster. Quoting Anthony D'Atri: I would like to use mirroring to
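
The export/import route Eugen mentions can even be piped over SSH so nothing is staged on local disk; a minimal sketch with placeholder pool, image and host names:

   rbd export <pool>/<image> - | ssh <reef-host> 'rbd import - <pool>/<image>'

Note that a plain export copies the image contents at a single point in time; snapshots and subsequent writes are not carried over, which is where mirroring (or an export-diff/import-diff based approach) would come in.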