[ceph-users] Re: ceph iscsi gateway
You can compile this iSCSI stuff yourself, can't you? I did this recently with a recent Ceph version, but noticed it was performing worse than just an RBD mount.

> Okay Gregory,
> Bad news for me, I will have to find another way. The truth is that
> from the operational and long-term maintenance point of view it is
> practically transparent.
> This option fitted very well in our system, since the Ceph cluster is
> easy to maintain, while for example the iSCSI cabinets have to be
> migrated every so often.
>
> In our case, where the use of block access is not intensive, the truth
> is that we were not getting bad performance;
> it is also true that until now I had only exported 4 blocks with this
> system.
[ceph-users] Radosgw log Custom Headers
Hi folks,

I'd like to make sure that the RadosGW is using the X-Forwarded-For header as the source IP for ACLs. However, I do not find that information in the logs.

I have set (using beast):

ceph config set global rgw_remote_addr_param http_x_forwarded_for
ceph config set global rgw_log_http_headers http_x_forwarded_for

https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_log_http_headers

I hope someone can point me in the right direction!

Thanks,
Ansgar
[ceph-users] Re: 512e -> 4Kn hdd
Yes, I can remember testing this in the past; I also did not see any difference.

> I've played with these settings and didn't notice any benefit.
> Tools for changing sector size on HDDs are pretty terrible and it's
> really easy to brick your HDD when using them incorrectly.
> I wouldn't recommend it.
>
> Best regards
> Adam Prycki
>
> W dniu 30.01.2025 o 09:52, Marc pisze:
> > I used to have a tool from HGST/WD to 'convert' drives to 4Kn, but
> > after a mentally challenging interaction with some WD support, I decided
> > to try other solutions mentioned on the internet.
> >
> > This seems to produce output similar to the previously used tool:
> > sg_format --format --size=4096 /dev/sdd
> >
> > Anyone have experience with this? This is the first time the drives are
> > already in the nodes, so I don't have a power cycle available for them.
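For anyone trying this, a rough sketch of how one might check and change the logical sector size with sg3_utils; /dev/sdX is a placeholder, the drive must actually support 4Kn reformatting, and the operation destroys all data on the disk:

    # check current logical/physical block sizes
    lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
    sg_readcap --long /dev/sdX

    # reformat to 4096-byte logical sectors (destructive, can take hours)
    sg_format --format --size=4096 /dev/sdX

    # verify afterwards
    sg_readcap --long /dev/sdX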
[ceph-users] Re: Radosgw log Custom Headers
Same here, it worked only after the rgw service was restarted with this config:
rgw_log_http_headers http_x_forwarded_for

--
Paul Jurco

On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski < a.jazdzew...@googlemail.com> wrote:

> Hi folks,
>
> I'd like to make sure that the RadosGW is using the X-Forwarded-For header as
> the source IP for ACLs. However, I do not find that information in the logs.
>
> I have set (using beast):
> ceph config set global rgw_remote_addr_param http_x_forwarded_for
> ceph config set global rgw_log_http_headers http_x_forwarded_for
>
> https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_log_http_headers
>
> I hope someone can point me in the right direction!
>
> Thanks,
> Ansgar
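In other words, the options only take effect once the RGW daemons are restarted. A hedged sketch of what that could look like on a cephadm-managed cluster; the service name rgw.default is a placeholder:

    ceph config set global rgw_remote_addr_param http_x_forwarded_for
    ceph config set global rgw_log_http_headers http_x_forwarded_for

    # confirm what the daemons will pick up
    ceph config get client.rgw rgw_remote_addr_param
    ceph config get client.rgw rgw_log_http_headers

    # restart the RGW service so the new values are applied
    ceph orch restart rgw.default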
[ceph-users] ceph rbd + libvirt
Hi,

I need to manually create a few VMs with KVM. I would like to know if there is any difference between using the libvirt module and the kernel module to access a Ceph cluster.

Regards

--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
mer. 12 févr. 2025 16:14:15 CET
[ceph-users] Re: Squid 19.2.1 dashboard javascript error
Hi,

could you share a screenshot (in some pastebin)? I'm not sure what exactly you're seeing. But apparently, muting works as I understand it.

> I wish I knew whether mon/mgr host changes erase all the mutings. I have just
> now given the command "ceph health mute OSD_UNREACHABLE 180d" -- for the
> second time this week, and now the board shows green. I scrolled back through
> the command list to verify I did this. Indeed it was there.

I'm not aware that a MGR failover would unmute anything, sounds unlikely to me.

> Is there a command line that lists active mutings -- one that's not used by
> the dashboard, apparently?

You can still check 'ceph health detail', which will also show the muted warning:

ceph:~ # ceph health detail
HEALTH_OK (muted: MON_DISK_LOW)
(MUTED) [WRN] MON_DISK_LOW: mon ceph is low on available space
    mon.ceph has 40% avail

Zitat von Harry G Coin :

Yes, all the errors and warnings list as 'suppressed'. That doesn't affect the bug as reported below. Of some interest, "OSD_UNREACHABLE" is not listed on the dashboard's alert roster of problems, but it is in the command-line health detail. But really, when all the errors list as 'suppressed', whatever they are, then the dashboard should show green. Instead it flashes red, along with !Critical as detailed below.

I suspect what's really going on is that the detection method behind the 'red / yellow / green' decision and the !Critical decision is different from checking whether the number of unsilenced errors is > 0, even allowing for the possibility that many errors exist which could trigger HEALTH_ERR but have no entry in the roster of alerts.

I wish I knew whether mon/mgr host changes erase all the mutings. I have just now given the command "ceph health mute OSD_UNREACHABLE 180d" -- for the second time this week, and now the board shows green. I scrolled back through the command list to verify I did this. Indeed it was there.

Is there a command line that lists active mutings -- one that's not used by the dashboard, apparently?

On 2/10/25 14:00, Eugen Block wrote:

Hi, did you also mute the osd_unreachable warning?

ceph health mute OSD_UNREACHABLE 10w

Should bring the cluster back to HEALTH_OK for 10 weeks.

Zitat von Harry G Coin :

Hi Nizam,

Answers interposed below.

On 2/10/25 11:56, Nizamudeen A wrote:

Hey Harry,

Do you see that for every alert or for some of them? If some, what are those? I just tried a couple of them locally and saw the dashboard went to a happy state.

My sandbox/dev array has three chronic 'warnings/errors'. The first is a PG imbalance I'm aware of. The second is that all 27 OSDs are unreachable. The third is that the array has been in an error state for more than 5 minutes. Silencing/suppressing all of them still gives the 'red flashing broken dot' on the dashboard, the !Cluster status, and a notice of Alerts listing the previously suppressed errors/warnings. Under 'observability' we see no indications of errors/warnings under the 'alerts' menu option -- so you got that one right.

Can you tell me how the ceph health or ceph health detail looks after the muted alert? And also, does ceph -s report HEALTH_OK?

root@noc1:~# ceph -s
  cluster:
    id:     40671140f8
    health: HEALTH_ERR
            27 osds(s) are not reachable

  services:
    mon: 5 daemons, quorum noc4,noc2,noc1,noc3,sysmon1 (age 10m)
    mgr: noc1.j(active, since 37m), standbys: noc2.yhx, noc3.b, noc4.tc
    mds: 1/1 daemons up, 3 standby
    osd: 27 osds: 27 up (since 14m), 27 in (since 5w)

Ceph's actual core operations are otherwise normal.

It's hard to sell Ceph as a concept when showing that all the storage is at once unreachable and yet up and in as well. Not a big confidence builder.

Regards,
Nizam

On Mon, Feb 10, 2025 at 9:00 PM Harry G Coin wrote:

In the same code area: if all the alerts are silenced, the dashboard will nevertheless not show 'green', but red or yellow depending on the nature of the silenced alerts.

On 2/10/25 04:18, Nizamudeen A wrote:
> Thank you Chris,
>
> I was able to reproduce this. We will look into it and send out a fix.
>
> Regards,
> Nizam
>
> On Fri, Feb 7, 2025 at 10:35 PM Chris Palmer wrote:
>
>> Firstly, thank you so much for the 19.2.1 release. Initial testing
>> suggests that the blockers that we had in 19.2.0 have all been resolved,
>> so we are proceeding with further testing.
>>
>> We have noticed one small problem in 19.2.1 that was not present in
>> 19.2.0 though. We use the older-style dashboard
>> (mgr/dashboard/FEATURE_TOGGLE_DASHBOARD false). The problem happens on
>> the Dashboard screen when health changes to WARN. If you click on WARN
>> you get a small empty dropdown instead of the list of warnings. A
>> javascript error is logged, and using browser inspection there is the
>> additional
[ceph-users] Re: ceph rbd + libvirt
I don't really understand what you mean by "libvirt module". AFAIK this is how you add a disk (see the XML sketch below, after the quoted message):

> I need to manually create a few VMs with KVM. I would like to know if there is
> any difference between using the libvirt module and the kernel module to
> access a Ceph cluster.
>
> Regards
>
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> mer. 12 févr. 2025 16:14:15 CET
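A minimal sketch of the kind of libvirt RBD disk definition Marc is describing; the pool/image names, monitor hostnames and secret UUID are placeholders, not values from the original message:

    <!-- QEMU/librbd disk backed by an RBD image; the cephx secret must first be
         registered with virsh secret-define / secret-set-value -->
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/vm1-disk'>
        <host name='mon1.example.com' port='6789'/>
        <host name='mon2.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

With a definition like this, QEMU talks to the cluster through librbd; no kernel RBD device is created on the host.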
[ceph-users] Re: ceph rbd + libvirt
Le 12/02/2025 à 15:35:53+, Marc a écrit :

Hi,

> I don't really understand what you mean by "libvirt module".

It's exactly that.

> AFAIK this is how you add a disk:
> [XML snippet]

But I can also map the disk directly with /etc/ceph/rbdmap; at boot the disk will appear somewhere under /dev/rbd* on the KVM server, and I can then use it in KVM as a «normal» disk.

I don't know if there is any difference, or if it's just a preference.

Regards.

--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
mer. 12 févr. 2025 16:47:37 CET
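For reference, a hedged sketch of that rbdmap approach; the pool/image name, client id and keyring path are placeholders rather than values from the original message:

    # /etc/ceph/rbdmap -- one image per line: pool/image followed by map options
    libvirt-pool/vm1-disk id=libvirt,keyring=/etc/ceph/ceph.client.libvirt.keyring

    # map the image now and at every boot
    systemctl enable --now rbdmap.service

    # the kernel device shows up under /dev/rbd/<pool>/<image>
    ls -l /dev/rbd/libvirt-pool/vm1-disk

One trade-off, noted later in the thread: the mapping lives on the KVM host, so a misbehaving cluster can stall I/O for the whole host rather than for a single guest.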
[ceph-users] Re: ceph rbd + libvirt
Hi,

I'm guessing you are deciding between librbd and krbd. I personally use krbd, as in my original tests it was a bit faster. I think there are some cases where librbd is faster, but I don't remember those edge cases off the top of my head. That's my two cents.

On Wed, Feb 12, 2025, 17:28 Albert Shih wrote:

> Hi,
>
> I need to manually create a few VMs with KVM. I would like to know if there is
> any difference between using the libvirt module and the kernel module to
> access a Ceph cluster.
>
> Regards
>
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> mer. 12 févr. 2025 16:14:15 CET
[ceph-users] Re: ceph iscsi gateway
On 10-02-2025 12:26, Iban Cabrillo wrote:

> Good morning,
>
> I wanted to inquire about the status of the Ceph iSCSI gateway service. We
> currently have several machines installed with this technology that are
> working correctly, although I have seen that it appears to be discontinued
> since 2022. My question is whether to continue down this path, adding
> machines, or whether it will be discontinued in new Ceph distributions.
>
> Best regards and thanks in advance.

We are using LIO (targetcli) with rbd-nbd (it can also be used with krbd). We have made it HA by modifying CTDB scripts (the Samba HA solution) in a three-node setup. To prevent split-brain, a recovery lock on a Ceph object is used (recovery lock = !/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.$client poolname ctdb_sgw_rbd_recovery_lock 10).

This setup also supports SCSI persistent reservations, a requirement for a version of MSSQL cluster that uses shared storage. It does not do any fancy stuff like tcmu-runner can do with ALUA (multi-path), so you are bound to a single node for performance. But for us this is more than good enough. As soon as NVMe-oF supports persistent SCSI reservations we will make the shift to NVMe-oF [1].

Gr. Stefan

[1]: https://github.com/ceph/ceph-nvmeof/issues/41
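For anyone curious what the CTDB side of that looks like, a rough sketch using the modern ctdb.conf syntax; the helper path, Ceph user and pool name are placeholders, and the exact arguments should be checked against your ctdb_mutex_ceph_rados_helper build:

    # /etc/ctdb/ctdb.conf -- use a RADOS object as the recovery/cluster lock
    [cluster]
        recovery lock = !/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.ctdb rbd ctdb_sgw_rbd_recovery_lock 10

The arguments are: Ceph cluster name, CephX user, pool, lock object name, and an optional lock duration in seconds.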
[ceph-users] Re: ceph rbd + libvirt
The kernel code and Ceph proper are separate codebases, so if you're running an old enough kernel, KRBD may lack certain features, including compatibility with pg-upmap. I'm not sure about compatibility with RADOS / RBD namespaces. Last I knew, KRBD did not have support for RBD QoS, so if you need to throttle client ops for noisy neighbor reasons, that would favor librbd, as would having a super old kernel, like on CentOS 7.

> Hi,
>
> I need to manually create a few VMs with KVM. I would like to know if there is
> any difference between using the libvirt module and the kernel module to
> access a Ceph cluster.
>
> Regards
>
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> mer. 12 févr. 2025 16:14:15 CET
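To make the QoS point concrete, a hedged example of the per-image throttles librbd offers (option names as in recent Ceph releases; the pool and image names are placeholders):

    # limit one image to roughly 500 IOPS and 100 MB/s -- enforced by librbd clients only, krbd ignores these
    rbd config image set libvirt-pool/vm1-disk rbd_qos_iops_limit 500
    rbd config image set libvirt-pool/vm1-disk rbd_qos_bps_limit 104857600

    # or set a default for every image in the pool
    rbd config pool set libvirt-pool rbd_qos_iops_limit 500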
[ceph-users] Re: ceph rbd + libvirt
On 12-02-2025 16:49, Curt wrote:

> Hi,
>
> I'm guessing you are deciding between librbd and krbd. I personally use krbd,
> as in my original tests it was a bit faster. I think there are some cases
> where librbd is faster, but I don't remember those edge cases off the top of
> my head. That's my two cents.

You might find this benchmark made by 42on interesting:
https://www.youtube.com/watch?v=iVCPjNZa7N0

Gr. Stefan
[ceph-users] Re: NFS recommendations
Thanks, Alex! We certainly have more experience managing traditional NFS servers -- either as standalone servers or in an HA cluster managed by Pacemaker and Corosync. And while I can imagine benefits to keeping our NFS exports managed separately from the backend Ceph cluster, it also seemed worth testing the NFS Ganesha support built into Ceph.

Any other thoughts, experiences, or general recommendations would be greatly appreciated.

Thanks again!
Devin

> On Feb 5, 2025, at 9:14 PM, Alex Buie wrote:
>
> Can confirm I ran into this bug within the past six months, and we didn't find
> out until it caused an active outage. Definitely do not recommend the
> keepalived-only mode for NFS exports.
>
> To be honest, we ended up going with regular nfs-kernel-server on top of a
> FUSE mount of MooseFS community edition for that install, as it was a WORM
> media server. CephFS just wasn't a great fit for reliability. It has been
> working wonders across the WAN now though, keeping all our sites in sync.
>
> Alex Buie
> Senior Cloud Operations Engineer
> 450 Century Pkwy # 100 Allen, TX 75013
> D: 469-884-0225 | www.cytracom.com
>
> On Wed, Feb 5, 2025 at 9:10 PM Alexander Patrakov wrote:
>
> Hello Devin,
>
> Last time I reviewed the code for orchestrating NFS, I was left
> wondering how the keepalived-only mode can work at all. The reason is
> that I found nothing that guarantees that the active NFS server and
> the floating IP address would end up on the same node. This might have
> been an old bug, already fixed since then -- but I have not retested.
>
> On Thu, Feb 6, 2025 at 4:53 AM Devin A. Bougie wrote:
> >
> > Hi, All.
> >
> > We are new to Ceph, and looking for any general best practices WRT
> > exporting a CephFS file system over NFS. I see several options in the
> > documentation and have tested several different configurations, but haven't
> > yet seen much difference in our testing and aren't sure exactly which
> > configuration is generally recommended to start with.
> >
> > We have a single CephFS filesystem in our cluster of 10 hosts. Five of our
> > hosts are OSD hosts with the spinning disks that make up our CephFS data pool,
> > and only run the OSD services (osd, crash, ceph-exporter, node-exporter,
> > and promtail). The other five hosts are "admin" hosts that run everything
> > else (mds, mgr, mon, etc.).
> >
> > Our current setup follows the "HIGH-AVAILABILITY NFS" documentation, which
> > gives us an ingress.nfs.cephfs service with the haproxy and keepalived
> > daemons and an nfs.cephfs service for the actual NFS daemons. If there are
> > no downsides to this approach, are there any recommendations on placement
> > for these two services? Given our cluster, would it be best to run both on
> > the admin nodes? Or would it be better to have the ingress.nfs.cephfs
> > service on the admin nodes, and the backend nfs.cephfs services on the OSD
> > nodes?
> >
> > Alternatively, are there advantages in using the "keepalive only" mode
> > (only keepalived, no haproxy)? Or does anyone recommend doing something
> > completely different, like using Pacemaker and Corosync to manage our NFS
> > services?
> >
> > Any recommendations one way or another would be greatly appreciated.
> >
> > Many thanks,
> > Devin
>
> --
> Alexander Patrakov
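For readers following along, the HA NFS layout Devin describes is usually expressed as two cephadm service specs along the lines of the sketch below; the host names, virtual IP, ports and file name are placeholders and would need to match your own cluster:

    service_type: nfs
    service_id: cephfs
    placement:
      hosts:
        - admin1
        - admin2
        - admin3
    spec:
      port: 12049                  # backend ganesha port, proxied by the ingress service
    ---
    service_type: ingress
    service_id: nfs.cephfs
    placement:
      hosts:
        - admin1
        - admin2
        - admin3
    spec:
      backend_service: nfs.cephfs
      frontend_port: 2049          # what clients mount
      monitor_port: 9049           # haproxy status port
      virtual_ip: 192.0.2.10/24    # floating IP managed by keepalived

Applied with something like "ceph orch apply -i nfs-ingress.yaml", this keeps the haproxy/keepalived ingress and the backend ganesha daemons on the same hosts; splitting them (ingress on the admin nodes, nfs on the OSD nodes) is just a matter of giving the two specs different placements.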
[ceph-users] cephadm orchestrator feature request: scheduled rebooting of cluster nodes
Hi,

it just came to my mind that it would be nice if the orchestrator were able to reboot all nodes of a cluster in a non-disruptive way. This could be done similarly to an upgrade, where the orchestrator checks whether the daemons on a host can be restarted, including running "ceph osd add-noout $(hostname)" before the reboot and removing the flag afterwards.

Regards
--
Robert Sander
Linux Consultant

Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: +49 30 405051 - 0
Fax: +49 30 405051 - 19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
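Until something like this exists in the orchestrator, a very rough shell sketch of the idea, assuming "ceph orch host ok-to-stop" and per-host noout groups behave as documented; ssh access to the hosts and jq are assumed, and this is illustrative only, not a tested procedure:

    #!/bin/bash
    # Roll through all cephadm-managed hosts, rebooting one at a time.
    for host in $(ceph orch host ls --format json | jq -r '.[].hostname'); do
        # skip hosts whose daemons cannot be stopped safely right now
        ceph orch host ok-to-stop "$host" || { echo "skipping $host"; continue; }

        ceph osd set-group noout "$host"    # avoid rebalancing during the reboot
        ssh "$host" sudo reboot
        sleep 120                           # give the host time to go down and come back

        # wait until no OSDs are reported down any more
        while ceph health detail | grep -q OSD_DOWN; do sleep 30; done

        ceph osd unset-group noout "$host"
    done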
[ceph-users] Ceph Day Silicon Valley 2025 - Registration and Call for Proposals Now Open!
Hi everyone,

Secure your spot today at Ceph Day Silicon Valley -- registration for the event is now open! As a reminder, the CFP closes next week. Find more details at https://ceph.io/en/community/events/2025/ceph-days-almaden/.

Thanks,
Neha

On Thu, Jan 30, 2025 at 12:51 PM Neha Ojha wrote:
>
> Dear Ceph Community,
>
> We are very excited to announce that Ceph Day is returning to Silicon
> Valley after seven years! On March 25, 2025, we will be hosting Ceph
> Day at IBM Almaden Research Center in San Jose, CA. We are now
> accepting talk proposals! For more details, visit
> https://ceph.io/en/community/events/2025/ceph-days-almaden/.
>
> We look forward to bringing the Ceph community together once again!
>
> Cheers,
> Neha
[ceph-users] Re: ceph rbd + libvirt
> But I can also map the disk directly with /etc/ceph/rbdmap; at boot the disk
> will appear somewhere under /dev/rbd* on the KVM server, and I can then use it
> in KVM as a «normal» disk.
> Don't know if there is any difference, or if it's just a preference.

If you map with KRBD and the Ceph cluster acts up, the KVM host might stop serving I/O, which would affect a lot more than if KVM used librbd to mount the image for the guest only; in that second case, the host and possibly the other guests might carry on undisturbed.

--
May the most significant bit of your life be positive.
[ceph-users] Re: Radosgw log Custom Headers
What about something like this in the rgw section of ceph.conf?

rgw_enable_ops_log = true
rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5
rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log

Rok

On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote:

> Same here, it worked only after the rgw service was restarted with this
> config:
> rgw_log_http_headers http_x_forwarded_for
>
> --
> Paul Jurco
>
> On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski <
> a.jazdzew...@googlemail.com> wrote:
>
> > Hi folks,
> >
> > I'd like to make sure that the RadosGW is using the X-Forwarded-For header as
> > the source IP for ACLs. However, I do not find that information in the logs.
> >
> > I have set (using beast):
> > ceph config set global rgw_remote_addr_param http_x_forwarded_for
> > ceph config set global rgw_log_http_headers http_x_forwarded_for
> >
> > https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_log_http_headers
> >
> > I hope someone can point me in the right direction!
> >
> > Thanks,
> > Ansgar
[ceph-users] Re: cephadm orchestrator feature request: scheduled rebooting of cluster nodes
Hey Robert,

That's an interesting idea. I think it would also be great if we could gracefully restart all daemons in a cluster (without a reboot). The implementation would indeed be like an upgrade, just without the newer version (the orchestrator needs to check ok-to-stop before stopping a daemon).

Cheers, Dan

On Wed, Feb 12, 2025 at 8:41 AM Robert Sander wrote:
>
> Hi,
>
> it just came to my mind that it would be nice if the orchestrator were
> able to reboot all nodes of a cluster in a non-disruptive way.
> This could be done similarly to an upgrade, where the orchestrator checks
> whether the daemons on a host can be restarted, including running
> "ceph osd add-noout $(hostname)" before the reboot and removing the flag
> afterwards.
>
> Regards
> --
> Robert Sander
> Linux Consultant
>
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: +49 30 405051 - 0
> Fax: +49 30 405051 - 19
>
> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin

--
Dan van der Ster
CTO @ CLYSO
Try our Ceph Analyzer -- https://analyzer.clyso.com/
https://clyso.com | dan.vanders...@clyso.com
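For the time being, the closest approximation is probably restarting services one at a time via the orchestrator. A hedged sketch (jq is assumed; note that "ceph orch restart" does not do the ok-to-stop checks Dan mentions, which is exactly the gap being discussed):

    # restart every cephadm-managed service, one service at a time
    for svc in $(ceph orch ls --format json | jq -r '.[].service_name'); do
        ceph orch restart "$svc"
        sleep 60   # crude pause; a real implementation would wait for health to settle
    done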