You can compile this iSCSI stuff yourself, can't you? I did this recently with a
recent Ceph version, but noticed it was performing worse than a plain RBD mount.
> Okay Gregory,
> Bad news for me, I will have to find another way. The truth is that
> from the operational and long-term maintenance point of view
Hi folks,
I'd like to make sure that the RadosGW is using X-Forwarded-For as the
source IP for ACLs.
However, I do not find this information in the logs.
I have set (using Beast):
ceph config set global rgw_remote_addr_param http_x_forwarded_for
ceph config set global rgw_log_http_headers http_x_forwarded_for
Yes, I remember testing this in the past and also did not see any difference.
> I've played with these settings and didn't notice any benefit.
> Tools for changing sector size on HDDs are pretty terrible and it's
> really easy to brick your HDD when using incorrectly.
> I wouldn't recommend.
>
Same here, it worked only after the RGW service was restarted with this config:
rgw_log_http_headers http_x_forwarded_for
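For completeness, on a cephadm-managed cluster that amounted to roughly this
(the service name rgw.default is a placeholder for your own):

ceph config set global rgw_remote_addr_param http_x_forwarded_for
ceph config set global rgw_log_http_headers http_x_forwarded_for
# the running radosgw daemons only picked the header settings up after a restart
ceph orch restart rgw.default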
--
Paul Jurco
On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski <
a.jazdzew...@googlemail.com> wrote:
> Hi folks,
>
> I'd like to make sure that the RadosGW is using X-Forwarded-For as
Hi,
I need to manually create a few VMs with KVM. I would like to know if there is
any difference between using the libvirt module and the kernel module to access
a Ceph cluster.
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
Wed 12 Feb 2025 16:14:15 CET
Hi,
could you share a screenshot (in some pastebin)? I'm not sure what
exactly you're seeing, but apparently muting works as I understand it.
I wish I knew whether mon/mgr host changes erase all the mutes.
I have just now given the command "ceph health mute OSD_UNREACHABLE
180d".
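For what it's worth, the mutes show up in the detailed health output, so a quick
way to check whether they survived a mon/mgr move is something like this sketch:

ceph health mute OSD_UNREACHABLE 180d   # mute the warning for 180 days
ceph health detail                      # muted checks are listed here as muted
ceph health unmute OSD_UNREACHABLE      # drop the mute again if needed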
I don't really understand what you mean by libvirt module.
AFAIK this is how you add a disk
>
> I need to create manually few VM with KVM. I would like to know if they
> are
> any difference bet
On 12/02/2025 at 15:35:53+, Marc wrote:
Hi,
> I don't really understand what you mean with libvirt module.
It's exactly that.
>
> AFAIK this is how you add a disk
Hi,
I'm guessing you are deciding between librbd and krbd. I personally use
krbd as in my original tests it was a bit faster. I think there are some
cases where librbd is faster, but I don't remember those edge cases off the
top of my head.
That's my two cents.
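For anyone comparing, the two attach paths look roughly like this (just a
sketch; pool, image, VM name, monitor host and secret UUID are placeholders):

# librbd: let QEMU talk to the cluster directly via a libvirt network disk
cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='vms/disk1'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device myvm rbd-disk.xml --persistent

# krbd: map the image on the hypervisor and hand the block device to the VM
rbd map vms/disk1                      # appears as /dev/rbd0 (/dev/rbd/vms/disk1)
virsh attach-disk myvm /dev/rbd0 vdb --persistent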
On Wed, Feb 12, 2025, 17:28 Albert
On 10-02-2025 12:26, Iban Cabrillo wrote:
Good morning,
I wanted to inquire about the status of the Ceph iSCSI gateway service. We
currently have several machines installed with this technology that are working
correctly,
although I have seen that it appears to have been discontinued since 2022. M
The kernel code and Ceph proper are separate codebases, so if you’re running an
old enough kernel KRBD may lack certain features, including compatibility with
pg-upmap. I’m not sure about compatibility with RADOS / RBD namespaces.
Last I knew KRBD did not have support for RBD QoS, so if you nee
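If QoS is the deciding factor, librbd limits can be set per image, e.g. (pool and
image names are placeholders; these limits do not apply to krbd clients):

rbd config image set vms/disk1 rbd_qos_iops_limit 500        # cap at 500 IOPS
rbd config image set vms/disk1 rbd_qos_bps_limit 104857600   # cap at 100 MB/s
rbd config image ls vms/disk1 | grep qos                     # verify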
On 12-02-2025 16:49, Curt wrote:
Hi,
I'm guessing you are deciding between librbd and krbd. I personally use
krbd as in my original tests it was a bit faster. I think there are some
cases where librbd is faster, but I don't remember those edge cases off the
top of my head.
That's my two cents.
Thanks, Alex! We certainly have more experience managing traditional NFS
servers - either as standalone servers or in an HA cluster managed by pacemaker
and corosync. And while I can imagine benefits to keeping our NFS exports
managed separately from the backend Ceph cluster, it also seemed wo
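For reference, the orchestrator-managed route is roughly the following (a
sketch; cluster and export names are placeholders, and the exact export-create
flags vary a bit by release):

ceph nfs cluster create mynfs "2 host-a host-b"    # HA pair of ganesha daemons
ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /shares/data \
    --fsname cephfs --path /data
ceph nfs cluster info mynfs                        # see where it was deployed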
Hi,
it just crossed my mind that it would be nice if the orchestrator
were able to reboot all nodes of a cluster in a non-disruptive way.
This could be done similarly to an upgrade, where the orchestrator checks
whether the daemons on a host can be restarted.
Including "ceph osd add-noout $(host
Hi everyone,
Secure your spot today at Ceph Day Silicon Valley - registration for
the event is now open! As a reminder, the CFP closes next week. Find
more details in
https://ceph.io/en/community/events/2025/ceph-days-almaden/.
Thanks,
Neha
On Thu, Jan 30, 2025 at 12:51 PM Neha Ojha wrote:
>
>
> but I also can mount the disk directly with
>
> /etc/ceph/rbdmap
>
> at boot the disk will appear somewhere in /dev/rbd* on the KVM server
> and then use it in KVM as a «normal» disk.
> Don't know if there is any difference or just a preference.
If you mount with KRBD and the ceph cluster a
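For reference, the rbdmap route looks roughly like this (a sketch; pool, image
and CephX user are placeholders):

# /etc/ceph/rbdmap -- one image per line
vms/disk1    id=libvirt,keyring=/etc/ceph/ceph.client.libvirt.keyring

# map the listed images at boot; the device shows up as /dev/rbd/<pool>/<image>
systemctl enable --now rbdmap.service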
What about something like this in the rgw section of ceph.conf?
rgw_enable_ops_log = true
rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5
rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log
Rok
On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote:
> Same here, it worked
Hey Robert,
That's an interesting idea. I think it would also be great if we could
gracefully restart all daemons in a cluster (without a reboot).
The implementation would indeed be like an upgrade, without the newer
version. (Orchestrator needs to check ok-to-stop before stopping a
daemon).
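Something close to that is already possible per service, just without the
ok-to-stop gating (a sketch; requires jq, service names come from "ceph orch ls"):

# restart every orchestrator-managed service, one service at a time
for svc in $(ceph orch ls --format json | jq -r '.[].service_name'); do
    ceph orch restart "$svc"
done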
Cheers,