Hi Frederic,
> We've been successfully using PetaSAN [1] iSCSI gateways with an external
> Ceph cluster (not the one deployed by PetaSAN itself) on VMware hypervisors
> (135 VMs, 75TB) for a year and a half.
I am interested in this.
We are currently using a TrueNAS VM on our Proxmox cluster th
Hi Iban,
If NVMe-oF, which is meant to replace iSCSI in Ceph, is not feasible for
you, you could take a look at PetaSAN's iSCSI implementation.
We've been successfully using PetaSAN [1] iSCSI gateways with an external Ceph
cluster (not the one deployed by PetaSAN itself) on VMware hypervi
Hi,
I haven't really seen much about this kind of event in the Ceph KB:
"event": "header_read",
"time": "2025-02-18T14:20:00.610389+0700",
"duration": 4294967295.99
However, we have huge latencies caused by this operation:
"description": "osd_repop(client.3694832962.0:140533664 26.561
Hi,
On 18.02.25 05:05, 苏察哈尔灿 wrote:
root@backup-server:/mnt# du rbd/Win2003-002-198-bak/Win2003-002-198-bak_5-flat.vmdk -h
1.9G    rbd/Win2003-002-198-bak/Win2003-002-198-bak_5-flat.vmdk
After copying to cephfs:
root@backup-server:/mnt# du cephfs/Win2003-002-198-bak_5-flat.vmdk -h
30G
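One thing worth ruling out (just a guess from the numbers): if the -flat.vmdk is
sparse on the RBD-backed mount, a plain copy can expand the holes on CephFS,
which would account for 1.9G growing to 30G. Something like the following,
reusing the paths from your example, would show apparent vs. allocated size and
re-copy the file while preserving holes:

# apparent size vs. blocks actually allocated
du -h --apparent-size cephfs/Win2003-002-198-bak_5-flat.vmdk
du -h cephfs/Win2003-002-198-bak_5-flat.vmdk

# sparse-preserving copy (GNU cp, or rsync)
cp --sparse=always rbd/Win2003-002-198-bak/Win2003-002-198-bak_5-flat.vmdk cephfs/
rsync --sparse rbd/Win2003-002-198-bak/Win2003-002-198-bak_5-flat.vmdk cephfs/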
Hello Eugen and all,
Thanks for the reply. We’ve checked the SuSE doc before raising it twice, from
100k to 125k and then to 150k.
We are a bit worried about the continuous growth of strays at 50K a day and
would like to find an effective way to reduce them.
Last night another 30K increase in
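In case it helps with tracking the growth, the stray counters can be read from
the active MDS's admin socket (the daemon name below is a placeholder);
num_strays and the related counters sit in the mds_cache section of the perf
dump:

# current stray count and related counters on the active MDS
ceph daemon mds.<name> perf dump | grep -i stray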
Hi,
that's an interesting observation; I haven't heard anything like that
yet. More responses inline...
Quoting Jeremi-Ernst Avenant:
Hi all,
I recently migrated my Ceph cluster from *ceph-ansible* to *cephadm* (about
five months ago) and upgraded from *Pacific 16.2.11* to *Quincy (lat
Hi Vignesh,
So a few questions:
How many PGs have you got configured for the Ceph pool that you are testing
against?
If this number is not large enough, then you may only be using a subset of the
available devices.
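For reference, the pool's current PG count and whether the autoscaler is
managing it can be checked with the commands below (<pool> is a placeholder for
the pool you are benchmarking against):

# pg_num currently set on the pool
ceph osd pool get <pool> pg_num
# per-pool detail, including replication size and pg_num
ceph osd pool ls detail
# autoscaler view of target vs. actual PG counts
ceph osd pool autoscale-status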
Have you tried the same benchmark without the replication setup?
Tryin
Hi all,
I recently migrated my Ceph cluster from *ceph-ansible* to *cephadm* (about
five months ago) and upgraded from *Pacific 16.2.11* to *Quincy (latest at
the time)*, followed by an upgrade to *Reef 18.2.4* two months later - due
to running an unsupported version of Ceph. Since this migration