Hi,
> On 26 Nov 2024, at 16:10, Matthew Darwin wrote:
>
> I guess there is a missing dependency (which really should be
> auto-installed) and which is also not documented in the release notes as a new
> requirement. This seems to fix it:
This is caused by [1]; the fix was not backported to Quincy.
Hi,
> On 31 Jan 2025, at 17:25, Preisler, Patrick wrote:
>
> we would like to have a detailed usage report for our S3 Buckets. I already
> installed this rgw exporter
> https://github.com/blemmenes/radosgw_usage_exporter and it does provide
> useful information about the buckets and the amoun
I concur strongly with Matthew’s assessment.
k
Sent from my iPhone
> On 6 Feb 2025, at 16:13, Matthew Leonard (BLOOMBERG/ 120 PARK)
> wrote:
>
> Bloomberg is mainly agnostic to the time delay, obviously getting back in
> alignment with OS releases is ideal.
>
> We cannot overstate our agree
Hi,
The respondents are not confused about the topic; rather, it is once again being
highlighted that a release every 9 months is not really necessary, considering
that the team shipped the last bugfix release in July of last year (it is now
February). Regarding containers, once again the community
Hi,
You can always consult the Releases page [1]
Thanks,
k
[1] https://github.com/prometheus-community/smartctl_exporter/releases
Sent from my iPhone
> On 9 Apr 2025, at 17:51, Anthony D'Atri wrote:
>
> Unless something has changed with smartctl_exporter, there wasn’t working
> support for
Hi,
It would be very nice if this module were removed. Everything a Ceph
operator needs can be covered via smartctl_exporter [1]
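For instance, a minimal Prometheus scrape job for smartctl_exporter might look
like the sketch below (port 9633 is assumed to be the exporter's default listen
port, and the host names are placeholders):

  # prometheus.yml fragment (sketch)
  scrape_configs:
    - job_name: smartctl
      static_configs:
        - targets:
            - 'osd-host-1:9633'   # placeholder hosts running smartctl_exporter
            - 'osd-host-2:9633'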
Thanks,
k
[1] https://github.com/prometheus-community/smartctl_exporter
Sent from my iPhone
> On 8 Apr 2025, at 02:20, Yaarit Hatuka wrote:
>
> We would l
Hi,
> On 11 Apr 2025, at 09:59, Iban Cabrillo wrote:
>
> 10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph
> name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,noatime,mds_namespace=cephvmsfs,_netdev
> 0 0
Try adding the ms_mode option, since you are using the msgr2 protocol (port 3300). For example,
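a sketch of the same fstab entry with ms_mode added might look like this
(ms_mode=prefer-crc is an assumption; crc or secure are also valid, depending on
your security requirements):

  10.10.3.1:3300,10.10.3.2:3300,10.10.3.3:3300:/ /cephvmsfs ceph name=cephvmsfs,secretfile=/etc/ceph/cephvmsfs.secret,noatime,mds_namespace=cephvmsfs,ms_mode=prefer-crc,_netdev 0 0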
Hi,
> On 11 Apr 2025, at 10:53, Alex from North wrote:
>
> Hello Tim! First of all, thanks for the detailed answer!
> Yes, a setup of 4 nodes with 116 OSDs each probably looks a bit overloaded, but
> what if I have 10 nodes? Yes, the nodes themselves are still heavy, but overall it
> seems to be not that
Hi,
> On 21 Apr 2025, at 15:02, 段世博 wrote:
>
> Hi everyone, I would like to ask about the compatibility of librbd
> versions. If the client version (librbd) used is 17.2.8, but the backend
> cluster is 16.2.15 or 15.2.17, or even lower versions, can it be accessed
> normally? Any reply will be v
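For reference, a quick way to see exactly what the client and the cluster are
running, and which feature bits connected clients report (standard Ceph CLI;
this is a sketch, not a statement on compatibility guarantees):

  # On the client: report the installed client/librbd version
  rbd --version

  # On the cluster: show the versions of all running daemons
  ceph versions

  # On the cluster: show the feature bits of currently connected clients
  ceph features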
Hi,
> On 8 May 2025, at 23:12, Erwin Bogaard wrote:
>
> It looks like there is an issue with the package-based 18.2.7 release, when
> upgrading from 18.2.6 on el9.
> There seems to be a new (unfulfilled) dependency that prevents the packages
> from installing:
>
> Problem 1: cannot install the
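The quoted dnf output is truncated here; a generic way to surface the
unsatisfied dependency is sketched below (the capability string is a
placeholder, to be replaced with the one named in the actual error):

  # Run the depsolve without installing, so the full dependency problem is printed
  dnf install ceph-common --assumeno

  # Ask which package, if any, provides the missing capability
  dnf provides 'libfoo.so.1()(64bit)'   # placeholder capability, not a real one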
Hi,
> On 20 May 2025, at 11:26, farhad kh wrote:
>
> Hi, I need to install ceph-common from the Quincy repository, but I'm
> getting this error:
> ---
> Ceph x86_64                                      0.0 B/s |   0 B     00:01
> Errors during downloading metadata for repository 'Ceph':
> - Status code: 404
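A 404 here usually means the baseurl does not point at an existing directory on
download.ceph.com. A sketch of a repo file following the documented layout,
assuming an el9 host (check which of el8/el9 actually exists under rpm-quincy
for your distro):

  # /etc/yum.repos.d/ceph.repo (sketch)
  [ceph]
  name=Ceph packages for x86_64
  baseurl=https://download.ceph.com/rpm-quincy/el9/x86_64/
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc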
Hi,
> On 27 May 2025, at 10:54, Szabo, Istvan (Agoda)
> wrote:
>
> Some status update: it finished after stopping and starting the rebalance three times.
> It would be interesting to know what extra data is generated on the new OSDs
> during remapped PG allocation at rebalance. I stopped when the osd
Hi,
Thanks for the perfect overview of the Reef release! I'll steal it as-is for a
slide, as another illustration of why it's important to have an update strategy.
Sometimes folks don't understand why our 75 clusters are still running the
Nautilus or Pacific release
Thanks,
k
Sent from my iPhone
>
Hi,
While deploying a new project, we discovered poor performance [1] with Western
Digital Ultrastar DC HC560 hard drives (20TB, WUH722020BLE6L4): with an object
size of less than 128k, the speed of the Ceph pool is comparable to a USB flash
drive. Be vigilant when make
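For anyone who wants to reproduce this kind of comparison, a minimal sketch with
rados bench (the pool name is a placeholder; 65536 bytes = 64 KiB objects):

  # Write 64 KiB objects for 60 s with 16 concurrent ops; keep them for the read test
  rados bench -p testpool 60 write -b 65536 -t 16 --no-cleanup

  # Sequential read of the objects written above
  rados bench -p testpool 60 seq -t 16

  # Remove the benchmark objects afterwards
  rados -p testpool cleanup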
Hi,
> On 4 Jul 2025, at 13:14, Marc wrote:
>
> How is it worse than any other hdd of that size?
At the moment we have just under 3000 Toshiba MG10ACA drives and have not
observed any such issues.
k