Hey folks,
We have a 16.2.7 cephadm cluster that has been hitting slow ops and several
(constantly changing) laggy PGs. The set of OSDs reporting slow ops seems to
change at random across all 6 OSD hosts in the cluster. All drives are
enterprise SATA SSDs from either Intel or Micron. We're still not ruling out
6.2.7?
> > >
> > > What is your pool configuration (EC k+m or replicated X setup), and do you
> > > use the same pool for indexes and data? I'm assuming this is RGW usage via
> > > the S3 API, let us know if this is not correct.
> > >
> >
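For anyone following along, the pool layout and any EC profile asked about above can be read straight from the cluster. A minimal sketch, assuming admin keyring access on a mon/admin node; the profile name `myprofile` is a placeholder:

```shell
# List every pool with its replicated size or erasure-code profile,
# pg_num, and flags -- this answers the EC k+m vs. replicated question
ceph osd pool ls detail

# If a pool line shows "erasure profile <name>", inspect its k and m
# ("myprofile" is a placeholder profile name)
ceph osd erasure-code-profile get myprofile

# Current SLOW_OPS / laggy-PG warnings, with the OSDs involved
ceph health detail
```

Comparing the data and index pools in that output also answers whether RGW indexes share a pool with objects.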
Hey y'all -
As a datapoint, I *don't* see this issue on 5.17.4-200.fc35.x86_64. Hosts are
Fedora 35 Server, running Ceph 17.2.0. Happy to test or provide more data from this
cluster if it would be helpful.
-Alex
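Since the kernel version appears to matter here, it may help to collect kernel and Ceph versions across all hosts for comparison. A minimal sketch, assuming SSH access and a working `ceph` CLI; the hostnames are placeholders:

```shell
# Record the running kernel on each OSD host (hostnames are examples only)
for h in ceph01 ceph02 ceph03; do
    echo -n "$h: "
    ssh "$h" uname -r
done

# Summarize Ceph daemon versions across the whole cluster (JSON output)
ceph versions
```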
On May 11, 2022, 2:02 PM -0400, David Rivera wrote:
> Hi,
>
> My experience is similar, I
Ubuntu Noble *is* an LTS release, 24.04.
> On Oct 28, 2024, at 06:40, Robert Sander wrote:
>
> Hi
>
>> On 10/25/24 19:57, Daniel Brown wrote:
>> Think I’ve asked this before but — has anyone attempted to use a cephadm
>> type install with Debian Noble running on Arm64? Have tried both Reef an