Jacek Suchenia writes:
> On two of our clusters (all v14.2.8) we observe a very strange behavior:
> Over time, the rgw_qactive perf counter keeps growing, reaching 6k entries within 12h.
As another data point, we're seeing the same here, on one of our two
clusters, both also running 14.2.8.
The growth
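A minimal way to watch that counter, assuming the daemon exposes its
admin socket under a name like the placeholder client.rgw.gateway1:

  ceph daemon client.rgw.gateway1 perf dump | grep '"qactive"'

If the socket name is unclear, pointing "ceph --admin-daemon" at the
rgw .asok file directly works as well.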
Mariusz Gronczewski writes:
> listing itself is bugged in the version
> I'm running: https://tracker.ceph.com/issues/45955
Ouch! Are your OSDs all running the same version as your RadosGW? The
message looks a bit as if your RadosGW might be a newer version than the
OSDs, and the new optimized bucket l
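A quick way to check for such a mismatch is the cluster-wide version
report (available since Luminous):

  ceph versions             # daemon counts grouped by running version
  ceph tell osd.* version   # per-OSD detail if something looks off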
Dear Mariusz,
> we're using Ceph as S3-compatible storage to serve static files (mostly
> css/js/images + some videos) and I've noticed that there seem to be
> huge read amplification for index pool.
we have observed that too, under Nautilus (14.2.4-14.2.8).
> Incoming traffic magnitude is of ar
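For anyone trying to quantify this, per-pool I/O counters are a rough
but easy starting point; the index pool name below is the common
default and may differ per zone:

  rados df
  ceph osd pool stats default.rgw.buckets.index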
Sorry for following up on myself (again), but I had left out an
important detail:
Simon Leinen writes:
> Using the "stupid" allocator, we never had any crashes with this
> assert. But the OSDs run more slowly this way.
> So what we ended up doing was: When an OSD crashed w
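A sketch of switching a single crashed OSD over, assuming a systemd
deployment and with osd.123 as a placeholder id:

  ceph config set osd.123 bluestore_allocator stupid
  systemctl restart ceph-osd@123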
Igor Fedotov writes:
> 2) Main device space is highly fragmented - 0.84012572151981013 where
> 1.0 is the maximum. Can't say for sure but I presume it's pretty full
> as well.
As I said, these disks aren't that full as far as bytes are concerned.
But they do have a lot of objects on them! As I sai
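A fragmentation score like the one quoted above can be obtained from
the allocator scoring command on the admin socket (osd.123 is a
placeholder; 0.0 means unfragmented, 1.0 is the worst case):

  ceph daemon osd.123 bluestore allocator score block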
Simon Leinen writes:
>> I can suggest the following workarounds to start the OSD for now:
>> 1) switch allocator to stupid by setting 'bluestore allocator'
>> parameter to 'stupid'. Presume you have default setting of 'bitmap'
>> now.. This w
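For completeness, a minimal ceph.conf sketch of that workaround, to be
followed by a restart of the affected OSDs:

  [osd]
  bluestore allocator = stupid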
Dear Igor,
thanks a lot for the analysis and recommendations.
> Here is a brief analysis:
> 1) Your DB is pretty large - 27GB at DB device (making it full) and
> 279GB at main spinning one. I.e. RocksDB is experiencing huge
> spillover to slow main device - expect performance drop. And generall
AdminSocket:
request '{"prefix": "bluestore allocator score bluefs-wal"}' not defined
------
See anything interesting?
--
Simon.
> Thanks,
> Igor
> On 5/29/2020 1:05 PM, Simon Leinen wrote:
>>
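As an aside on the spillover mentioned above: the split between the DB
device and the slow main device is visible in the bluefs perf counters
(osd.123 is a placeholder), and Nautilus also flags it with a
BLUEFS_SPILLOVER health warning:

  ceph daemon osd.123 perf dump | grep -E '"(db|slow)_used_bytes"'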
Colleague of Harry's here...
Harald Staub writes:
> This is again about our bad cluster, with too many objects, and the
> HDD OSDs have a DB device that is (much) too small (e.g. 20 GB, i.e. 3
> GB usable). Now several OSDs do not come up any more.
> Typical error message:
> /build/ceph-14.2.8/sr
We have been using RadosGW with Keystone integration for a couple of
years, to allow users of our OpenStack-based IaaS to create their own
credentials for our object store. This has caused us a fair number of
performance headaches.
Last year, James Weaver (BBC) contributed a patch (PR #26095
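For context, the RadosGW side of such a Keystone integration is
typically just a few ceph.conf options; the section name and all
values below are placeholders, not a recommendation:

  [client.rgw.gateway1]
  rgw_s3_auth_use_keystone = true
  rgw_keystone_url = https://keystone.example.org:5000
  rgw_keystone_api_version = 3
  rgw_keystone_admin_user = rgw
  rgw_keystone_admin_password = secret
  rgw_keystone_admin_project = service
  rgw_keystone_admin_domain = default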
Kristof Coucke writes:
> I have an issue on my Ceph cluster.
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3
> coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
> The question n
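For the arithmetic behind that expectation: a 6+3 erasure-coded pool
writes (k+m)/k = 9/6 = 1.5 times the logical data, so 107 TiB STORED
should come to roughly 107 * 1.5 = 160.5 TiB USED, ignoring per-object
overhead and allocation granularity.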