500K object size
> On 13 Jun 2024 at 21:11, Anthony D'Atri wrote:
>
> How large are the objects you tested with?
>
>> On Jun 13, 2024, at 14:46, si...@turka.nl wrote:
>>
>> I have been doing some further testing.
>>
>> My RGW pool is placed on spinning disks.
>> I crea
I don’t have much experience with the dashboard.
Can you try radosgw-admin bucket rm and pass --bypass-gc --purge-objects?
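For example, something along these lines (the bucket name is just a placeholder, adjust it to your setup):

  # delete the bucket and all its objects directly, bypassing the garbage collector
  radosgw-admin bucket rm --bucket=<bucket-name> --bypass-gc --purge-objects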
> On 18 Jun 2024 at 17:44, Simon Oosthoek wrote:
>
> Hi
>
> When deleting an S3 bucket, the operation took longer than the time-out for
> the das
Are the weights correctly set? So 1.6 for a 1.6TB disk and 1.0 for 1TB disks
and so on.
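For example, to check and adjust the CRUSH weights (osd.12 and the value 1.6 are only placeholders here):

  # show CRUSH weight and utilization per OSD
  ceph osd df tree
  # set the CRUSH weight of one OSD to match its raw capacity in TiB
  ceph osd crush reweight osd.12 1.6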
> On 19 Jun 2024 at 08:32, Jan Marquardt wrote:
>
>
>> Our Ceph cluster uses 260 OSDs.
>> The highest OSD usage is 87%, but the lowest is under 40%.
>> We consider low
Hello,
I am currently managing a Ceph cluster that consists of 3 racks, each with
4 OSD nodes. Each node contains 24 OSDs. I plan to add three new nodes, one
to each rack, to help alleviate the high OSD utilization.
The current highest OSD utilization is 85%. I am concerned about the
possibility
What is the output of:
ceph osd dump | grep require_osd_release
Have you upgraded OpenStack (librados) as well?
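On a cluster that has fully completed the Octopus upgrade I would expect the output to look something like this (illustrative):

  require_osd_release octopus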
On Thu 15 May 2025 at 10:50, Sergio Rabellino wrote:
> Dear list,
>
> we are upgrading our ceph infrastructure from mimic to octopus (please
> be kind, we know that we are working
Swap usage is not wrong per se, as long as you have enough available
memory. However, it can still lead to performance issues. If you don't want
to get rid of swap, check your swappiness setting
(cat /proc/sys/vm/swappiness).
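For example, to inspect and lower it (the value 10 below is only an illustration, pick whatever fits your workload):

  # read the current value
  sysctl vm.swappiness
  # change it at runtime
  sysctl -w vm.swappiness=10
  # make it persistent across reboots
  echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf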
On Fri 23 May 2025 at 14:53, Anthony D'Atri wrote:
>
>
> > Swap can p
Hi all,
I am investigating a performance issue related to uploading S3 objects and
would appreciate any insights.
I have a bucket with ~200 million objects. I noticed that re-uploading
files that already exist in the bucket appears to be slower than uploading
them for the first time.
To verify t
Hi all,
I am running a Ceph Octopus cluster deployed with cephadm. I have three RGW
nodes, and the RGW containers are writing their logs to
/var/lib/docker/containers//-json.log
Over time this causes the mount point to become 100% utilized.
What is the recommended way to manage or limit these logs?
-
Hi Vivien,
Your fio test has very likely destroyed the Ceph OSD block device; the
problem is not just the symlink, it's data corruption on the underlying
device.
Zap the drive, recreate the OSD and let your cluster rebalance.
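For example, roughly like this (the OSD id, host name and device path are just placeholders, and the zap command assumes a cephadm/orchestrator-managed cluster):

  # remove the dead OSD from the cluster
  ceph osd purge <osd-id> --yes-i-really-mean-it
  # wipe the underlying device so it can be redeployed as a fresh OSD
  ceph orch device zap <host> /dev/sdX --force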
Sinan
On Wed 23 Jul 2025 at 14:10, GLE, Vivien wrote:
> Hi,
>
>
This is not recommended, it is better to fix the underlying problem, but what
you can try is prefixing your commands with: PYTHONHTTPSVERIFY=0
You can also go to /etc/yum.repos.d/, find the Ceph repo files and add:
sslverify = 0
But again, this is not recommended. It is just an unsafe workaround that
might work for you.
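To illustrate what I mean (the command and file name below are just examples, yours may differ):

  # run a single command without Python HTTPS certificate verification
  PYTHONHTTPSVERIFY=0 <your-command>

  # and in /etc/yum.repos.d/ceph.repo, under each repo section:
  sslverify=0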
Op w