I have been doing some further testing.
My RGW pool is placed on spinning disks.
I created a 2nd RGW data pool, placed on flash disks.
Benchmarking on HDD pool:
Client 1 -> 1 RGW Node: 150 obj/s
Client 1-5 -> 1 RGW Node: 150 obj/s (30 obj/s each client)
Client 1 -> HAProxy -> 3 RGW Nodes: 150 obj/s
Cl
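The list is cut off here. Purely as an illustration of how per-client PUT throughput in obj/s could be measured (s3cmd, the bucket name, the 500 KB test object and the concurrency of 16 are assumptions, not necessarily what was actually used):
$ dd if=/dev/zero of=/tmp/obj500k bs=1K count=500
$ time seq 1 1000 | xargs -P 16 -I{} \
      s3cmd put /tmp/obj500k s3://bench-bucket/obj-{} --quiet
# objects per second ~= 1000 / elapsed wall-clock seconds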
D now. Index doesn't use all that much
data, but benefits from a generous pg_num and multiple OSDs so that it
isn't bottlenecked.
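For reference, checking and raising pg_num on the index pool is a one-liner each; the pool name below assumes the default zone's naming and 128 is only an example value:
$ ceph osd pool get default.rgw.buckets.index pg_num
$ ceph osd pool set default.rgw.buckets.index pg_num 128   # pgp_num follows automatically on recent releases
$ ceph osd pool autoscale-status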
On Jun 13, 2024, at 15:13, Sinan Polat wrote:
500K object size
On 13 Jun 2024, at 21:11, Anthony D'Atri wrote the following:
How large are the objects you tested with?
I created a new placement target/pool. I don't have the exact commands
anymore, but something similar to:
---
$ radosgw-admin zonegroup placement add \
      --rgw-zonegroup default \
      --placement-id temporary
$ radosgw-admin zone placement add \
      --rgw-zone default \
      --placement
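The snippet is cut off above; for completeness, the zone-side command could look roughly like the following, the pool names being placeholders rather than the ones actually used:
$ radosgw-admin zone placement add \
      --rgw-zone default \
      --placement-id temporary \
      --data-pool default.rgw.temporary.data \
      --index-pool default.rgw.temporary.index \
      --data-extra-pool default.rgw.temporary.non-ec
$ radosgw-admin period update --commit   # only if a realm/period is in use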
I recreated the placement target and was able to remove the bucket, so
that's fixed.
Since the bucket is removed, I have deleted the placement target.
When I restart RGW, I am getting the following during startup:
debug 2024-08-27T13:34:55.850+ 7f25d9b0f280 0 WARNING: This zone
does not con
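Since the log line is cut off, this is only an assumption, but a first step could be to compare the placement entries the zone and the zonegroup still carry and remove any leftover reference to the deleted target:
$ radosgw-admin zonegroup placement list --rgw-zonegroup default
$ radosgw-admin zone placement list --rgw-zone default
$ radosgw-admin zonegroup placement rm --rgw-zonegroup default --placement-id temporary
$ radosgw-admin period update --commit   # if a realm/period is in use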
should not be able to create new buckets or delete buckets.
One approach could be to limit max_buckets to 1 so the user cannot
create new buckets, but the user will still have access to other buckets
and will still be able to delete them.
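As a sketch of that max_buckets approach (the user name is a placeholder):
$ radosgw-admin user modify --uid=<user> --max-buckets=1
$ radosgw-admin user info --uid=<user> | grep max_buckets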
Any advice here? Thanks!
Sinan
I want to achieve the following:
- Create a user
- Create 2 subusers
- Create 2 buckets
- Apply a policy for each bucket
- A subuser should only have access to its own bucket
Problem:
Getting a 403 AccessDenied with subuser credentials when uploading
files.
I did the following:
radosgw-ad
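The commands above are cut off. Purely as an illustration of the kind of setup described (all names here are hypothetical, and the subuser principal format follows the Ceph bucket-policy documentation), something along these lines:
$ radosgw-admin user create --uid=appuser --display-name="App User"
$ radosgw-admin subuser create --uid=appuser --subuser=appuser:sub1 \
      --access=full --key-type=s3 --gen-access-key --gen-secret
$ radosgw-admin subuser create --uid=appuser --subuser=appuser:sub2 \
      --access=full --key-type=s3 --gen-access-key --gen-secret
$ cat > bucket1-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/appuser:sub1"]},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket1/*"]
  }]
}
EOF
$ s3cmd setpolicy bucket1-policy.json s3://bucket1   # run with the parent user's credentials; repeat analogously for bucket2/sub2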
Hi all,
My Ceph setup:
- 12 OSD nodes, 4 OSD nodes per rack. Replication of 3, 1 replica per
rack.
- 20 spinning SAS disks per node.
- Some nodes have 256GB RAM, some nodes 128GB.
- CPU varies between Intel E5-2650 and Intel Gold 5317.
- Each node has 10Gbit/s network.
Using rados bench I am g
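The sentence is cut off; for context, a typical rados bench run against a scratch pool looks like this (pool name, runtime, object size and thread count are placeholders):
$ rados bench -p bench-pool 60 write -b 512000 -t 16 --no-cleanup
$ rados bench -p bench-pool 60 seq -t 16
$ rados -p bench-pool cleanup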
On 2024-06-10 15:20, Anthony D'Atri wrote:
Hi all,
My Ceph setup:
- 12 OSD nodes, 4 OSD nodes per rack. Replication of 3, 1 replica per
rack.
- 20 spinning SAS disks per node.
Don't use legacy HDDs if you care about performance.
You are right here, but we use Ceph mainly for RBD. It performs 'good enough' for our RBD load.
On 2024-06-10 17:42, Anthony D'Atri wrote:
- 20 spinning SAS disks per node.
Don't use legacy HDDs if you care about performance.
You are right here, but we use Ceph mainly for RBD. It performs 'good
enough' for our RBD load.
You use RBD for archival?
No, storage for (light-weight) virtual machines.
On 2024-06-10 21:37, Anthony D'Atri wrote:
You are right here, but we use Ceph mainly for RBD. It performs
'good enough' for our RBD load.
You use RBD for archival?
No, storage for (light-weight) virtual machines.
I'm surprised that it's enough; I've seen HDDs fail miserably in that
role.
On 2024-06-11 01:01, Anthony D'Atri wrote:
To be clear, you don't need more nodes. You can add RGWs to the ones
you already have. You have 12 OSD nodes - why not put an RGW on
each?
Might be an option; I just don't like the idea of hosting multiple
components on the same nodes. But I'll consider it.
message:
"Unable to find further optimization, or pool(s) pg_num is decreasing,
or distribution is already perfect"
I have about 280 OSDs:
24 OSDs per node, 4 nodes per rack, 3 racks in total. Replica = 3, 1
replica per rack.
My disk sizes differ fr
On 2024-11-27 17:53, Anthony D'Atri wrote:
Hi,
My Ceph cluster is out of balance. The number of PGs per OSD ranges
from about 50 up to 100. This is far from balanced.
Do you have multiple CRUSH roots or device classes? Are all OSDs the
same weight?
Yes, I have 2 CRUSH roots
So is the balancer working as expected, i.e. is it normal that it
cannot balance any further?
Any other suggestions here?
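One setting that is commonly pointed at when the balancer reports "Unable to find further optimization" is the upmap balancer's maximum deviation (default 5 PGs per OSD); a minimal sketch, assuming upmap mode is in use:
$ ceph balancer status
$ ceph config set mgr mgr/balancer/upmap_max_deviation 1
$ ceph osd df          # check the PGS column per OSD afterwards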
On 2024-11-27 18:05, Anthony D'Atri wrote:
In your situation the JJ Balancer might help.
On 2024-11-27 17:53, Anthony D'Atri wrote:
Hi,
My Ceph cluster is out-of-balance
500K object size
> On 13 Jun 2024, at 21:11, Anthony D'Atri wrote the following:
>
> How large are the objects you tested with?
>
>> On Jun 13, 2024, at 14:46, si...@turka.nl wrote:
>>
>> I have been doing some further testing.
>>
>> My RGW pool is placed on spinning disks.
>> I crea
I don’t have much experience with the dashboard.
Can you try radosgw-admin bucket rm and pass --bypass-gc --purge-objects?
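For reference, the full invocation would be along these lines (the bucket name is a placeholder):
$ radosgw-admin bucket rm --bucket=<bucket-name> --bypass-gc --purge-objects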
> On 18 Jun 2024, at 17:44, Simon Oosthoek wrote the following:
>
> Hi
>
> when deleting an S3 bucket, the operation took longer than the time-out for
> the dashboard
Are the weights correctly set? So 1.6 for a 1.6TB disk and 1.0 for 1TB disks
and so on.
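A quick way to check and correct this (osd.42 and the 1.6 value below are only examples):
$ ceph osd df tree                      # compare each OSD's CRUSH WEIGHT against its disk size
$ ceph osd crush reweight osd.42 1.6    # e.g. set a 1.6 TB OSD back to weight 1.6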
> On 19 Jun 2024, at 08:32, Jan Marquardt wrote the following:
>
>
>> Our Ceph cluster uses 260 OSDs.
>> The highest OSD usage is 87%, but the lowest is under 40%.
>> We consider low
Hello,
I am currently managing a Ceph cluster that consists of 3 racks, each with
4 OSD nodes. Each node contains 24 OSDs. I plan to add three new nodes, one
to each rack, to help alleviate the high OSD utilization.
The current highest OSD utilization is 85%. I am concerned about the
possibility
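A common way to keep the resulting data movement under control while the new nodes are added is to throttle rebalancing; a sketch, with illustrative values:
$ ceph osd set norebalance              # pause rebalancing while the new OSDs are being created
$ ceph config set osd osd_max_backfills 1
$ ceph config set osd osd_recovery_max_active 1
$ ceph osd unset norebalance            # then let backfill proceed at the throttled rate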