On Tue, 6 Oct 2020 at 08:37, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote:
>
> Hi,
> Has anybody tried Consul as a load balancer?
> Any experience?
For rgw, load balancing is quite simple, and I guess almost any LB would
work.
The only major thing we have hit is that for AWS4 auth, y
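A common gotcha in this area, and possibly the one being described above: the
AWS4 (SigV4) signature covers the Host header, so a load balancer that rewrites
it on the way to RGW will break authentication. A rough Python sketch of the
published SigV4 signing-key derivation, just to show where Host enters the
picture (endpoint and date are made-up examples, not anyone's real setup):

import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date_yyyymmdd: str,
                      region: str, service: str = "s3") -> bytes:
    # AWS SigV4 key derivation, per AWS's published algorithm:
    # kSecret -> kDate -> kRegion -> kService -> kSigning
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_yyyymmdd)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# The canonical request that gets signed includes the Host header, e.g.
#   GET\n/\n\nhost:rgw.example.com\nx-amz-date:20201006T083700Z\n...
# so the proxy must pass Host through (or clients must sign the LB's name).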
Mark, why do you use io_depth=32 in the fio parameters?
Is there any reason for not choosing 16 or 64?
Thanks in advance!
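While waiting for Mark: the usual way to settle on a queue depth is an
empirical sweep, since IOPS stop scaling once the device (or cluster) is
saturated and deeper queues only add latency. A rough sketch of such a sweep
(the device path, job shape, and runtime are placeholders, not Mark's actual
CBT settings):

import json
import subprocess

# Run the same 4k random-read job at several queue depths and compare IOPS.
# /dev/rbd0 stands in for a previously mapped RBD device.
for qd in (16, 32, 64):
    result = subprocess.run(
        ["fio", "--name=qd-sweep", "--ioengine=libaio", "--direct=1",
         "--rw=randread", "--bs=4k", f"--iodepth={qd}",
         "--filename=/dev/rbd0", "--runtime=30", "--time_based",
         "--output-format=json"],
        capture_output=True, text=True, check=True,
    )
    job = json.loads(result.stdout)["jobs"][0]
    print(f"iodepth={qd}: {job['read']['iops']:.0f} read IOPS")

Wherever the curve flattens between two depths is roughly where extra
outstanding I/O buys nothing; 32 often lands near that knee for a single job.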
>> I don't have super recent results, but we do have some test data from last
>> year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:
>>
>>
>> https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing
Hi Anthony,
Not my area of expertise, I'm afraid. I did most of this testing when I
was adding the "client endpoints" support to CBT so we could use the
same fio benchmark code across the whole range of Ceph block/fs
clients. One of the RBD guys might be able to answer your questions,
though.
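To make the "same fio job, different Ceph client" idea concrete, here is a
rough sketch (not CBT's actual code; pool, image, and device names are
placeholders):

import subprocess

# One job description, driven through two different Ceph block clients.
common = ["fio", "--name=client-endpoints", "--rw=randwrite", "--bs=4k",
          "--iodepth=32", "--runtime=30", "--time_based"]

# librbd path: fio's built-in rbd engine talks to the cluster directly.
subprocess.run(common + ["--ioengine=rbd", "--clientname=admin",
                         "--pool=rbd", "--rbdname=bench"], check=True)

# kernel rbd path: the same job against a device mapped beforehand
# with `rbd map rbd/bench`.
subprocess.run(common + ["--ioengine=libaio", "--direct=1",
                         "--filename=/dev/rbd0"], check=True)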
Thanks, Mark.
I’m interested as well, wanting to provide block service to bare-metal hosts;
iSCSI seems to be the classic way to do that.
I know there’s some work on MS Windows RBD code, but I’m uncertain if it’s
production-worthy, if RBD namespaces suffice for tenant isolation, and are
I don't have super recent results, but we do have some test data from
last year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:
https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing
Generally speaking, going through the tcmu layer was slower.
All;
I've finally gotten around to setting up iSCSI gateways on my primary
production cluster, and performance is terrible.
We're talking 1/4 to 1/3 the performance of our current solution.
I see no evidence of network congestion on any involved network link. I see no
evidence of CPU or memory being a problem o
Hello, we have integrated Ceph's RGW with LDAP and have authenticated users
using the mail attribute successfully. We would like to shift to SSO and are
evaluating the new OIDC feature in Ceph together with Dex (an IdP with an LDAP
connector) as an upstream IdP.
We are trying to understand the flow o
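As far as we understand it so far, the client side of that flow reduces to
exchanging the ID token issued by Dex for temporary S3 credentials via RGW's
STS AssumeRoleWithWebIdentity. A rough boto3 sketch (endpoint, role ARN, keys,
and the token are all placeholders):

import boto3

RGW_ENDPOINT = "http://rgw.example.com:8000"
ROLE_ARN = "arn:aws:iam:::role/S3Access"      # role created in RGW beforehand
oidc_token = "<ID token obtained from Dex>"   # issued after the user logs in

sts = boto3.client("sts", endpoint_url=RGW_ENDPOINT, region_name="default",
                   aws_access_key_id="<sts user key>",
                   aws_secret_access_key="<sts user secret>")

resp = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="dex-session",
    WebIdentityToken=oidc_token,
    DurationSeconds=3600,
)

# Use the temporary credentials for ordinary S3 calls against RGW.
creds = resp["Credentials"]
s3 = boto3.client("s3", endpoint_url=RGW_ENDPOINT,
                  aws_access_key_id=creds["AccessKeyId"],
                  aws_secret_access_key=creds["SecretAccessKey"],
                  aws_session_token=creds["SessionToken"])
print(s3.list_buckets())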
Hello,
The MDS process crashed suddenly. After trying to restart it, it failed to
replay the journal and started restarting continually.
Just to summarize, here is what happened:
1/ The cluster is up and running with 3 nodes (mon and mds on the same nodes)
and 3 OSDs.
2/ After a few days, 2 (standby
Hi Casey,
Thank you for your reply. Could you give me more details?
On Wed, Sep 30, 2020 at 3:56 PM Casey Bodley wrote:
> On Wed, Sep 30, 2020 at 5:20 AM Eugeniy Khvastunov
> wrote:
> >
> > Hi ceph-users,
> >
> > It looks like we hit a broken S3 multipart upload in Ceph 12.2.11 Luminous
> > (MCP
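For anyone hitting similar symptoms, a first diagnostic step is usually to
enumerate the incomplete multipart uploads on the affected bucket and, where
appropriate, abort them so RGW can clean up the parts. A rough boto3 sketch
(endpoint, credentials, and bucket name are placeholders):

import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8000",
                  aws_access_key_id="<access key>",
                  aws_secret_access_key="<secret key>")

bucket = "affected-bucket"
resp = s3.list_multipart_uploads(Bucket=bucket)
for upload in resp.get("Uploads", []):
    print(upload["Key"], upload["UploadId"], upload["Initiated"])
    # Uncomment to abort a stuck upload:
    # s3.abort_multipart_upload(Bucket=bucket, Key=upload["Key"],
    #                           UploadId=upload["UploadId"])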