Hello all,
We have an EC (4+2) pool for RGW data, with HDDs plus SSDs for WAL/DB. This
pool spans 9 servers, each with 12 x 16 TB disks. About 10 days ago we lost a
server and we've removed its OSDs from the cluster. Ceph has started to
remap and backfill as expected, but the process has been getting s
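A minimal sketch of commands one might use to check where the backfill stands and, cautiously, to raise recovery throughput; the option values are illustrative only, and on Quincy and later the mClock scheduler may cap them unless overridden:

  ceph status                                      # overall recovery/backfill progress
  ceph pg dump pgs_brief | grep -i backfill        # PGs still waiting for or doing backfill
  ceph config set osd osd_max_backfills 2          # concurrent backfills per OSD (example value)
  ceph config set osd osd_recovery_max_active 4    # concurrent recovery ops per OSD (example value)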
Hi Chris,
This looks useful. Note for this thread: this appears to be using the
zipper dbstore backend, which is coming in Reef. We think of dbstore
mostly as the zipper reference driver, but it can potentially be useful as
a standalone setup as well.
But there's now a prototype of a POSIX fi
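For anyone experimenting with dbstore as a standalone gateway: once an RGW endpoint is up and you have S3 credentials for it, a quick smoke test with the AWS CLI could look like the sketch below. The endpoint URL and bucket name are assumptions for illustration, not details from this thread.

  aws --endpoint-url http://localhost:8000 s3 mb s3://zipper-test        # create a test bucket
  echo "hello" > /tmp/hello.txt
  aws --endpoint-url http://localhost:8000 s3 cp /tmp/hello.txt s3://zipper-test/
  aws --endpoint-url http://localhost:8000 s3 ls s3://zipper-test        # confirm the object landed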
Is this by chance corrected in 17.2.5 already? If so, how can we pivot
mid-upgrade to 17.2.5?
Thanks,
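Mechanically, on a cephadm-managed cluster, retargeting an in-flight upgrade would look roughly like the sketch below; whether doing so mid-upgrade is advisable is exactly the open question, and the target version is just the one asked about.

  ceph orch upgrade status
  ceph orch upgrade stop
  ceph orch upgrade start --ceph-version 17.2.5
  ceph orch upgrade status    # watch progress against the new target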
On Mon, Mar 20, 2023 at 6:14 PM Marco Pizzolo wrote:
> Hello Everyone,
>
> We made the mistake of trying to patch to 16.2.11 from 16.2.10, which has
> been stable, as we felt that 16.2.11 had be
Hello Everyone,
We made the mistake of trying to patch to 16.2.11 from 16.2.10, which has
been stable, as we felt that 16.2.11 had been out for a while already.
As luck would have it, we are having failure after failure with OSDs not
upgrading successfully, and have 355 more OSDs to go.
I'm pretty
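A hedged sketch of how one might see which daemons are stuck on the old version and whether the orchestrator is erroring (cephadm assumed; output will vary):

  ceph versions                      # count of daemons per running version
  ceph orch ps --daemon-type osd     # per-OSD image, version, and status
  ceph orch upgrade status           # is the upgrade paused or reporting an error?
  ceph health detail                 # any upgrade or daemon warnings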
On 3/20/23 12:02, Frank Schilder wrote:
Hi Marc,
I'm also interested in an S3 service that uses a file system as a back-end. I
looked at the documentation of https://github.com/aquarist-labs/s3gw and have
to say that it doesn't make much sense to me. I don't see this kind of gateway
anywhere
https://github.com/ceph/ceph/pull/47702#issuecomment-1423732280
This issue has been backported to Pacific; when will v16.2.12 be released?
Hi,
I'm facing a strange issue and Google doesn't seem to help me.
I have a couple of clusters on Octopus v15.2.17, recently upgraded from
15.2.13.
I had an RBD mirror service working correctly between the two clusters. I
then updated, and after some days in which all was OK, I've come to the
situatio
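The usual starting points for narrowing this kind of thing down are the mirroring status commands on both sides; pool, image, and daemon names below are placeholders:

  rbd mirror pool info <pool>                  # peers and mirroring mode
  rbd mirror pool status <pool> --verbose      # per-image state and description
  rbd mirror image status <pool>/<image>       # detail for a single image
  systemctl status ceph-rbd-mirror@<id>        # is the rbd-mirror daemon actually running?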
Hi Marc,
I'm also interested in an S3 service that uses a file system as a back-end. I
looked at the documentation of https://github.com/aquarist-labs/s3gw and have
to say that it doesn't make much sense to me. I don't see this kind of gateway
anywhere there. What I see is a build of a rados ga
On Mon, 20 March 2023 at 09:45, Marc wrote:
>
> > While
> > reading, we barely hit the mark of 100MB/s; we would expect at least
> > something similar to the write speed. These tests are being performed in
> > a
> > pool with a replication factor of 3.
> >
> >
>
> You don't even describe how you tes
Has anyone used AlmaLinux 9 to install Ceph? Have you encountered problems?
Other tips on this installation are also welcome.
I have installed Ceph on AlmaLinux 9.1 (both Ceph and later Ceph/Rook)
on a three-node VM cluster and then a three-node bare-metal cluster
(with 4 OSDs each) without
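For reference, a rough sketch of the cephadm-based route on an EL9 host, following the upstream Quincy instructions of the time; the monitor IP is a placeholder:

  dnf install -y podman lvm2 chrony
  curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
  chmod +x cephadm
  ./cephadm add-repo --release quincy
  ./cephadm install
  cephadm bootstrap --mon-ip 192.0.2.10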
Hello,
Has anyone used AlmaLinux 9 to install Ceph? Have you encountered problems?
Other tips on this installation are also welcome.
Regards,
Gerrit
> While
> reading, we barely hit the mark of 100MB/s; we would expect at least
> something similar to the write speed. These tests are being performed in
> a
> pool with a replication factor of 3.
>
>
You don't even describe how you test? And why would you expect something like
the write speed,
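For what it's worth, rados bench gives directly comparable write and read numbers at the RADOS level; the pool name and duration below are just examples, and --no-cleanup keeps the written objects around so the read tests have something to read:

  rados bench -p testpool 60 write --no-cleanup
  rados bench -p testpool 60 seq     # sequential reads of the objects written above
  rados bench -p testpool 60 rand    # random reads
  rados -p testpool cleanup          # remove the benchmark objects afterwards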