Hi Pablo,
> We are willing to work with a Ceph Consultant Specialist, because the data
> at stake is very critical, so if you're interested please let me know
> off-list, to discuss the details.
I totally understand that you want to communicate with potential consultants
off-list,
but I, and ma
Here’s a test after de-crufting held messages. Grok the fullness.
— aad
Hi all, I have configured a ceph cluster to be used as an object store with a
combination of SSDs and HDDs, where the block.db is stored on LVM on SSDs and
the OSD block is stored on HDDs.
I have set up one SSD for storing metadata (RocksDB), and five HDDs are
associated with it to store the OS
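For anyone building a similar layout, here is a minimal sketch, assuming a cephadm-managed cluster, that generates an OSD service spec placing OSD data on rotational drives (HDDs) and block.db on non-rotational ones (SSDs). The service_id and host pattern are illustrative placeholders, not taken from the post above.

# Sketch: emit a cephadm OSD service spec with data on HDDs and block.db on SSDs.
# Assumptions: cephadm-managed cluster, PyYAML installed; names are placeholders.
import yaml

osd_spec = {
    "service_type": "osd",
    "service_id": "hdd_data_ssd_db",        # hypothetical name
    "placement": {"host_pattern": "*"},     # assumption: apply to all hosts
    "spec": {
        "data_devices": {"rotational": 1},  # HDDs hold the OSD block
        "db_devices": {"rotational": 0},    # SSDs hold block.db (RocksDB)
    },
}

print(yaml.safe_dump(osd_spec, sort_keys=False))

Saving that output to a file and running "ceph orch apply -i osd_spec.yaml" would let the orchestrator carve each SSD into block.db LVs shared by the HDD OSDs on that host.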
Hi community!
Recently we had a major outage in production, and after running the
automated Ceph recovery some PGs remain in the "incomplete" state and IO
operations are blocked.
Searching the documentation, forums, and this mailing list archive, I
haven't yet found whether this means this data is recover
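A minimal sketch, assuming shell access on a node with an admin keyring, that lists the PGs stuck in "incomplete" and queries the first one; the recovery_state section of the query output is usually the first place to look when judging whether the data is recoverable (JSON field names can differ slightly between releases).

# Sketch: list incomplete PGs and peek at the recovery state of one of them.
# Assumes the "ceph" CLI and an admin keyring are available on this node.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

ls = ceph_json("pg", "ls", "incomplete")
pg_stats = ls["pg_stats"] if isinstance(ls, dict) else ls  # layout differs by release
pgids = [pg["pgid"] for pg in pg_stats]
print("incomplete PGs:", pgids)

if pgids:
    query = ceph_json("pg", pgids[0], "query")
    for state in query.get("recovery_state", []):
        print(pgids[0], state.get("name"))

This only inspects state; actions such as marking an OSD lost or accepting data loss on a PG are separate, destructive decisions.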
On Mon, Jun 24, 2024 at 5:22 PM Dietmar Rieder wrote:
>
> (resending this; it seems the original message didn't make it through amid
> all the SPAM recently sent to the list; my apologies if it doubles at
> some point)
>
> Hi List,
>
> we are still struggling to get our cephfs back onli
(resending this; it seems the original message didn't make it through amid
all the SPAM recently sent to the list; my apologies if it doubles at
some point)
Hi List,
we are still struggling to get our cephfs back online again; this is an update
to inform you of what we have done so far, and w
I’m not sure if I have access but I can try.
> On Jun 24, 2024, at 4:37 PM, Kai Stian Olstad wrote:
>
> On 24.06.2024 19:15, Anthony D'Atri wrote:
>> * Subscription is now moderated
>> * The three worst spammers (you know who they are) have been removed
>> * I’ve deleted tens of thousands of cru
On 24.06.2024 19:15, Anthony D'Atri wrote:
* Subscription is now moderated
* The three worst spammers (you know who they are) have been removed
* I’ve deleted tens of thousands of crufty mail messages from the queue
The list should work normally now. Working on the backlog of held
messages. 9
Hi Ceph users,
A recording of June's user + developer monthly meeting is now available.
Thank you to everyone who participated, asked questions, and shared
insights. Your feedback is crucial to the growth and health of the ceph
community!
Watch it here: https://youtu.be/7D9otll-kjA?feature=shared
On 24/06/2024 20:49, Matthew Vernon wrote:
On 19/06/2024 19:45, Adam King wrote:
I think this is at least partially a code bug in the rgw module. Where
...the code path seems to have a bunch of places it might raise an
exception; are those likely to result in some entry in a log-file? I've
On 19/06/2024 19:45, Adam King wrote:
I think this is at least partially a code bug in the rgw module. Where
...the code path seems to have a bunch of places it might raise an
exception; are those likely to result in some entry in a log-file? I've
not found anything, which is making working o
Thanks Anthony!
* Subscription is now moderated
* The three worst spammers (you know who they are) have been removed
* I’ve deleted tens of thousands of crufty mail messages from the queue
The list should work normally now. Working on the backlog of held messages.
99% are bogus, but I want to be careful wrt ba
They seem to use the same few email addresses and then make new ones. It
should be possible to block them once a day to at least cut down the volume
of emails, even if it doesn't block them completely?
Hi all,
I already sent an email about this this morning (in France), but
curiously I never received it (blocked by my own email server at
University Paris-Saclay?), although hundreds of spam messages, seemingly coming
from India, with sexual subjects, have arrived in my mailbox (even if
marked as
Hello,
We have been experiencing a serious issue with our CephFS backup cluster
running Quincy (version 17.2.7) on a RHEL8-derivative Linux distribution
(Alma 8.9, kernel 4.18.0-513.9.1), where the MDSes for our filesystem are
constantly in a "replay" or "replay(laggy)" state and keep crashing.
We h
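A minimal sketch, assuming the "ceph" CLI with an admin keyring, that prints the state of each MDS from "ceph fs dump" so you can watch whether an MDS ever leaves up:replay; the exact JSON layout is an assumption and may differ between releases.

# Sketch: print the state of each MDS from "ceph fs dump --format json".
# Assumes admin access; JSON field names may vary by Ceph release.
import json
import subprocess

dump = json.loads(subprocess.run(
    ["ceph", "fs", "dump", "--format", "json"],
    check=True, capture_output=True, text=True).stdout)

for fs in dump.get("filesystems", []):
    mdsmap = fs.get("mdsmap", {})
    for info in mdsmap.get("info", {}).values():
        print(fs.get("id"), info.get("name"), info.get("state"))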