Hello to everyone
When I use this command to see bucket usage:
radosgw-admin bucket stats --bucket=
It works only when the owner of the bucket is active.
How can I see the usage even when the owner is suspended?
Here are 2 examples, one with the owner active and the other one with the
owner suspended.
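For what it's worth, the only workaround I can think of is sketched below; the uid and bucket name are placeholders, not values from our cluster:

# temporarily re-enable the owner, read the stats, then suspend again
radosgw-admin user enable --uid=someuser
radosgw-admin bucket stats --bucket=somebucket
radosgw-admin user suspend --uid=someuser

Is there a way to get the stats directly, without toggling the owner like this?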
Hello to everyone
Our ceph cluster is healthy and everything seems to go well but we have a
lot of num_strays
ceph tell mds.0 perf dump | grep stray
"num_strays": 1990574,
"num_strays_delayed": 0,
"num_strays_enqueuing": 0,
"strays_created": 3,
"strays_enqu
I am using ceph pacific (16.2.5)
Does anyone have an idea about my issues?
Thanks again to everyone
All the best
Arnaud
On Tue, Mar 1, 2022 at 01:04, Arnaud M wrote:
> Hello to everyone
>
> Our ceph cluster is healthy and everything seems to go well but we have a
> lot o
of deleted files.
> You need to delete the snapshots, or "reintegrate" the hardlinks by
> recursively listing the relevant files.
>
> BTW, in pacific there isn't a big problem with accumulating lots of
> stray files. (Before pacific there was a default limit of 1M stra
Hello to everyone :)
Just some questions about filesystem scrubbing.
In this documentation it is said that scrubbing will help admins check the
consistency of the filesystem:
https://docs.ceph.com/en/latest/cephfs/scrub/
So my questions are:
Is filesystem scrubbing mandatory?
How often should I scrub the whole filesystem?
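For reference, the linked documentation describes commands along these lines; a minimal sketch, assuming a filesystem named cephfs and rank 0:

# start a recursive scrub from the root, then check its progress
ceph tell mds.cephfs:0 scrub start / recursive
ceph tell mds.cephfs:0 scrub status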
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2NT55RUMD33KLGQCDZ74WINPPQ6WN6CW/
>
> And about the crash, it could be related to
> https://tracker.ceph.com/issues/51824
>
> Cheers, dan
>
>
> On Tue, Mar 1, 2022 at 11:30 AM Arnaud M
> wrote:
> >
> > Hello Dan
&
… Mar 2022 at 23:26, Arnaud M wrote:
> Hello to everyone :)
>
> Just some question about filesystem scrubbing
>
> In this documentation it is said that scrub will help admin check
> consistency of filesystem:
>
> https://docs.ceph.com/en/latest/cephfs/scrub/
>
&g
r 6, 2022 at 3:57 AM Arnaud M
> wrote:
>
>> Hello to everyone :)
>>
>> Just some question about filesystem scrubbing
>>
>> In this documentation it is said that scrub will help admin check
>> consistency of filesystem:
>>
>> https://docs.
Hello Linkriver
I might have an issue close to yours.
Can you tell us if your stray dirs are full?
What does this command output for you?
ceph tell mds.0 perf dump | grep strays
Do the values change over time?
All the best
Arnaud
On Wed, Mar 16, 2022 at 15:35, Linkriver Technology <
techno
Hello
Is swap enabled on your host? Is swap used?
For our cluster we tend to allocate enough RAM and disable swap.
Maybe the reboot of your host re-activated swap?
Try to disable swap and see if it helps.
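A minimal sketch of what I mean, assuming a typical Linux host (the fstab edit is one way to keep swap off across reboots):

# turn swap off now
swapoff -a
# comment out uncommented swap entries so a reboot does not re-enable them
sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab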
All the best
Arnaud
On Tue, Mar 29, 2022 at 23:41, David Orman wrote:
> We're defin
Hello
I will speak about cephfs because it is what I am working on.
Of course you can do some kind of rsync or rclone between two cephfs
clusters, but at petabyte scale it will be really slow and cost a lot!
There is another approach that we tested successfully (only in test, not in
prod).
We creat
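Just to illustrate the rsync approach mentioned above (and why it hurts at that scale), a minimal sketch; both mount points are placeholders:

# straight copy between two mounted cephfs trees; at petabyte scale
# the metadata walk alone can take a very long time
rsync -aHAX --numeric-ids /mnt/cephfs-src/ /mnt/cephfs-dst/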
Hello to everyone
I have looked on the internet but couldn't find an answer.
Do you know the maximum size of a ceph filesystem? Not the max size of a
single file, but the limit of the whole filesystem?
For example, a quick search on ZFS on Google outputs:
A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
> …hdfs which I
> know/worked with more than 50,000 HDDs without problems.
>
> On Mon, Jun 20, 2022 at 10:46 AM Arnaud M
> wrote:
> >
> > Hello to everyone
> >
> > I have looked on the internet but couldn't find an answer.
> > Do you know the maximum size of
…22 at 09:45, Arnaud M wrote:
>
> > A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
>
> How would a storage system look like in reality that could hold such an
> amount of data?
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
>
Hello to everyone
I have a ceph cluster currently serving cephfs.
The size of the ceph filesystem is around 1 PB.
1 active MDS and 1 standby-replay.
I do not have a lot of cephfs clients for now (5), but it may increase to 20
or 30.
Here is some output:
Rank | State | Daemon
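In case it is useful, output like the table above can be obtained with the command below; that this is the exact command used here is my assumption:

# show per-rank MDS state for the filesystem
ceph fs status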