Hi Torkil,
On 12/16/21 08:01, Torkil Svensgaard wrote:
> I set up one peer with rx-tx, and it seems to replicate as it should,
> but the site-a status looks a little odd. Why down+unknown and status
> not found? Is that because of the rx-tx peer with only one direction active?
If you don't have any mirror from s
On Mon, Dec 13, 2021 at 06:18:55 PM, Zoran Bošnjak wrote:
> I am using "ubuntu 20.04" and I am trying to install "ceph pacific" version
> with "cephadm".
>
> Are there any instructions available about using "cephadm bootstrap" and
> other related commands in an airgap environment (that is:
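A rough sketch of the usual approach, assuming an internal registry (registry.example.local:5000, the credentials, and the monitor IP below are placeholders) that already mirrors the quay.io/ceph images; the --registry-* flags exist in Pacific's cephadm bootstrap, but treat this as untested:
cephadm --image registry.example.local:5000/ceph/ceph:v16 bootstrap \
    --mon-ip 10.0.0.1 \
    --registry-url registry.example.local:5000 \
    --registry-username mirroruser \
    --registry-password secret
# keep later deployments pulling from the mirror as well
ceph config set global container_image registry.example.local:5000/ceph/ceph:v16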
Hi all,
CephFS quotas work really well for me.
A cool feature is that if one mounts a folder that has a quota set, the
mountpoint shows up as a partition whose size is the quota and whose used
space is the current consumption (e.g. with the df command). Nice!
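For anyone who has not tried this, a minimal sketch of setting and observing a quota (the paths, mount options, and the 100 GiB value are just examples; ceph.quota.max_bytes is the documented vxattr):
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects   # 100 GiB quota on the directory
mount -t ceph mon1:/projects /mnt/projects -o name=myuser,secretfile=/etc/ceph/myuser.secret
df -h /mnt/projects   # Size shows the quota, Used shows the directory's current consumption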
Now, I want to access the usage information of folders with quotas
On 12/16/21 12:45 PM, Jesper Lykkegaard Karlsen wrote:
> Hi all,
>
> CephFS quotas work really well for me.
> A cool feature is that if one mounts a folder that has a quota set, the
> mountpoint shows up as a partition whose size is the quota and whose used
> space is the current consumption (e.g. with the df command). Nice!
Hi Jesper,
On 16.12.21 12:45, Jesper Lykkegaard Karlsen wrote:
Now, I want to access the usage information of folders with quotas from the root
level of the CephFS.
I have failed to find this information through getfattr commands, only quota
limits are shown here, and du-command on individual fold
Thanks everybody,
That was a quick answer.
getfattr -n ceph.dir.rbytes $DIR
Was the answer that worked for me. So getfattr was the solution after all.
Is there some way I can display all attributes without knowing them
beforehand?
I have tried:
getfattr -d -m 'ceph.*' $DIR
which gives me
Just tested:
getfattr -n ceph.dir.rbytes $DIR
Works on CentOS 7, but not on Ubuntu 18.04 either.
Weird?
Best,
Jesper
--
Jesper Lykkegaard Karlsen
Scientific Computing
Centre for Structural Biology
Department of Molecular Biology and Genetics
Aarhus University
Gustav Wied
Woops, wrong copy/pasta:
getfattr -n ceph.dir.rbytes $DIR
works on all distributions I have tested.
It is:
getfattr -d -m 'ceph.*' $DIR
that does not work on Rocky Linux 8 and Ubuntu 18.04, but works on CentOS 7.
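One possible workaround, as a sketch only (DIR is a placeholder; the attribute names are the documented CephFS vxattrs): since the virtual xattrs are not returned by a plain listing there, query the known names one by one:
DIR=/mnt/cephfs/some/dir   # example path
for attr in ceph.dir.entries ceph.dir.files ceph.dir.subdirs \
            ceph.dir.rentries ceph.dir.rfiles ceph.dir.rsubdirs ceph.dir.rbytes \
            ceph.quota.max_bytes ceph.quota.max_files; do
    printf '%-24s %s\n' "$attr" "$(getfattr --only-values -n "$attr" "$DIR" 2>/dev/null)"
done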
Best,
Jesper
--
Jesper Lykkegaard Karlsen
Scientific Comput
Hi Florian, hi Guillaume
On 16.12.21 at 14:18, Florian Haas wrote:
> Hello everyone,
>
> my colleagues and I just ran into an interesting situation updating
> our Ceph training course. That course's labs cover deploying a
> Nautilus cluster with ceph-ansible, upgrading it to Octopus (also with
>
On 16.12.21 14:18, Florian Haas wrote:
Or is this simply a
bug somewhere in the orchestrator that would need fixing?
You need to check if this cephadm version still tries to pull from the
docker hub or if it already pulls from quay.io.
The latest images are only available on quay.io.
You nee
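Checking that would look roughly like this (the cephadm path may differ per distribution; treat as a sketch):
grep -m1 'DEFAULT_IMAGE' /usr/sbin/cephadm   # fallback image baked into the cephadm script itself
ceph config dump | grep container_image      # what the cluster is currently configured to deploy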
On 16.12.21 14:46, Robert Sander wrote:
On 16.12.21 14:18, Florian Haas wrote:
Or is this simply a
bug somewhere in the orchestrator that would need fixing?
You need to check if this cephadm version still tries to pull from the
docker hub or if it already pulls from quay.io.
You can check it
On 16.12.21 14:58, Florian Haas wrote:
Yes, we are aware that that's how you specify the image *on upgrade.*
The question was about how to avoid the silent *downgrade* of RGWs and
MDSs during ceph orch apply, so that a subsequent point-release upgrade
(within Octopus) for those services is no lo
On 16.12.21 15:08, Florian Haas wrote:
To clarify: I'd like to understand how to tell "ceph orch apply" to tell
Cephadm to use a specific version, while staying on Octopus.
ceph orch apply takes the container image from the "container_image"
config setting. And this can only be changed by "ce
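Presumably along these lines (the image tag is a placeholder, untested):
ceph config set global container_image quay.io/ceph/ceph:v15.2.x   # pin the Octopus image you want
ceph config get mon container_image                                # verify what cephadm will deploy
# subsequent "ceph orch apply ..." runs should then use this image rather than the built-in default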
Hi,
We still suffer from a growing and growing mon store. Now we are at 80 GB
and it just does not stop.
Any pointer would be appreciated. I opened bug:
https://tracker.ceph.com/issues/53485
Since we did not suffer this problem in the past, was some logging added
to mons in the recent releases
Hi Daniel,
have you tried to disable this via the "clog_to_monitors" parameter? Please
be aware of https://tracker.ceph.com/issues/48946 though.
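For reference, disabling it would look something like this (a sketch, not verified against the affected cluster):
ceph config set global clog_to_monitors false       # stop daemons from sending the cluster log to the mons
ceph tell osd.* config set clog_to_monitors false   # also apply to already-running OSDs without a restart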
Also curious what's your Ceph version?
Thanks,
Igor
On 12/16/2021 6:39 PM, Daniel Poelzleithner wrote:
Hi,
We still suffer from a growing and grow
Hi all,
We had a situation where one drive failed at the same time as a node. This
caused files in CephFS to not be readable and 'ceph status' to display
the error message "pgs not active".
Our cluster is either 3 replicas or equivalent EC (k2m2). Eventually
all the PGs became active and not
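For anyone debugging the same situation, the usual way to see which PGs are stuck and why is something like this (the pgid is a placeholder):
ceph health detail
ceph pg dump_stuck inactive
ceph pg 2.1f query   # example pgid; look at the recovery_state section for what it is waiting on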
On 16.12.21 at 16:56, Florian Haas wrote:
So for a 15.2.5 cluster that's being converted to Cephadm, Cephadm
should use quay.io from the get-go. And it does, but only for "cephadm
adopt".
It does because you specify the container image on the command line in
the adoption process.
When you
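i.e. roughly this, with the image tag and daemon name as placeholders:
cephadm --image quay.io/ceph/ceph:v15.2.x adopt --style legacy --name mon.node1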
Hi,
I think I found the culprit:
I changed the paxos debug level to 20 and found this in the mon store log:
2021-12-16T18:35:07.814+0100 7fec66e79700 20
mon.server6@0(leader).paxosservice(logm 5064..29067286) maybe_trim
5064~29067236
2021-12-16T18:35:07.814+0100 7fec66e79700 10
mon.serve
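For reference, bumping and later reverting that level on a running mon looks roughly like this (mon name taken from the log above, commands untested here):
ceph tell mon.server6 config set debug_paxos 20/20
# ...capture the log...
ceph tell mon.server6 config set debug_paxos 1/5   # back to the default level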
To answer my own question.
It seems Frank Schilder asked a similar question two years ago:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/6ENI42ZMHTTP2OONBRD7FDP7LQBC4P2E/
listxattr() was apparently removed and not much has happened since then, it seems.
Anyway, I just made my own
Brilliant, thanks Jean-François
Best,
Jesper
--
Jesper Lykkegaard Karlsen
Scientific Computing
Centre for Structural Biology
Department of Molecular Biology and Genetics
Aarhus University
Gustav Wieds Vej 10
8000 Aarhus C
E-mail: je...@mbg.au.dk
Tlf:+45 50906203
Not to spam, but to make the output prettier, one can also separate the number
from the byte-size prefix.
numfmt --to=iec --suffix=B --padding=7 \
    "$(getfattr --only-values -n ceph.dir.rbytes "$1" 2>/dev/null)" \
    | sed -r 's/([0-9])([a-zA-Z])/\1 \2/g; s/([a-zA-Z])([0-9])/\1 \2/g'
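Usage would be something like this, assuming the snippet is saved as a small script (the name and output are only illustrative):
./cephdu.sh /mnt/cephfs/projects   # prints e.g. "  1.4 TB"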
//Jesper
--
Hi,
What is your Ceph version?
Maybe you are affected by [1]? The fix for this is to restart the mon leader,
and after 5 minutes the mon store will be trimmed.
[1] https://tracker.ceph.com/issues/48212
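In practice that is something like the following (the mon name is a placeholder; pick whichever restart applies to your deployment):
ceph quorum_status -f json-pretty | grep quorum_leader_name   # find the current leader
ceph orch daemon restart mon.<leader>                         # cephadm-managed cluster
systemctl restart ceph-mon@<leader>                           # or, on a non-cephadm deployment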
Thanks,
k
> On 16 Dec 2021, at 18:39, Daniel Poelzleithner wrote:
>
> Hi,
>
> We still suffer