Dear Ceph community,
We are in the curious situation that typical orchestrator queries
provide wrong or outdated information about different services.
E.g. `ceph orch ls` reports wrong numbers of active services,
or `ceph orch ps` reports many OSDs as "starting" and many services with
an old
On Wed, Jun 29, 2022 at 1:06 PM Frank Schilder wrote:
> Hi,
>
> did you wait for PG creation and peering to finish after setting pg_num
> and pgp_num? They should be right on the value you set and not lower.
>
Yes, the only thing going on was backfill. It's still just slowly expanding
the pg and pgp nums.
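(A hedged aside for the archives: a quick way to compare what you set against what
the cluster has actually applied so far; <pool> is a placeholder and the
pg_num_target/pgp_num_target fields assume Nautilus or later.)

    ceph osd pool get <pool> pg_num
    ceph osd pool get <pool> pgp_num
    # the *_target values show where the gradual adjustment is heading
    ceph osd pool ls detail | grep -e pg_num -e pgp_num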
Hi!
Just try to Google data_digest_mismatch_oi.
In the old mailing list archives there are a couple of threads with the same problem.
k
Sent from my iPhone
> On 29 Jun 2022, at 13:54, Lennart van Gijtenbeek | Routz
> wrote:
>
> Hello Ceph community,
>
>
> I hope you could help me with an issue we are experiencing
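(For anyone who lands on this thread via a search instead: a rough, hedged sketch of
the usual first steps for a data_digest_mismatch_oi scrub error; the PG id is a
placeholder and whether a repair is appropriate depends on what the inspection shows.)

    ceph health detail                                 # lists the inconsistent PG(s)
    rados list-inconsistent-obj <pgid> --format=json-pretty
    ceph pg repair <pgid>                              # only once you know which copy is bad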
Hi
You can deploy 3+2 or 3+5 mons, not 3+1
k
Sent from my iPhone
> On 28 Jun 2022, at 21:39, Vladimir Brik
> wrote:
>
> Hello
>
> I have a ceph cluster with 3 mon servers that resides at a facility that
> experiences significant outages once or twice a year. Is it possible mons
> will not
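(A hedged illustration of checking quorum and growing the monitor count; the count
of 5 is only an example and assumes the cluster is managed by the cephadm orchestrator.)

    ceph mon stat
    ceph quorum_status --format json-pretty
    ceph orch apply mon 5      # 5 mons keep quorum through the loss of any 2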
Trying to resolve this, I first tried to pause the cephadm processes ('ceph config-key set
mgr/cephadm/pause true'), which did not lead anywhere except a loss of connectivity: how do you "resume"?
It does not exist anywhere in the documentation!
Actually, there are quite a few things in Ceph that you ca
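(A hedged note for anyone hitting the same dead end: the orchestrator CLI has an
explicit pause/resume pair, and the config-key used above can simply be flipped back.)

    ceph orch pause      # equivalent to setting mgr/cephadm/pause to true
    ceph orch resume     # clears the pause flag again
    # or undo the key by hand:
    ceph config-key set mgr/cephadm/pause false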
Hi,
you can check how much your MDS is currently using:
ceph daemon mds.<name> cache status
Is it already hitting your limit? I usually start with lower values
if it's difficult to determine how much it will actually use and
increase it if necessary.
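(A hedged sketch of that check-then-raise approach; the mds.<name> id and the
8 GiB figure are placeholders.)

    ceph daemon mds.<name> cache status                      # run on the host running that MDS
    ceph config get mds mds_cache_memory_limit
    ceph config set mds mds_cache_memory_limit 8589934592    # 8 GiB, adjust as needed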
Quoting Arnaud M:
Hello to everyone
I'd say one thing to keep in mind is that the higher you set your cache, and
the more of it that is currently consumed, the LONGER it will take in the event
the standby-replay has to take over…
While standby-replay does help to improve takeover times, it's not
significant if there are a lot of clients with a lot of ope
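(For reference, a hedged sketch of enabling standby-replay and verifying it took
effect; the filesystem name is a placeholder.)

    ceph fs set <fs_name> allow_standby_replay true
    ceph fs status <fs_name>      # standby-replay daemons appear in the output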
On Wed, Jun 29, 2022 at 4:42 PM Stefan Kooman wrote:
> On 6/29/22 11:21, Curt wrote:
> > On Wed, Jun 29, 2022 at 1:06 PM Frank Schilder wrote:
> >
> >> Hi,
> >>
> >> did you wait for PG creation and peering to finish after setting pg_num
> >> and pgp_num? They should be right on the value you se
Hi,
CephFS currently only supports POSIX ACLs.
These can be used when re-exporting the filesystem via Samba for SMB
clients and via nfs-kernel-server for NFSv3 clients.
NFS-Ganesha version 4.0, shipped with Ceph 17, supports POSIX ACLs for the Ceph
FSAL, but only on the backend; the frontend still
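(A rough, hedged illustration of the nfs-kernel-server path mentioned above; the
mount point, monitor address, credentials and client network are all assumptions.)

    # kernel-mount CephFS on the host doing the re-export
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # /etc/exports entry for NFSv3 clients (an fsid is usually needed for network filesystems)
    /mnt/cephfs  192.168.0.0/24(rw,sync,no_subtree_check,fsid=100)
    exportfs -ra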
Hi Guys,
I am in the upgrade process from mimic to nautilus.
The first step was to upgrade one cephmon, but after that this cephmon cannot
rejoin the cluster. I see this in the logs:
2022-06-29 15:54:48.200 7fd3d015f1c0 0 ceph version 14.2.22
(ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (
The log output you pasted suggests that an oom killer is responsible
for the failure, can you confirm that? Are other services located on
that node that use too much RAM?
Quoting Iban Cabrillo:
Hi Guys,
I am in the upgrade process from mimic to nautilus.
The first step was to upgrade on
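(A hedged way to confirm or rule out the OOM-killer suspicion on that node.)

    dmesg -T | grep -i -e 'out of memory' -e 'oom'
    journalctl -k --since "2022-06-29" | grep -i oom
    free -h      # remaining headroom for the mon and mgr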
Hey all,
just want to note that I'm also looking for some kind of way to
restart/reset/refresh the orchestrator.
But in my case it's not the hosts but the services that are presumably
wrongly reported and outdated:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/NHEVEM3ESJYXZ4LPJ24B
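(In case it is useful here too, a hedged sketch of the usual ways to force the
orchestrator to refresh its cached inventory; failing the mgr restarts the cephadm
module as a side effect.)

    ceph orch ps --refresh
    ceph orch ls --refresh
    ceph mgr fail      # fail over to a standby mgr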
[ceph pacific 16.2.9]
When creating an NFS export using "ceph nfs export apply ... -i export.json" for
a subdirectory of /cephfs, does the subdir that you wish to export need to be
pre-created or will ceph (or ganesha) create it for you?
I'm trying to create an "/shared" directory in a cephfs
On 29.06.22 at 18:23, Wyll Ingersoll wrote:
If I manually create the directory prior to applying the export spec, it does
work.
I think that's the way to go.
>
But it seems that ganesha is trying to create it for me so I'm wondering
how to make that work.
The orchestrator creates one cep
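(A hedged sketch of the pre-create-then-apply approach suggested above; the monitor
address, credentials and mount point are placeholders.)

    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    mkdir /mnt/cephfs/shared
    umount /mnt/cephfs
    ceph nfs export apply <cluster_id> -i export.json   # with export.json's "path" pointing at /shared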
Hi Stefan,
Thank you, that definitely helped. I bumped it to 20% for now and that's
giving me around 124 PGs backfilling at 187 MiB/s, 47 Objects/s. I'll see
how that runs and then increase it a bit more if the cluster handles it ok.
Do you think it's worth enabling scrubbing while backfilling?
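(On the scrubbing question, a hedged pointer: by default scrubs are held back while
recovery is running, and there is a knob for that.)

    ceph config get osd osd_scrub_during_recovery
    ceph config set osd osd_scrub_during_recovery true   # allow scrubs alongside recovery/backfill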
Hi Eugen,
There are only ceph-mgr and ceph-mon on this node (they have been working fine for years
with versions <14).
Jun 29 16:08:42 cephmon03 systemd: ceph-mon@cephmon03.service failed.
Jun 29 16:16:36 cephmo
On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman wrote:
> On 6/29/22 19:34, Curt wrote:
> > Hi Stefan,
> >
> > Thank you, that definitely helped. I bumped it to 20% for now and that's
> > giving me around 124 PGs backfilling at 187 MiB/s, 47 Objects/s. I'll
> > see how that runs and then increase i