This is a common error on my system (Pacific).
It appears that there is internal confusion as to where the crash
support stuff lives - whether it's new-style (administered and under
/var/lib/ceph/fsid) or legacy style (/var/lib/ceph). One way to fake it
out was to manually create a minimal c
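For reference, a rough sketch of how to check which layout a host is
actually using; the workaround above is cut off, so the directory
creation at the end is an assumption on my part, not necessarily what
was done there:

fsid=$(ceph fsid)
# new-style (cephadm-administered) location:
ls -ld /var/lib/ceph/$fsid/crash
# legacy location used by the plain packages:
ls -ld /var/lib/ceph/crash
# if the legacy directory is missing, creating it with the expected
# ownership is one way to keep ceph-crash happy:
install -d -o ceph -g ceph /var/lib/ceph/crash /var/lib/ceph/crash/posted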
In addition to Robert's recommendations, remember to respect the update
order (mgr->mon->(crash->)osd->mds->...).
Before everything was containerized, it was not recommended to have
different services on the same machine.
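On package-based hosts that usually comes down to restarting the systemd
units one service type at a time once the new packages are installed,
roughly like this (sketch only, following the order given above):

systemctl restart ceph-mgr.target
systemctl restart ceph-mon.target
systemctl restart ceph-crash.service
systemctl restart ceph-osd.target
systemctl restart ceph-mds.target
# ... then radosgw and anything else, checking "ceph -s" between steps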
On Thu, 13 Jun 2024 at 19:37, Robert Sander wrote:
> On 13.06.24 18:
Hi,
On 13.06.24 20:29, Ranjan Ghosh wrote:
Other Ceph nodes run on 18.2, which came with the previous Ubuntu version.
I wonder if I could easily switch to Ceph packages or whether that would
cause even more problems.
Perhaps it's more advisable to wait until Ubuntu releases proper packages.
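If one did try the switch, the usual route is to add the upstream
download.ceph.com repository and pin it above the Ubuntu archive. A
rough sketch; the repo line and codename are assumptions, so check what
upstream actually publishes for your Ubuntu release before relying on it:

# /etc/apt/sources.list.d/ceph.list (codename is an assumption):
#   deb https://download.ceph.com/debian-reef/ noble main
cat > /etc/apt/preferences.d/ceph <<'EOF'
Package: *
Pin: origin download.ceph.com
Pin-Priority: 1001
EOF
apt update
apt policy ceph        # check which candidate version now wins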
On 13.06.24 18:18, Ranjan Ghosh wrote:
What's more, APT says I now have a Ceph version
(19.2.0~git20240301.4c76c50-0ubuntu6) which doesn't even have any
official release notes:
Ubuntu 24.04 ships with that version from a git snapshot.
You have to ask Canonical why they did this.
I would not u
Hi,
On 2/3/24 at 18:00, Tyler Stachecki wrote:
On 23.02.24 16:18, Christian Rohmann wrote:
I just noticed issues with ceph-crash using the Debian/Ubuntu
packages (package: ceph-base):
While the /var/lib/ceph/crash/posted folder is created by the package
install,
it's not properly chowne
> On 23.02.24 16:18, Christian Rohmann wrote:
> > I just noticed issues with ceph-crash using the Debian/Ubuntu
> > packages (package: ceph-base):
> >
> > While the /var/lib/ceph/crash/posted folder is created by the package
> > install,
> > it's not properly chowned to ceph:ceph by the postinst s
On 23.02.24 16:18, Christian Rohmann wrote:
I just noticed issues with ceph-crash using the Debian/Ubuntu
packages (package: ceph-base):
While the /var/lib/ceph/crash/posted folder is created by the package
install,
it's not properly chowned to ceph:ceph by the postinst script.
[...]
Yo
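The manual workaround implied by the report above is simply to fix the
ownership the postinst missed and restart the agent (sketch):

chown -R ceph:ceph /var/lib/ceph/crash
systemctl restart ceph-crash.service
# verify it now posts the pending reports:
journalctl -u ceph-crash.service -n 20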
Hi,
there's a profile "crash" for that. In a lab setup with Nautilus
there's one crash client with these caps:
admin:~ # ceph auth get client.crash
[client.crash]
key =
caps mgr = "allow profile crash"
caps mon = "allow profile crash"
On an Octopus cluster deployed
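For comparison, creating such a key on a cluster that doesn't have one
yet looks roughly like this (the keyring path is just the conventional
location, adjust as needed):

ceph auth get-or-create client.crash \
    mon 'allow profile crash' \
    mgr 'allow profile crash' \
  > /etc/ceph/ceph.client.crash.keyring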
Is there a way to purge the crashes?
For example, is it safe and sufficient to delete everything in
/var/lib/ceph/crash on the nodes?
F.
On 30/04/2020 at 17:14, Paul Emmerich wrote:
Best guess: the recovery process doesn't really stop, but it's just
that the mgr is dead and it no longer repo
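Rather than deleting files under /var/lib/ceph/crash by hand, the mgr
crash module can list and prune what it has recorded; a sketch (the
7-day retention is arbitrary, and archive-all needs a reasonably recent
release):

ceph crash ls                # list all recorded crashes
ceph crash archive-all       # mark them all as acknowledged
ceph crash prune 7           # drop records older than 7 days
ceph crash rm <crash-id>     # or remove one specific record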
Best guess: the recovery process doesn't really stop, but it's just that
the mgr is dead and it no longer reports the progress
And yeah, I can confirm that having a huge number of crash reports is a
problem (had a case where a monitoring script crashed due to a
radosgw-admin bug... lots of crash r