Can you check what "ceph versions" reports?
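For reference, one way to get that from a cephadm-managed host (assuming
cephadm is installed on the node; adjust sudo/paths to your setup):

  # Show which Ceph release each running daemon reports, grouped by type
  # (mon, mgr, osd, mds, ...).
  sudo cephadm shell -- ceph versions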
On Fri, Apr 29, 2022 at 9:15 AM Dominique Ramaekers wrote:
>
> Hi,
>
> I never got a reply on my question. I can't seem to find how I upgrade the
> cephadm shell docker container.
>
> Any ideas?
>
> Greetings,
>
> Dominique.
>
>
> > -Original message-
Hi,
I never got a reply on my question. I can't seem to find how I upgrade the
cephadm shell docker container.
Any ideas?
Greetings,
Dominique.
> -Original message-
> From: Dominique Ramaekers
> Sent: Wednesday, 27 April 2022 11:24
> To: ceph-users@ceph.io
> Subject: [cep
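For what it's worth, a sketch of the two knobs involved here (the image tag
and version below are placeholders, not necessarily the right release):

  # cephadm starts the shell from the cluster's current container image by
  # default; a specific image can be requested explicitly.
  sudo cephadm shell --image quay.io/ceph/ceph:v16.2.7

  # Upgrading the cluster itself, which also moves the default shell image
  # forward, goes through the orchestrator.
  sudo cephadm shell -- ceph orch upgrade start --ceph-version 16.2.7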
On Fri, Apr 22, 2022 at 03:39:04PM +0100, Luís Henriques wrote:
> On Thu, Apr 21, 2022 at 08:53:48PM, Ryan Taylor wrote:
> >
> > Hi Luís,
> >
> > I did just that:
> >
> > [fedora@cephtest ~]$ sudo ./debug.sh
> ...
> > [94831.006412] ceph: release inode 3bb3ccb2 dir file
> > 000
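(For anyone trying to reproduce this: the messages above look like the CephFS
kernel client's dynamic-debug output. Assuming debug.sh simply flips that on,
which is a guess at the mechanism rather than the actual script, it would be
something along these lines, run as root:)

  # Enable verbose ceph.ko debug messages via dynamic debug
  # (needs CONFIG_DYNAMIC_DEBUG and debugfs mounted), then watch dmesg.
  echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control
  dmesg --follow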
Hi,
I just tried again on a Quincy 17.2.0.
Same procedure, same problem.
I just wonder if nobody else sees that problem?
Ciao, Uli
> On 18.03.2022, at 12:18, Ulrich Klein wrote:
>
> I tried it on a mini-cluster (4 Raspberries) with 16.2.7.
> Same procedure, same effect. I just can’t get rid
On 29.04.22 at 10:57, Александр Пивушков wrote:
> Hello, is there any theoretical possibility to use ceph on two servers? Ceph
> would need to keep working when either of the two servers fails. Each server
> only has 2 SSDs for ceph.
With only two servers I would look at DRBD, not Ceph. Ceph's monitors need a
majority to form a quorum, so a two-node cluster cannot stay available when
either node fails unless a third monitor runs somewhere else.
Regards
Hello,
I run a Ceph Nautilus 14.2.22 cluster with 144 OSDs. To see whether a disk
has hardware trouble and might fail soon, I activated device health
management. The cluster is running on Ubuntu 18.04 and the first
task was to install a newer smartctl version. I used smartctl 7.0.
Dev
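(For context, the device health machinery described above is driven by
commands along these lines in Nautilus; <devid> is a placeholder for an ID
from "ceph device ls":)

  # Turn on collection of SMART health metrics from the OSD hosts.
  ceph device monitoring on
  # List known devices and fetch the stored metrics for one of them.
  ceph device ls
  ceph device get-health-metrics <devid>
  # Optionally enable the built-in local failure-prediction mode.
  ceph config set global device_failure_prediction_mode local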