On Sun, Jan 16, 2022 at 3:54 PM Patrick Donnelly wrote:
>
> Hi Dan,
>
> On Fri, Jan 14, 2022 at 6:32 AM Dan van der Ster wrote:
> > We had this issue long ago, related to a user generating lots of hard links.
> > Snapshots will have a similar effect.
> > (in these cases, if a user deletes the original f
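For reference, one way to watch how many stray entries an MDS is carrying is
to look at its mds_cache perf counters. A minimal sketch, assuming the daemon
is named mds.ceph-08 and that the counter names (num_strays and friends) match
your release:

  # show stray-related counters for this MDS daemon
  ceph daemon mds.ceph-08 perf dump mds_cache | grep -i stray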
Thank you, Manuel.
Please refer to the tracker for updates.
On Fri, Jan 14, 2022 at 7:54 PM Manuel Holtgrewe wrote:
>
> Dear Venky,
>
> I cleaned the old auth entries after things did not work out and found my
> workaround. I then started fresh and things worked.
>
> Afterwards it turned out that
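For reference, cleaning out old cephx auth entries is usually just a matter of
listing and removing them. A minimal sketch, with client.manila as a
placeholder entity name:

  # list all cephx entities and their caps
  ceph auth ls
  # remove a stale entry (entity name is a placeholder)
  ceph auth rm client.manila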
On 13.01.22 at 09:19, Szabo, Istvan (Agoda) wrote:
But in your case the election to the other mgr succeeds, am I correct? So the
dashboard always stays up for you? I'm not sure why it doesn't for me; maybe I
really do need to disable it :/
Has disabling the prometheus module prevented further crashes?
Best,
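For anyone trying the same workaround, toggling the module is straightforward.
A minimal sketch:

  # see which mgr modules are currently enabled
  ceph mgr module ls
  # disable the prometheus exporter as a test
  ceph mgr module disable prometheus
  # re-enable it once the mgr is stable again
  ceph mgr module enable prometheus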
On Sun, Jan 16, 2022 at 8:28 PM Frank Schilder wrote:
>
> I seem to have a problem. I cannot dump the mds tree:
>
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mdsdir/stray0'
> root inode is not in cache
> [root@ceph-08 ~]# ceph daemon mds.ceph-08 dump tree '~mds0/stray0'
> root inode is
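One thing that may be worth trying (a sketch, not verified on this cluster):
address the MDS by rank through ceph tell and pass an explicit depth, since the
stray directories live under ~mdsdir of rank 0. This assumes a release where
dump tree is exposed via ceph tell; with multiple filesystems the target would
be <fsname>:0 rather than mds.0:

  # dump the first stray directory of rank 0, one level deep
  ceph tell mds.0 dump tree '~mdsdir/stray0' 1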
On Fri, Jan 14, 2022 at 4:54 PM Frank Schilder wrote:
>
> Hi Venky,
>
> Thanks for your reply. I think the first type of message was a race
> condition. A user was running rm and find on the same folder at the same
> time. The second type of message (duplicate inode in stray) might point to an
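If it really is a metadata inconsistency rather than a race, a forward scrub
from rank 0 is one way to check. A minimal sketch (the path and flags are only
examples, and a repair flag should be used with care):

  # start a recursive scrub from the filesystem root on rank 0
  ceph tell mds.0 scrub start / recursive
  # check on its progress
  ceph tell mds.0 scrub status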
Hello,
All my pools on the cluster are replicated (x3).
I purged some OSDs (after I stopped them) and removed the disks from the
servers, and now I have 4 PGs in stale+undersized+degraded+peered.
Reduced data availability: 4 pgs inactive, 4 pgs stale
pg 1.561 is stuck stale for 39m, current st
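In case it helps anyone hitting the same state, the usual commands to see which
OSDs the stale PGs are still waiting for are roughly these (the pg id 1.561 is
taken from the output above):

  # list the stuck PGs and the OSDs they were last mapped to
  ceph health detail
  ceph pg dump_stuck stale
  # query a single PG (this may hang if no OSD currently serves it)
  ceph pg 1.561 query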
Hi,
Was your cluster healthy before purging the OSDs?
How much time did you wait between stopping the OSDs and purging them?
Étienne
> On 17 Jan 2022, at 15:24, Rafael Diaz Maurin
> wrote:
>
> Hello,
>
> All my pools on the cluster are replicated (x3).
>
> I purged some OSD (after I stopped them) and re
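For reference, before stopping and purging an OSD the cluster can be asked
whether doing so is safe. A minimal sketch, with osd id 12 as a placeholder:

  # will PGs stay available if this OSD stops now?
  ceph osd ok-to-stop 12
  # have all PGs been fully recovered off this OSD?
  ceph osd safe-to-destroy 12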
You removed OSDs from 3 different hosts?
I'm surprised it was healthy, as purging stopped OSDs only 'cleans' the
crushmap.
Is there any recovery in progress?
-
Etienne Menguy
etienne.men...@croit.io
> On 17 Jan 2022, at 15:41, Rafael Diaz Maurin
> wrote:
>
> Hello,
>
> On 17/01/2022
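To answer the recovery question, the quickest checks are roughly:

  # overall cluster state, including recovery/backfill progress
  ceph -s
  # confirm how the remaining OSDs are spread across hosts and CRUSH buckets
  ceph osd tree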
Hey all-
I tried sending this to the list on the 15th, but it seems it was eaten
somewhere without a bounce, as it never made it to even the archive.
So I'm trying again with an alternate email address and plain text.
Original message follows:
#
Hello, all!
I'll be the f
Hi E Taka,
There's already a report of that issue in 16.2.5
(https://tracker.ceph.com/issues/51611), stating that it didn't happen in
16.2.3 (so it's a regression), but we haven't been able to reproduce it so far.
I just tried creating a regular fresh cephfs filesystem (1 MDS), a
directory inside it (via ce
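For anyone who wants to try to reproduce it, creating a throwaway filesystem
looks roughly like this (a sketch; the name testfs is a placeholder and the fs
volume interface is assumed to be available):

  # create a new filesystem with auto-created data and metadata pools
  ceph fs volume create testfs
  # confirm it exists and has an active MDS
  ceph fs ls
  ceph fs status testfs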
Hi all,
For the OpenStack Manila CI we are using Ubuntu 20.04 LTS with the community
PPAs [0].
For users on RHEL-based distros, we allow them to use the packages from
download.ceph.com [1], but those are not actively tested by our CI.
IIRC there are no builds for Ubuntu, CentOS or Fedora in