[ceph-users] Re: Help needed please ! Filesystem became read-only !

2024-07-14 Thread Olli Rajala
Hi, I believe our KL studio has hit this same bug after deleting a pool that was used only for testing. So, is there any procedure to get rid of those bad journal events and get the mds back to a read-write state? Thanks, --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi
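
For reference, the generic journal-recovery sequence from the CephFS disaster-recovery documentation looks roughly like the following; the filesystem name "cephfs" and rank 0 are assumptions, a journal backup should always be taken first, and whether a reset is actually warranted for this particular bug is exactly the question for the list:

  $ cephfs-journal-tool --rank=cephfs:0 journal export /root/mds0-journal.bin   # backup before touching anything
  $ cephfs-journal-tool --rank=cephfs:0 journal inspect                         # look for damaged events
  $ cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary          # salvage recoverable metadata
  $ cephfs-journal-tool --rank=cephfs:0 journal reset                           # last resort: discards journal contents
  $ ceph mds repaired cephfs:0                                                  # clear a rank marked damaged
  $ ceph tell mds.cephfs:0 scrub start / recursive,repair                       # verify metadata afterwards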

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2024-07-04 Thread Olli Rajala
io is almost zero. And those points where the write io drops were times when I dropped the mds caches. --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Wed, Jul 3, 2024 at 7:49 PM Venky Shankar wrote: > Hi Olli, > > On Tue, Ju
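
For context, the cache drop mentioned here can be issued through the tell interface; "mds.cephfs:0" is a placeholder for the active daemon and the trailing timeout in seconds is optional:

  $ ceph tell mds.cephfs:0 cache drop 300   # ask the MDS to trim its cache and recall client caps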

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2024-07-02 Thread Olli Rajala
t; and "mlocate" packages. The default config (on Ubuntu atleast) of updatedb for "mlocate" does skip scanning cephfs filesystems but not so for "locate" which happily ventures onto all of your cephfs mounts :| --- Olli Rajala - Lead TD Anima V

[ceph-users] Re: cephfs-data-scan orphan objects while mds active?

2024-05-22 Thread Olli Rajala
0 0 70158 86 TiB 234691728 117 TiB 0 B 0 B Is there some way to force these to get trimmed? tnx, ------- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Fri, May 17, 2024 at 6:48 AM Gregory Farnum wrote: > It

[ceph-users] Re: cephfs-data-scan orphan objects while mds active?

2024-05-14 Thread Olli Rajala
r all the objects in the pool and delete all objects without the tag and older than one year. Is there any tooling to do such an operation? Any risks or flawed logic there? ...or any other ways to discover and get rid of these objects? Cheers! --- Olli Rajala - Lead TD Anima
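
A very rough, read-only sketch of such an iteration; it assumes the data pool is named "cephfs_data" and that a prior tagged forward scrub leaves a "scrub_tag" xattr on reachable objects (both are assumptions to verify), and it only prints candidates rather than deleting anything:

  #!/bin/sh
  # Dry run: list data-pool objects that carry no scrub tag xattr.
  POOL=cephfs_data
  rados -p "$POOL" ls | while read -r obj; do
      if ! rados -p "$POOL" getxattr "$obj" scrub_tag >/dev/null 2>&1; then
          # Candidates only; check age with "rados -p $POOL stat <obj>" and
          # delete by hand (rados rm) once you are sure they are orphans.
          echo "untagged: $obj"
      fi
  done

On a pool with hundreds of millions of objects a per-object getxattr loop like this is very slow, so it is more a thought experiment than a practical tool.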

[ceph-users] cephfs-data-scan orphan objects while mds active?

2024-05-13 Thread Olli Rajala
safe to run cephfs-data-scan scan_extents and scan_inodes while the fs is online? Does it help if I give a custom tag while forward scrubbing and then use --filter-tag on the backward scans? ...or is there some other way to check and cleanup orphans? tnx, --- Olli Rajala -
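
For reference, the tag/filter combination being asked about would look roughly like this, assuming a filesystem named "cephfs", a data pool "cephfs_data" and an arbitrary tag string (argument order for the scrub tag is worth double-checking for your release); whether running the backward scans against a live filesystem is safe is exactly the open question, since the documentation describes cephfs-data-scan in the context of offline recovery:

  $ ceph tell mds.cephfs:0 scrub start / recursive orphan-check
  $ ceph tell mds.cephfs:0 scrub status
  $ cephfs-data-scan scan_extents --filter-tag orphan-check cephfs_data
  $ cephfs-data-scan scan_inodes  --filter-tag orphan-check cephfs_data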

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-12-14 Thread Olli Rajala
Tnx, --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Sun, Dec 11, 2022 at 9:07 PM Olli Rajala wrote: > Hi, > > I'm still totally lost with this issue. And now lately I've had a couple > of incidents where the write bw has suddenly ju

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-12-11 Thread Olli Rajala
appreciated. Is there any tool or procedure to safely check or rebuild the mds data? ...if this behaviour could be caused by some hidden issue with the data itself. Tnx, ------- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Fri, Nov 11, 2022 a
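
For the "check the mds data" part, the closest standard online tool is the forward scrub; a minimal sketch, assuming the filesystem is named "cephfs":

  $ ceph tell mds.cephfs:0 scrub start / recursive          # consistency check only
  $ ceph tell mds.cephfs:0 scrub start / recursive,repair   # check and repair what it can
  $ ceph tell mds.cephfs:0 scrub status
  $ ceph tell mds.cephfs:0 damage ls                        # anything the MDS has flagged as damaged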

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-10 Thread Olli Rajala
Tnx, --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Thu, Nov 10, 2022 at 8:18 AM Venky Shankar wrote: > Hi Olli, > > On Mon, Oct 17, 2022 at 1:08 PM Olli Rajala wrote: > > > > Hi Patrick, > > > >

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-08 Thread Olli Rajala
-file: 30f9b38b-a62c-44bb-9e00-53edf483a415 Tnx! --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Mon, Nov 7, 2022 at 2:30 PM Milind Changire wrote: > maybe, > >- use the top program to look at a threaded listing of the ceph-md
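
The suggestion quoted above translates to roughly the following; "mds.a" stands in for the actual daemon name, and comparing two perf dumps taken some time apart is usually more telling than a single snapshot:

  $ top -H -p $(pidof ceph-mds)                     # per-thread CPU usage of the MDS
  $ ceph daemon mds.a perf dump > /tmp/perf1.json
  $ sleep 60; ceph daemon mds.a perf dump > /tmp/perf2.json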

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-07 Thread Olli Rajala
ng up the cache would show any bw increase by running "tree" at the root of one of the mounts and it didn't affect anything at the time. So basically the cache has been fully saturated all this time now. Boggled, --- Olli Rajala - Lead TD Anima Vit

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-11-05 Thread Olli Rajala
started already when I did Octopus->Pacific upgrade... Cheers, ------- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi --- On Mon, Oct 24, 2022 at 9:36 PM Olli Rajala wrote: > I tried my luck and upgraded to 17.2.4 but unfortunately t

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-10-24 Thread Olli Rajala
on or mechanism could cause such high idle write io? I've tried to fiddle a bit with some of the mds cache trim and memory settings but I haven't noticed any effect there. Any pointers appreciated. Cheers, ------- Olli Rajala - Lead TD Anima Vitae Lt
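
The cache-related knobs referred to here are along these lines; the option names exist in current releases, but the value shown is purely illustrative:

  $ ceph config get mds mds_cache_memory_limit
  $ ceph config get mds mds_cache_trim_threshold
  $ ceph config get mds mds_cache_trim_decay_rate
  $ ceph config set mds mds_cache_memory_limit 8589934592   # e.g. 8 GiB; illustrative value only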

[ceph-users] Re: CephFS constant high write I/O to the metadata pool

2022-10-17 Thread Olli Rajala
don't have a clue what to focus on and how to interpret that. Here's a perf dump if you or anyone could make something out of that: https://gist.github.com/olliRJL/43c10173aafd82be22c080a9cd28e673 Tnx! o. --- Olli Rajala - Lead TD Anima Vitae Ltd. www.anima.fi

[ceph-users] CephFS constant high write I/O to the metadata pool

2022-10-13 Thread Olli Rajala
Hi, I'm seeing constant 25-50MB/s writes to the metadata pool even when all clients and the cluster are idling and in a clean state. This surely can't be normal? There are no apparent issues with the performance of the cluster, but this write rate seems excessive and I don't know where to look for the
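
A few starting points for confirming the rate and correlating it with MDS activity, assuming the metadata pool is named "cephfs_metadata":

  $ ceph osd pool stats cephfs_metadata   # per-pool client I/O rates
  $ ceph fs status                        # MDS state and per-pool usage at a glance
  $ ceph df                               # pool-level capacity and object counts over time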