[ceph-users] Can't clear UPGRADE_REDEPLOY_DAEMON after fix

2021-03-08 Thread Samy Ascha
Hi! I was upgrading from 15.2.8 to 15.2.9 via `ceph orch upgrade` (Ubuntu Bionic). One OSD seemed to have failed to upgrade, so I just redeployed it. The OSD is up/in, but this warning is not clearing: UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.3 on host ceph-osd4 failed. It seems the warning…
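
A minimal sketch of how one might inspect that state, assuming the osd.3/ceph-osd4 names from the report and the Octopus cephadm CLI (commands are illustrative, not taken from the thread):

    # See whether the orchestrator still considers an upgrade to be in progress
    ceph orch upgrade status

    # Full text of the health warning
    ceph health detail

    # Redeploy the daemon that failed during the upgrade
    ceph orch daemon redeploy osd.3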

[ceph-users] Re: Can't clear UPGRADE_REDEPLOY_DAEMON after fix

2021-03-08 Thread Samy Ascha
Ok, I just went on with my day: I performed an action that required an mgr restart/failover, and the warning was gone. I guess it was just stuck as state in the then-running mgr daemon. Have a good day :) Samy (To Tobias: it just said 'nothing in progress'.) > On 8 Mar 2021, at 11:42, Samy…
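
The fix reported here amounts to bouncing the active mgr; a hedged sketch, with <active-mgr> as a placeholder:

    # Fail over to a standby mgr; the stale health state goes away with the old daemon
    ceph mgr fail <active-mgr>

    # Confirm the warning has cleared
    ceph health detail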

[ceph-users] Podman pull error 'access denied'

2021-06-17 Thread Samy Ascha
Hi! I have a problem after starting an upgrade from 15.2.13 to 16.2.4. I started the upgrade and it successfully redeployed 2 out of 3 mgr daemon containers. The third failed to upgrade, and cephadm kept retrying the upgrade forever. The only way I could stop this was to disable the cephadm…
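
A hedged sketch of how such a retry loop is usually interrupted and the pull tested by hand; the image name/tag and registry details are assumptions, not taken from the post:

    # Stop the running upgrade so cephadm stops retrying
    ceph orch upgrade stop

    # Optionally pause the orchestrator, or (the poster's workaround) disable the module
    ceph orch pause
    ceph mgr module disable cephadm

    # Reproduce the 'access denied' error manually on the affected host
    podman pull quay.io/ceph/ceph:v16.2.4

    # If the registry requires credentials, store them for cephadm
    ceph cephadm registry-login <registry-url> <username> <password>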

[ceph-users] Write i/o in CephFS metadata pool

2020-01-29 Thread Samy Ascha
Hi! I've been running CephFS for a while now, and ever since setting it up I've seen unexpectedly large write i/o on the CephFS metadata pool. The filesystem is otherwise stable and I'm seeing no usage issues. From the clients' perspective I'm in a read-intensive environment, and throughput for…
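
A hedged sketch of commands for confirming where that write traffic lands; the pool name cephfs_metadata and the MDS id are placeholders:

    # Per-pool client I/O rates; watch write ops against the metadata pool
    ceph osd pool stats cephfs_metadata

    # Object counts and space usage per pool
    rados df

    # MDS journal counters, which usually account for most metadata-pool writes
    # (run on the MDS host, against its admin socket)
    ceph daemon mds.<id> perf dump mds_log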

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-01-29 Thread Samy Ascha
> Thank you, > > Dominic L. Hilsbos, MBA > Director - Information Technology > Perform Air International Inc. > dhils...@performair.com > www.PerformAir.com > > -Original Message- > From: Samy Ascha [mailto:s...@xel.nl] > Sent: Wednesday, Januar…

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-04 Thread Samy Ascha
> On 2 Feb 2020, at 12:45, Patrick Donnelly wrote: > > On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha wrote: >> >> Hi! >> >> I've been running CephFS for a while now and ever since setting it up, I've seen unexpectedly large write i/o on the…

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-06 Thread Samy Ascha
> On 4 Feb 2020, at 16:14, Samy Ascha wrote: > >> On 2 Feb 2020, at 12:45, Patrick Donnelly wrote: >> >> On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha wrote: >>> >>> Hi! >>> >>> I've been running CephFS for a while…

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-10 Thread Samy Ascha
> On 6 Feb 2020, at 11:23, Stefan Kooman wrote: > >> Hi! >> >> I've confirmed that the write IO to the metadata pool is coming from active MDSes. >> >> I'm experiencing very poor write performance on clients and I would like to see if there's anything I can do to optimise the performance…
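
For the tuning question raised here, the usual first knob is the MDS cache size; a minimal sketch with an illustrative value, not a recommendation from the thread:

    # Current MDS cache memory limit
    ceph config get mds mds_cache_memory_limit

    # Raise it if the MDS is constantly trimming its cache or recalling caps (8 GiB here is illustrative)
    ceph config set mds mds_cache_memory_limit 8589934592

    # Per-MDS sessions, request rates and cache usage
    ceph fs status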

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-11 Thread Samy Ascha
> On 11 Feb 2020, at 14:53, Marc Roos wrote: > > Say I think my cephfs is slow when I rsync to it, slower than it used to be. First of all, I do not get why it reads so much data. I assume the file attributes need to come from the mds server, so the rsync backup should mostly cause…
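
Marc's point is that an rsync scan is metadata-heavy (a stat per file, all served by the MDS); a hedged way to watch that load while the rsync runs, with mds.<id> as a placeholder:

    # Live MDS perf counters (requests, caps, cache); run on the MDS host
    ceph daemonperf mds.<id>

    # Requests currently in flight on the MDS
    ceph daemon mds.<id> dump_ops_in_flight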