Dear Ceph users,
Our CephFS is not releasing/freeing up space after deleting hundreds of
terabytes of data.
By now, this has driven us into a "nearfull" OSD/pool situation, which
throttles IO.
We are on ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5)
quincy (stable).
Recently, we m
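A rough diagnostic sketch for this kind of situation; the MDS daemon name
and the mount point below are placeholders, not taken from the thread:
ceph df                         # overall and per-pool usage
ceph osd df tree                # per-OSD fullness behind the nearfull warnings
ceph fs status                  # MDS state plus data/metadata pool usage
# Deleted-but-unreclaimed files show up as strays / purge-queue backlog:
ceph tell mds.<daemon> perf dump | grep -E 'num_strays|pq_'
# (or `ceph daemon mds.<daemon> perf dump` locally on the MDS host)
# Snapshots also pin deleted data; list them at the (placeholder) mount point:
ls /mnt/cephfs/.snap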
On 23/10/2023 at 03:04, 544463...@qq.com wrote:
I think you can try to roll back this part of the Python code and wait for your
good news :)
Not so easy 😕
[root@e9865d9a7f41 ceph]# git revert
4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
Auto-merging src/ceph-volume/ceph_volume/tests/util/te
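For what it's worth, a conflicted revert can usually still be finished by
hand; a minimal sketch, where the conflicting files are whatever `git status`
reports:
git revert 4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
# on conflict: edit the files reported by `git status`, keep the pre-commit code, then
git add -u
git revert --continue           # or `git revert --abort` to back out entirely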
Hi,
I recently moved from a manual Ceph deployment using Saltstack to a
hybrid of Saltstack and cephadm / ceph orch. We are provisioning our
Ceph hosts using a stateless PXE RAM root, so I definitely need
Saltstack to bootstrap at least the Ceph APT repository and the MON/MGR
deployment. Afte
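A rough sketch of the kind of bootstrap step Saltstack would drive here,
assuming the standalone cephadm script is already on the host; the release
name and MON IP are placeholders:
cephadm add-repo --release quincy       # configure the Ceph APT repository
cephadm install                         # install the packaged cephadm from that repo
cephadm bootstrap --mon-ip 192.0.2.10   # deploy the first MON/MGR; the rest via `ceph orch`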
Regarding the crash in quincy-p2p (tracked in
https://tracker.ceph.com/issues/63257), @Prashant Dhange
and I evaluated it, and we've concluded it isn't a
blocker for 17.2.7.
So, quincy-p2p is approved.
Thanks,
Laura
On Sat, Oct 21, 2023 at 12:27 AM Venky Shankar wrote:
> Hi Yuri,
>
> On Fri
If no one has anything else left, all issues are resolved and we are
ready for the 17.2.7 release.
On Mon, Oct 23, 2023 at 8:12 AM Laura Flores wrote:
>
> Regarding the crash in quincy-p2p (tracked in
> https://tracker.ceph.com/issues/63257), @Prashant Dhange
> and I evaluated it, and we've concluded it isn't a blocker for 17.2.7.
Hey all,
My Ceph cluster is managed mostly by cephadm / ceph orch to avoid
circular dependencies in our infrastructure deployment. Our
RadosGW endpoints, however, are managed by Kubernetes, since it provides
proper load balancing and service health checks.
This leaves me in the unsat
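For context, the load-balancing side of such a setup might look roughly like
the sketch below; the names, selector and ports are assumptions, and the
radosgw Deployment with its ceph.conf/keyring mounts is omitted:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rgw
spec:
  type: LoadBalancer      # external load balancing across the RGW pods
  selector:
    app: rgw              # assumed label on the radosgw Deployment (not shown)
  ports:
    - port: 80
      targetPort: 7480    # default radosgw (beast) port
EOF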
It seems that the only way is to modify the code manually ...
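If it does come down to a manual change, one way is to reverse just that
commit's diff for the affected file; the path below is a placeholder, and the
SHA is the one quoted earlier in the thread:
git show 4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc -- <path/to/file.py> | git apply -R
# or restore the file to its pre-commit state outright:
git checkout 4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc^ -- <path/to/file.py>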
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io