Hi Thomas,
I think you found a crash when using the lua "CopyFrom" field.
Opened a tracker: https://tracker.ceph.com/issues/59381
Will fix ASAP and keep you updated.
Yuval
On Wed, Apr 5, 2023 at 6:58 PM Thomas Bennett wrote:
> Hi,
>
> We're currently testing out lua scripting in the Ceph Objec
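For anyone following the tracker, this is roughly the shape of a preRequest script that touches CopyFrom (field names are from memory of the RGW Lua docs; the path and log text are placeholders, not the actual script from this thread):

# write a tiny script and upload it to the preRequest context
cat > /tmp/copyfrom.lua <<'EOF'
-- CopyFrom is only populated for copy-object requests, hence the nil check
if Request.CopyFrom then
  RGWDebugLog("copy source bucket: " .. tostring(Request.CopyFrom.Bucket))
end
EOF

radosgw-admin script put --infile=/tmp/copyfrom.lua --context=preRequest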
Hi All,
Requesting any inputs on the issue raised below.
Best Regards,
Lokendra
On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour wrote:
> Hi Team,
>
> We have a ceph cluster with 3 storage nodes:
>
> 1. storagenode1 - abcd:abcd:abcd::21
> 2. storagenode2 - abcd:abcd:abcd::22
> 3. storageno
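For an IPv6-only cluster like this one, the monitor side of ceph.conf usually ends up looking roughly like the sketch below; the ports, option names and msgr v2/v1 layout are from memory of the docs, and only the two addresses listed above are filled in:

[global]
ms_bind_ipv6 = true
ms_bind_ipv4 = false
# IPv6 addresses go in square brackets inside each address vector
mon_host = [v2:[abcd:abcd:abcd::21]:3300/0,v1:[abcd:abcd:abcd::21]:6789/0] [v2:[abcd:abcd:abcd::22]:3300/0,v1:[abcd:abcd:abcd::22]:6789/0]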
Hi Jorge,
On 4/6/23 07:09, Jorge Garcia wrote:
We have a ceph cluster with a cephfs filesystem that we use mostly for
backups. When I do a "ceph -s" or a "ceph df", it reports lots of space:
  data:
    pools:   3 pools, 4104 pgs
    objects: 1.09 G objects, 944 TiB
    usage:   1.5 Pi
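For questions about how that space is accounted, the usual starting points are the standard views below (nothing specific to this cluster assumed):

ceph df detail           # per-pool STORED vs USED, shows replication/EC overhead
ceph osd df tree         # per-OSD utilisation and balance
ceph osd pool ls detail  # replica count / EC profile per pool, to explain raw vs stored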
Hi,
we are using ceph version 17.2.5 on Ubuntu 22.04.1 LTS.
We deployed multi-mds (max_mds=4, plus standby-replay mds).
Currently we have statically pinned our user home directories (~50k).
The CephFS root directory is pinned to '-1' and ./homes is pinned to '0'.
All user home directories be
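For reference, static pins like the ones described above are normally set through the ceph.dir.pin extended attribute; a sketch with a made-up mount point and user:

# /mnt/cephfs and 'alice' are placeholders
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs              # root: no explicit pin
setfattr -n ceph.dir.pin -v 0  /mnt/cephfs/homes        # ./homes -> rank 0
setfattr -n ceph.dir.pin -v 3  /mnt/cephfs/homes/alice  # one home -> rank 3
getfattr -n ceph.dir.pin /mnt/cephfs/homes/alice        # verify the pin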
Hi Adam, sorry for the very late reply.
I also found out that the "mgr/cephadm/upgrade_state" config key was the issue.
I actually just modified the config key and removed the unknown fields. This
made "ceph orch" commands work again. Great.
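In case someone else runs into the same thing, the edit can be done roughly like this (the file name is just an example, and 'ceph mgr fail' is only there so the active mgr reloads the key):

ceph config-key get mgr/cephadm/upgrade_state > upgrade_state.json
# edit upgrade_state.json and remove the unknown fields
ceph config-key set mgr/cephadm/upgrade_state -i upgrade_state.json
ceph mgr fail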
However, the downgrade process was quickly stuck on
Up :)
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io