On Fri, Jul 22, 2022 at 12:47 AM Sridhar Seshasayee
wrote:
> I forgot to mention that the charts show CPU utilization when both client
> ops and recoveries are going on. The steep drop in CPU utilization is when
> client ops are stopped but recoveries are still going on.
>
It looks like the char
This is a hotfix release addressing two security vulnerabilities. We
recommend all users update to this release.
Notable Changes
---
* Users who were running OpenStack Manila to export native CephFS, who
upgraded their Ceph cluster from Nautilus (or earlier) to a later major
versi
“Is there a use case for sending a notification when the upload starts?” – Not
for me. Only having to watch for ObjectCreated:CompleteMultipartUpload and
ObjectCreated:Put works for me. Quincy here I come. Thanks!
--
Mark Selby
Sr Linux Administrator, The Voleon Group
mse...@voleon.com
On Wed, Jul 20, 2022 at 4:03 AM Daniel Williams wrote:
> Do you think maybe you should issue an immediate change/patch/update to
> quincy to change the default to wpq, given the cluster-ending nature of the
> problem?
>
>
Hi Daniel / All,
The issue was root caused and the fix is currently made i
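Until that fix lands, a possible interim workaround (my suggestion, not something stated in this thread; it assumes the regression is in the new default mClock op scheduler) is to switch the OSDs back to wpq:

# switch the OSD op queue back to wpq; the setting only takes
# effect once the OSDs have been restarted
ceph config set osd osd_op_queue wpq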
On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
>
> On 19.07.22 at 17:57, Ilya Dryomov wrote:
> > On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
> >> On 24.06.22 at 16:13, Peter Lieven wrote:
> >>> On 23.06.22 at 12:59, Ilya Dryomov wrote:
> On Thu, Jun 23, 2022 at 11:32 AM Pete
On Thu, Jul 21, 2022 at 4:24 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs,
You can list the objects in the pool and get their parent xattr; from there,
decode that attribute to see the file's location in the tree. Only the objects
with an all-zero suffix after the '.' should have a parent attribute.
This came from the mailing list some time ago:
rados --pool $pool_name getxattr
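A minimal sketch of what that truncated command looks like in full, assuming a CephFS data pool and a node with ceph-dencoder installed (the pool and object names here are hypothetical placeholders):

# grab the backtrace ("parent") xattr from the first object of a file
# (the one whose name ends in the all-zero suffix .00000000)
rados --pool cephfs_data getxattr 10000000001.00000000 parent > /tmp/parent.bin
# decode it to see where the file sits in the directory tree
ceph-dencoder type inode_backtrace_t import /tmp/parent.bin decode dump_json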
Hi,
just a heads-up for others using Ubuntu with both Ethernet bonding and
image cloning when provisioning Ceph servers: MAC address selection for
bond interfaces was changed to depend only on /etc/machine-id. Having
several machines share the same /etc/machine-id then wreaks havoc.
I encountere
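One way to fix already-cloned machines, sketched here as an assumption on my part (run as root; it relies on systemd deriving the bond MAC from the machine ID):

rm -f /etc/machine-id
systemd-machine-id-setup      # writes a fresh random machine ID
reboot                        # the bond MAC is re-derived on the next boot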
Good deal.
I ended up going unmanaged on my mons… Had issues from time to time
where orch would decide I didn't need them where I pointed it to, and it
also wouldn't deploy them; the --placement, labels, numbers, etc.
wouldn't work either…
But it's been fine since going unmanaged, of course!
Glad your bac
Great, thank you.
Best,
Redo.
On Thu, Jul 21, 2022 at 2:01 PM Robert Reihs wrote:
> Bug Reported:
> https://tracker.ceph.com/issues/56660
> Best
> Robert Reihs
>
> On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>> Great, thanks for sharing your solu
Bug Reported:
https://tracker.ceph.com/issues/56660
Best
Robert Reihs
On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
rkach...@redhat.com> wrote:
> Great, thanks for sharing your solution.
>
> It would be great if you can open a tracker describing the issue so it
> could be fixed lat
Hi
I tried
ceph orch daemon rm mon.ml2rsn01 --force
Error EINVAL: Unable to find daemon(s) ['mon.ml2rsn01']
with no success.
But this reminded me that it may be possible to apply a completely new set
of configs via
ceph orch apply mon --placement="..."
and that worked out. I hope this creates
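For reference, a sketch of what that placement string can look like (the hostnames and the label are hypothetical, not taken from this thread):

# list the desired mon hosts explicitly...
ceph orch apply mon --placement="host1 host2 host3"
# ...or select them by a host label instead
ceph orch apply mon --placement="label:mon"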
Did you try ceph orch daemon rm already?
On Thu, Jul 21, 2022 at 3:58 AM Dominik Baack <
dominik.ba...@cs.uni-dortmund.de> wrote:
> Hi,
>
> after removing a node from our cluster we are currently cleaning up:
>
> OSDs are removed and cluster is (mostly) healthy again
>
> mds were changed
>
>
> But we
Hi,
after removing a node from our cluster we are currently cleaning up:
OSDs are removed and cluster is (mostly) healthy again
mds were changed
But we still have one trailing error:
CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): mon
with
ceph orch ls
mon
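A quick sketch of how the stored mon spec that fails to apply can be inspected on a cephadm-managed cluster (these commands are my suggestion, not from the original post):

ceph orch ls mon --export      # dump the stored mon service spec, including its placement
ceph health detail             # shows cephadm's error detail behind CEPHADM_APPLY_SPEC_FAIL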
On 19.07.22 at 17:57, Ilya Dryomov wrote:
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
On 24.06.22 at 16:13, Peter Lieven wrote:
On 23.06.22 at 12:59, Ilya Dryomov wrote:
On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote:
On 22.06.22 at 15:46, Josh Baergen wrote:
Hey Peter,
Hi Mark,
Starting from Quincy, we send one notification,
"ObjectCreated:CompleteMultipartUpload", when the upload is complete. See:
https://docs.ceph.com/en/quincy/radosgw/s3-notification-compatibility/
We don't send the "ObjectCreated:Post" notification when the upload starts,
as it would be confusing
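For anyone wiring this up on the client side, a minimal sketch of a bucket notification configuration that covers both regular PUTs and completed multipart uploads (the endpoint, bucket, and topic ARN are hypothetical, and the topic must already exist on the RGW side):

aws --endpoint-url http://rgw.example.com:8000 s3api put-bucket-notification-configuration \
  --bucket mybucket \
  --notification-configuration '{
    "TopicConfigurations": [{
      "Id": "uploads",
      "TopicArn": "arn:aws:sns:default::mytopic",
      "Events": ["s3:ObjectCreated:Put", "s3:ObjectCreated:CompleteMultipartUpload"]
    }]
  }'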