Hi,
I have the same problem with octopus 15.2.8
check if you have snapshots of the storage pool, and check if you have snapshots
of the object:
rados -p default.rgw.buckets.data lssnap
rados -p default.rgw.buckets.data listsnaps object_name
In my case only the object had snaps. I found a way to delete it, but
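If lssnap had shown pool-level snapshots, those can normally be removed with rmsnap; a minimal sketch, with <snap-name> being whatever lssnap reported (to my understanding this does not cover self-managed object snaps):
rados -p default.rgw.buckets.data rmsnap <snap-name>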
On Fri, Jan 29, 2021 at 9:18 AM Schoonjans, Tom (RFI,RAL,-) <
tom.schoonj...@rfi.ac.uk> wrote:
> Hi Yuval,
>
>
> What do I need to do if I want to switch to using a different exchange on
> the RabbitMQ endpoint? Or change the amqp-ack-level option that was used?
> Would you expect the same problem
Hi, I have tried to enable RGW management in the dashboard.
The dashboard works fine, and I tried to add a new system user:
radosgw-admin user create --uid=some-user --display-name="User for
dashboard" --system
and set the accesskey and secret:
ceph dashboard set-rgw-api-access-key access-key
ce
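For reference, the usual pairing looks roughly like this (a sketch; <access_key>/<secret_key> are placeholders taken from "radosgw-admin user info --uid=some-user", and on newer releases the keys may have to be passed via -i <file> instead of inline):
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>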
Thanks for your suggestion. I will have a look!
But I am a bit surprised that the "official" balancer seems so inefficient!
F.
On 28/01/2021 at 12:00, Jonas Jelten wrote:
Hi!
We also suffer heavily from this so I wrote a custom balancer which yields much
better results:
https://github.co
Hi Francois,
What is the output of `ceph balancer status` ?
Also, can you increase the debug_mgr to 4/5 then share the log file of
the active mgr?
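A minimal sketch of one way to do that, assuming the centralized config store of Nautilus and later (injectargs on the active mgr works as well):
ceph config set mgr debug_mgr 4/5
# reproduce, then collect /var/log/ceph/ceph-mgr.<name>.log from the active mgr
ceph config rm mgr debug_mgr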
Best,
Dan
On Fri, Jan 29, 2021 at 10:54 AM Francois Legrand wrote:
>
> Thanks for your suggestion. I will have a look !
>
> But I am a bit surprise
Hi,
Unfortunately this doesn’t seem to be the same problem we’re experiencing. We
have no snapshots on the pool or on the specific object:
rados lssnap -p default.rgw.buckets.data
0 snaps
rados -p default.rgw.buckets.data listsnaps
5a5c812a---.4811659.83__shadow_anon_backup__xx
On 1/28/21 5:10 AM, Konstantin Shalygin wrote:
Interesting, thanks.
Do you know the tracker ticket for this?
No, not even sure if there is a tracker for this.
Gr. Stefan
Hi,
On 26.01.21 at 14:58, Jens Hyllegaard (Soft Design A/S) wrote:
>
> I am not sure why this is not working, but I am now unable to use the ceph
> command on any of my hosts.
>
> When I try to launch ceph, I get the following response:
> [errno 13] RADOS permission denied (error connecting to
Hi.
I think you are right. I suspect that somehow the
/etc/ceph/ceph.client.admin.keyring file disappeared on all the hosts.
I ended up reinstalling the cluster.
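In hindsight, a full reinstall is probably not required in this situation: on a cephadm-managed cluster the admin config and keyring can usually be regenerated from a host that still runs a MON. A rough sketch, assuming cephadm and the default paths:
cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf
cephadm shell -- ceph auth get client.admin > /etc/ceph/ceph.client.admin.keyring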
Thank you for your input.
Regards
Jens
-Original Message-
From: Robert Sander
Sent: 29 January 2021 13:29
To: ceph-users
Dear cephers,
I was doing some maintenance yesterday involving shutdown/power-up cycles of
ceph servers. With the last server I ran into a problem. The server runs an MDS
and a couple of OSDs. After reboot, the MDS joined the MDS cluster without
problems, but the OSDs didn't come up. This was 1
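The usual first-pass checks in this situation look roughly like the following (a sketch, assuming systemd-managed, ceph-volume/LVM OSDs; <id> is a placeholder for an affected OSD):
ceph osd tree down                          # which OSDs the cluster considers down
systemctl status ceph-osd@<id>              # did the unit start at all?
journalctl -u ceph-osd@<id> -b --no-pager | tail -n 100
ceph-volume lvm activate --all              # re-activates OSDs whose tmpfs data dirs were not set up at boot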
This is an odd one. I don't hit it all the time, so I don't think it's expected
behavior.
Sometimes I have no issues enabling rbd-mirror snapshot mode on an RBD while it
is in use by a KVM VM. Other times I hit the following error, and the only way I
can get around it is to power down the KVM VM.
root@
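For reference, snapshot-based mirroring is enabled and checked per image roughly like this (placeholder pool/image names):
rbd mirror image enable <pool>/<image> snapshot
rbd mirror image status <pool>/<image>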
On Fri, Jan 29, 2021 at 9:34 AM Adam Boyhan wrote:
>
> This is an odd one. I don't hit it all the time, so I don't think it's expected
> behavior.
>
> Sometimes I have no issues enabling rbd-mirror snapshot mode on an RBD while
> it is in use by a KVM VM. Other times I hit the following error, and the only
That makes sense. Appreciate it.
From: "Jason Dillaman"
To: "adamb"
Cc: "ceph-users"
Sent: Friday, January 29, 2021 9:39:28 AM
Subject: Re: [ceph-users] Unable to enable RBD-Mirror Snapshot on image when VM
is using RBD
On Fri, Jan 29, 2021 at 9:34 AM Adam Boyhan wrote:
>
> This is
Hi -
We keep on getting errors like these on specific OSDs with Nautilus (14.2.16):
2021-01-29 06:14:19.174 7fbeaab92c00 -1 osd.8 12568359 unable to obtain
rotating service keys; retrying
2021-01-29 06:14:49.173 7fbeaab92c00 0 monclient: wait_auth_rotating timed out
after 30
2021-01-29 06:14:49
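In my experience this message often comes down to cephx tickets and clock skew between the OSD host and the MONs, so time sync is worth ruling out first (a sketch; use whichever time daemon the host runs):
ceph time-sync-status        # skew as seen by the monitors
chronyc tracking             # or: ntpq -p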
Spot on: once I upgraded the client to 15.2.8 I was able to enable rbd-mirror
snapshots and create them while the VM was running.
However, I have noticed that I am also able to break replication when the RBD
is being used by a KVM VM.
While writing data in the VM, I took an rbd snapshot, then
I have been hammering on my setup non-stop with only layering, exclusive-lock
and deep-flatten enabled. Still can't reproduce the issue. I think this is the
ticket.
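For anyone trying to reproduce with the same reduced feature set, this is roughly how such an image is created and verified (a sketch; pool/image names and size are placeholders):
rbd create <pool>/<image> --size 10G --image-feature layering,exclusive-lock,deep-flatten
rbd info <pool>/<image> | grep features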
From: "adamb"
To: "dillaman"
Cc: "ceph-users" , "Matt Wilder"
Sent: Thursday, January 28, 2021 3:37:15 PM
Subject: [ceph-u
We've been watching our MONs go unresponsive with a saturated 10GbE NIC. The
problem seems to be aggravated by peering. We were shrinking the PG count on
one of our large pools and it was happening a bunch. Once that finished it
seemed to calm down. Yesterday I had an OSD go down and as it w
We are currently running 3 MONs. When one goes into silly town the others get
wedged and won't respond well. I don't think more MONs would solve that... but
I'm not sure.
--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Admin
Hi Paul,
thanks for sharing. I have the MONs on 2x10G bonded active-active. They don't
manage to saturate 10G, but the CPU core is overloaded.
How many MONs do you have? I believe I have never seen more than 2 to be in
this state for an extended period of time. My plan is to go from 3 to 5, whi
Hi Dan,
Here is the output of ceph balancer status:
ceph balancer status
{
    "last_optimize_duration": "0:00:00.074965",
    "plans": [],
    "mode": "upmap",
    "active": true,
    "optimize_result": "Unable to find further optimization, or
pool(s) pg_num is de
Thanks, and thanks for the log file OTR which simply showed:
2021-01-29 23:17:32.567 7f6155cae700 4 mgr[balancer] prepared 0/10 changes
This indeed means that the balancer believes those pools are all balanced
according to the config (which you have set to the defaults).
Could you please also s
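The knob that usually matters here is the upmap deviation threshold; lowering it from the default lets the balancer keep optimizing past this point (a sketch, assuming the module option name used by recent releases):
ceph config set mgr mgr/balancer/upmap_max_deviation 1
# the next automatic balancer run should then produce new plans; check with "ceph balancer status"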
Hi Szabo,
For what it's worth, I have two clusters in a multisite setup that have never
appeared to be synced either, but I have never found a single object that
can't be found in both clusters.
There are always at least a few recovering shards, while the "data sync
source" is always "syncing" with
Hi,
I’ve never seen healthy output in our multisite sync status; almost all the
sync shards are recovering.
What can I do with recovering shards?
We have 1 realm, 1 zonegroup and inside the zonegroup we have 3 zones in 3
different geo location.
We are using octopus 15.2.7 for bucket sync with
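The usual starting points for digging into recovering shards are (a sketch; the zone name is a placeholder):
radosgw-admin sync status
radosgw-admin sync error list
radosgw-admin data sync status --source-zone=<other-zone>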
Two things I forgot to mention which might be interesting: we have only 2
buckets at the moment, one presharded to 9000 shards, the other presharded to
24000 shards (different users).
> On 2021. Jan 30., at 10:02, Szabo, Istvan (Agoda)
> wrote:
>
> Hi,
>
> I’ve never seen in our multisite sync