My test with a single-host cluster (virtual machine) finished after
around 20 hours. I removed all purged_snap keys from the mon and it
actually started again (wasn't sure if I could have expected that). Is
that a valid approach in order to reduce the mon store size? Or can it
be dangerous?
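For the archive, a minimal sketch of the less invasive options I would try before deleting keys by hand: compacting the mon store, either online or offline with ceph-kvstore-tool. The mon ID "foo" and the store path below are just examples, and the offline steps assume the mon in question is stopped first:

# Online: ask the monitor to compact its store (no keys are deleted)
ceph tell mon.foo compact

# Offline: stop the mon, dump the key list to see what dominates the
# store, then compact and restart (IDs and paths are placeholders)
systemctl stop ceph-mon@foo
ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-foo/store.db list > /tmp/mon-keys.txt
ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-foo/store.db compact
systemctl start ceph-mon@foo

Whether removing purged_snap keys directly is safe I can't say; compaction at least doesn't touch the data itself.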
Hi all,
we had a network outage tonight (power loss) and restored network in the
morning. All OSDs were running during this period. After restoring network
peering hell broke loose and the cluster has a hard time coming back up again.
OSDs get marked down all the time and come back later. Peeri
On 7/12/23 09:53, Frank Schilder wrote:
Hi all,
we had a network outage tonight (power loss) and restored network in the
morning. All OSDs were running during this period. After restoring network
peering hell broke loose and the cluster has a hard time coming back up again.
OSDs get marked down all the time and come back later.
Hi all,
one problem solved, another coming up. For everyone ending up in the same
situation, the trick seems to be to get all OSDs marked up and then allow
recovery. Steps to take:
- set noout, nodown, norebalance, norecover
- wait patiently until all OSDs are shown as up
- unset norebalance, n
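Spelled out as commands, the sequence above would look roughly like this (the unset order below is my guess, not taken from the message):

# Freeze things so flapping OSDs stop getting marked out/down
ceph osd set noout
ceph osd set nodown
ceph osd set norebalance
ceph osd set norecover

# Wait until all OSDs report up
ceph osd stat

# Then release recovery and rebalance again, step by step
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nodown
ceph osd unset noout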
Hi all,
I'm facing a strange problem where, from time to time, no S3 objects are
accessible.
I've found similar issues [1], [2], but our clusters have already been upgraded
to the latest Pacific version.
I have noted in the bug report https://tracker.ceph.com/issues/61716
RGW logs [3]
Maybe so
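Not part of the original report, but a few radosgw-admin calls that can help narrow down whether RGW itself still resolves the data while S3 access is failing (the bucket and object names below are placeholders):

# Does RGW still know the bucket and its index?
radosgw-admin bucket stats --bucket=mybucket

# Does the object's metadata still resolve?
radosgw-admin object stat --bucket=mybucket --object=myobject

# Is garbage collection holding anything unexpected?
radosgw-admin gc list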
Answering myself for posterity. The rebalancing list disappeared after
waiting even longer. Might just have been an MGR that needed to catch up.
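For anyone hitting the same symptom, a quick way to test the "mgr needs to catch up" theory; just a sketch, and on older releases 'ceph mgr fail' needs the active mgr's name as an argument:

# Check which mgr is active and whether PG stats look stale
ceph -s
ceph pg stat

# Fail over the active mgr so a standby takes over and rebuilds its view
ceph mgr fail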
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
S
The docs aren't necessarily structured that way, i.e. there isn't a 17.2.6 docs
site as such. We try to document changes in behavior in sync with code, but
don't currently have a process to ensure that a given docs build corresponds
exactly to a given dot release. In fact we sometimes go back
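If someone really needs docs that match a specific dot release, one workaround (not an official process) is to build them locally from the corresponding git tag, for example:

# Build the Sphinx docs from the exact release tag (tag name is an example;
# admin/build-doc sets up its own toolchain and can take a while)
git clone https://github.com/ceph/ceph.git
cd ceph
git checkout v17.2.6
./admin/build-doc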
For the sake of the archive and future readers: I think we now have an
explanation for this issue.
Our cloud is one of the few remaining OpenStack deployments that predate
the use of UUIDs for OpenStack tenant names; instead, our project ids are
typically the same as project names. Radosgw checks
On Wed, Jul 12, 2023 at 1:26 AM Frank Schilder wrote:
Hi all,
>
> one problem solved, another coming up. For everyone ending up in the same
> situation, the trick seems to be to get all OSDs marked up and then allow
> recovery. Steps to take:
>
> - set noout, nodown, norebalance, norecover
> - wait patiently until all OSDs are shown as up
On 6/30/23 18:36, Yuri Weinstein wrote:
This RC has gone thru partial testing due to issues we are
experiencing in the sepia lab.
Please try it out and report any issues you encounter. Happy testing!
If I install cephadm from package, 18.1.2 on ubuntu focal in my case,
cephadm uses the ceph-
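One way to see which container image a package-installed cephadm would use, and to pin it explicitly instead of relying on the default (the image tag below is the 18.1.2 RC; --image is a regular cephadm option):

# What does the packaged cephadm report?
cephadm version

# Pin the image explicitly for any cephadm operation, e.g. pre-pulling it
cephadm --image quay.io/ceph/ceph:v18.1.2 pull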
Can you elaborate on how you installed cephadm?
When I pull from quay.io/ceph/ceph:v18.1.2, I see the version v18.1.2
podman run -it quay.io/ceph/ceph:v18.1.2
Trying to pull quay.io/ceph/ceph:v18.1.2...
Getting image source signatures
Copying blob f3a0532868dc done
Copying blob 9ba8dbcf96c4 done
C
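The same check without an interactive container, assuming the image ships the ceph CLI at /usr/bin/ceph (the upstream ceph/ceph images do):

# Print the Ceph version baked into the image
podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v18.1.2 --version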
On 7/12/23 23:21, Yuri Weinstein wrote:
Can you elaborate on how you installed cephadm?
Add ceph repo (mirror):
cat /etc/apt/sources.list.d/ceph.list
deb http://ceph.download.bit.nl/debian-18.1.2 focal main
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
apt update
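From there, the usual package-based steps would be something along these lines (my reconstruction, not a quote from the original message):

apt install -y cephadm
cephadm version   # check what the packaged cephadm reports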
Thanks for the report - this is being fixed in
https://github.com/ceph/ceph/pull/52343
On Wed, Jul 12, 2023 at 2:53 PM Stefan Kooman wrote:
> On 7/12/23 23:21, Yuri Weinstein wrote:
> > Can you elaborate on how you installed cephadm?
>
> Add ceph repo (mirror):
> cat /etc/apt/sources.list.d/ceph.list
Hi Anthony,
> The docs aren't necessarily structured that way, i.e. there isn't a 17.2.6
> docs site as such. We try to document changes in behavior in sync with code,
> but don't currently have a process to ensure that a given docs build
> corresponds exactly to a given dot release. In fact we sometimes go back
Hi all and Igor,
We have a case: https://tracker.ceph.com/issues/61973. I'm not sure whether it's
related to this PR (https://github.com/ceph/ceph/pull/38902), but it looks
very similar.