Hi,
I believe this question has already been answered in [1]. The failing
OSDs had an old monmap and were able to start after modifying their
config.
[1]
https://stackoverflow.com/questions/75366436/ceph-osd-authenticate-timed-out-after-node-restart
Quoting tsmg...@gmail.com:
Releas
I've seen that happen when an RBD image or a snapshot is being removed
and you cancel the operation, especially if they are big or the storage is
relatively slow. The RBD image will stay "half removed" in the pool.
Check "rbd ls -p POOL" vs "rbd ls -l -p POOL" outputs: the first may
have one or more
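For illustration (pool and image names below are placeholders, not from the
original report), a quick way to spot such leftovers could be:
  rbd ls -p mypool              # short listing; a half-removed image may still show up here
  rbd ls -l -p mypool           # long listing; the stuck image is typically missing or errors out
  rbd status mypool/myimage     # check for remaining watchers before retrying
  rbd rm mypool/myimage         # re-run the removal once nothing holds the image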
On 2/9/23 16:55, Frank Schilder wrote:
We moved a switch from one rack to another, and after the switch came back up
the monitors frequently bitch about who is the alpha. How do I get them to
focus on their daily duties again?
Just checking here, do you use xfs as monitor database file
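As a hedged aside, a few read-only commands help to see how the elections are
going (the mon name "a" below is just an example):
  ceph mon stat                             # current quorum members and leader
  ceph quorum_status --format json-pretty   # election epoch and quorum details
  ceph daemon mon.a mon_status              # per-monitor view, run on that mon's host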
Hi,
Yet another question about OSD memory usage ...
I have a test cluster running. When I do a ceph orch ps I see for my osd.11:
ceph orch ps --refresh
NAME    HOST    PORTS    STATUS    REFRESHED    AGE    MEM USE    MEM LIM    VERSION    IMAGE ID    CONTAINER ID
osd.11
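For what it's worth, the memory target the OSD tries to stay under can also be
checked directly (osd.11 as in the example above):
  ceph config get osd.11 osd_memory_target   # the per-OSD memory target
  ceph tell osd.11 dump_mempools             # breakdown of what the daemon has actually allocated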
On 2023-02-10 09:13, Victor Rodriguez wrote:
I've seen that happen when an RBD image or a snapshot is being removed
and you cancel the operation, especially if they are big or the storage is
relatively slow. The RBD image will stay "half removed" in the pool.
Check "rbd ls -p POOL" vs "rbd ls -l -p
Good morning everyone, I've been running a small Ceph cluster with Proxmox for a
while now and I've finally run across an issue I can't find any information on.
I have a 3-node cluster with 9 Samsung PM983 960GB NVMe drives running on a
dedicated 10Gb network. RBD and CephFS performance have been gre
Hi Shawn,
To get another S3 upload baseline, I'd recommend doing some upload testing
with s5cmd [1].
1. https://github.com/peak/s5cmd
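For illustration (not from the original mail; endpoint, port, credentials and
bucket name are placeholders), a single large upload with s5cmd could look like:
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  s5cmd --endpoint-url http://rgw-host:7480 cp ./16g-testfile s3://testbucket/16g-testfile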
Matt
On Fri, Feb 10, 2023 at 9:38 AM Shawn Weeks wrote:
> Good morning everyone, I've been running a small Ceph cluster with Proxmox for
> a while now and I’ve fin
> The problem I'm seeing is that after setting up RadosGW I can only upload to "S3"
> at around 25 MB/s with the official AWS CLI. Using s3cmd is slightly better at
> around 45 MB/s. I'm going directly to the RadosGW instance with no load
> balancers in between and no SSL enabled. Just trying to figure
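One thing worth ruling out with the stock AWS CLI (purely illustrative values,
and not necessarily the options discussed later in the thread) is its default
multipart/concurrency settings:
  aws configure set default.s3.max_concurrent_requests 20
  aws configure set default.s3.multipart_threshold 64MB
  aws configure set default.s3.multipart_chunksize 64MB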
With s5cmd and its defaults I got around 127 MB/s for a single 16 GB test file.
Is there any way to make s5cmd give feedback while it's running? At first I
didn't think it was working because it just sat there for a while.
Thanks
Shawn
On Feb 10, 2023, at 8:45 AM, Matt Benjamin wrote:
Hi Shawn,
With these options I still see around 38-40 MB/s for my 16 GB test file. So far my
testing is mostly synthetic; I'm going to be using some programs like GitLab
and Sonatype Nexus that store their data in object storage. At work I deal with
real S3 and regularly see upload speeds in the 100s of MB/s s
For reference, with parallel writes using the S3 Go API (via hsbench:
https://github.com/markhpc/hsbench), I was recently doing about 600ish
MB/s to a single RGW instance from one client. RadosGW used around 3ish
HW threads from a 2016 era Xeon to do that. Didn't try single-file
tests in that
I deployed kolla-ansible and cephadm on virtual machines (KVM).
My Ceph cluster is on 3 VMs with 12 vCPUs and 24 GB of RAM each; I used cephadm
to deploy Ceph.
ceph -s:
--
  cluster:
    id:     a0e5ad36-a54c-11ed-9aea-5254008c2a3e
    health: HEALTH_OK
  services:
    mon:
Hi,
I found this bug (won’t fix):
https://tracker.ceph.com/issues/51039
Which OpenStack version is this? With cephadm your Ceph version is at
least Octopus, but it might be an older OpenStack version, so the
backend can't parse the newer mon-mgr target and expects only mon.
Quoting Haith
sure!
ceph osd pool ls detail
https://privatebin.net/?85105578dd50f65f#4oNunvNfLoNbnqJwuXoWXrB1idt4zMGnBXdQ8Lkwor8p
I guess this needs some cleaning up regarding snapshots - could this be a
problem?
ceph osd crush rule dump
https://privatebin.net/?bd589bc9d7800dd3#3PFS3659qXqbxfaXSUcKot3ynmwRG2m
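Regarding the snapshot question above, a hedged way to inspect and clean up
leftover RBD snapshots (pool/image names are placeholders):
  rbd snap ls mypool/myimage      # list snapshots on one image
  rbd snap purge mypool/myimage   # remove all snapshots of that image
  ceph osd pool ls detail         # pool-level snapshot metadata, as already pasted above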
Hello everyone, and sorry. Maybe someone has already faced this problem.
A day ago we restored our OpenShift cluster; however, at the moment the PVCs
cannot connect to the pod. We looked at the status of Ceph and found that
our MDS daemons were in standby mode, then found that the metadata was corr
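A few read-only checks that usually help to see how bad the damage is (these
make no changes):
  ceph fs status        # filesystem state, ranks, and which MDS daemons are active/standby
  ceph mds stat         # compact MDS map summary
  ceph health detail    # full text of the current health warnings/errors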
Okay, so your applied CRUSH rule has failure domain "room", of which you
have three, but the third has no OSDs available. Check your "ceph osd
tree" output; that's why Ceph fails to create a third replica. To
resolve this you can either change the rule to a different failure
domain (for example "ho
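A minimal sketch of that first option, switching the rule to a host-level
failure domain and pointing the pool at it (rule and pool names below are
placeholders):
  ceph osd crush rule create-replicated replicated_host default host
  ceph osd pool set mypool crush_rule replicated_host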
Hi,
Seems the el9 Quincy packages are available [1]
You can try
k
[1] https://download.ceph.com/rpm-quincy/el9/x86_64/
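For reference, a repo file pointing at [1] could look roughly like the sketch
below (standard Ceph repo layout; adjust to your environment):
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-quincy/el9/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF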
> On 10 Feb 2023, at 13:23, duluxoz wrote:
>
> Sorry if this was mentioned previously (I obviously missed it if it was) but
> can we upgrade a Ceph Quincy Host/Cluster from Rock
That's great - thanks.
Any idea if there are any upgrade instructions? Any "gotchas", etc.?
I mean, having the new RPMs is great for a fresh install, but we were
wanting to upgrade an existing cluster :-)
Cheers
Dulux-Oz
On 11/02/2023 15:02, Konstantin Shalygin wrote:
Hi,
Seems packages el
As I said in the initial post, the servers are currently Rocky v8.6.
Obviously there are the migrate2rocky.sh and migrate2rocky9.sh scripts,
but I was wondering if there is anything "special" that we need to do
when running them with Quincy, i.e. any "gotchas"? :-)
On 11/02/2023 16:43, Konstantin
Sorry, let me qualify things / try to make them simpler:
When upgrading from a Rocky Linux 8.6 server running Ceph Quincy to a
Rocky Linux 9.1 server running Ceph Quincy (i.e. an in-place upgrade of a
host node in an existing cluster):
- What is the update procedure?
- Can we use the "standard(?
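Not an authoritative answer, but a rough per-node sketch of the in-place path
asked about above, assuming one node is upgraded at a time and the el9 repo
mentioned earlier in the thread is used:
  ceph osd set noout              # avoid rebalancing while the node is down
  # on the node: stop the Ceph daemons, run migrate2rocky9.sh, reboot
  # point the Ceph repo at rpm-quincy/el9 and reinstall the Ceph packages
  ceph osd unset noout            # once the node's OSDs are back up and in
  ceph -s                         # wait for HEALTH_OK before the next node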