I will try to check this case.
From a first check I can see that the client has multiple (12) multipart
uploads for the same key, all started at exactly the same time.
But the parts were uploaded under only one “upload id”; the others are empty, so
the number of Parts is equal to the number of parts of the
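For anyone wanting to run a similar check against their own cluster, the
in-progress uploads can usually be inspected straight through the S3 API,
e.g. with the AWS CLI (bucket and key below are placeholders, not the
customer's actual names):

  aws s3api list-multipart-uploads --bucket <bucket> --prefix <key>
  aws s3api list-parts --bucket <bucket> --key <key> --upload-id <upload-id>

Once the completed upload is identified, the empty ones can be cleaned up
with abort-multipart-upload.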
I just jacked in a completely new, clean server and I've been trying to
get a Ceph (Pacific) monitor running on it.
The "ceph orch daemon add" appears to install all/most of what's
necessary, but when the monitor starts, it shuts down immediately, and
in the manner of Ceph containers immediately e
Back when I was battling Octopus, I had problems getting ganesha's NFS
to work reliably. I resolved this by doing a direct (ceph) mount on my
desktop machine instead of an NFS mount.
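A direct mount in that sense is just the kernel CephFS client, roughly along
these lines (monitor address, user and secret file are placeholders for your
own values):

  mount -t ceph 192.168.1.10:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret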
I've since been plagued by ceph "laggy OSD" complaints that appear to
be due to a non-responsive client and I'm sus
So the orchestrator is working and you have a working ceph cluster?
Can you share the output of:
ceph orch ls mon
If the orchestrator expects only one mon and you deploy another
manually via "daemon add", it can be removed. Try using a mon.yaml file
instead which contains the designated mon hosts.
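A minimal mon spec of that kind would look roughly like this (host names are
placeholders), saved as mon.yaml and applied with "ceph orch apply -i mon.yaml":

  service_type: mon
  placement:
    hosts:
      - host1
      - host2
      - host3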
My €0.02 for what it's worth(less).
I've been doing RBD-based VMs under libvirt with no problem. In that
particular case, the ceph RBD base images are being overlaid cloud-
style with an instance-specific qcow2 image and the RBD is just part
of my storage pools.
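The overlay itself is nothing exotic; if I remember the syntax right, creating
a qcow2 whose backing image lives on RBD is something like this (pool and
image names are placeholders, and the exact options may vary with your qemu
version):

  qemu-img create -f qcow2 -b rbd:libvirt-pool/base-image -F raw instance01.qcow2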
For a physical machine, I'd prob
On Tue, Feb 6, 2024 at 12:09 PM Tim Holloway wrote:
>
> Back when I was battling Octopus, I had problems getting ganesha's NFS
> to work reliably. I resolved this by doing a direct (ceph) mount on my
> desktop machine instead of an NFS mount.
>
> I've since been plagued by ceph "laggy OSD" complai
ceph orch ls
NAME          PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager  ?:9093,9094  1/1      3m ago     8M   count:1
crash
Yeah, you have the „count:1“ in there, that’s why your manually added
daemons are rejected. Try my suggestion with a mon.yaml.
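If you'd rather not write a spec file, I believe the placement can also be
widened on the command line, e.g.

  ceph orch apply mon --placement="host1 host2 host3"

but the mon.yaml keeps it documented alongside the rest.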
Quoting Tim Holloway:
ceph orch ls
NAME          PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager  ?:9093,9094
Ah, yes. Much better.
There are definitely some warts in there, as the monitor count was 1
but there were 2 monitors listed running.
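Comparing what the orchestrator thinks it manages against what's actually in
quorum should show where the stray one came from, e.g.:

  ceph orch ps --daemon-type mon
  ceph mon stat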
I've mostly avoided docs that reference ceph config files and yaml
configs because the online docs are (as I've whined before) not always
trustworthy and often cont
I confirmed selinux is disabled on all existing and new hosts. Likewise,
Python 3 (at least the 3.6 minimum) is installed on all of them (3.9.16 on RL8, 3.9.18 on RL9).
I am running 16.2.12 on all containers, so it may be worth updating to
16.2.14 to ensure I'm on the latest Pacific release.
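The per-daemon versions can be double-checked with "ceph versions", and since
these are cephadm-managed containers the update itself should be something
like:

  ceph orch upgrade start --ceph-version 16.2.14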
Gary
On 2024-02-05 08:
Unfortunately, it looks like the exact error text has already rolled
off my logs. Earlier today something jammed up and I restarted ceph,
which returned everything to a healthy state.
But I'll take that as an indicator that ceph isn't a good match for
sleeper systems. Fortunately, ganesha NFS is n
Just FYI, I've seen this on CentOS systems as well, and I'm not even
sure that it was just for Ceph. Maybe some stuff like Ansible.
I THINK you can safely ignore that message or alternatively that it's
such an easy fix that senility has already driven it from my mind.
Tim
On Tue, 2024-02-06