Dear all;
We have a 3-node cluster with two OSDs on separate nodes, each with its
WAL on NVMe. It's been running fine for quite some time, albeit under
very light load. This week, we moved from package-based Octopus to
container-based ditto (15.2.13, all on Debian stable). Within a few
hours
D with more verbose (debug) output and share that?
Does your cluster really have only two OSDs? Are you running it with
size 2 pools?
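For reference, both questions can be answered directly from the cluster; a minimal sketch, where <pool> stands for an actual pool name:

  ceph osd tree                     # every OSD and the host it lives on
  ceph osd pool ls detail           # per-pool settings, including "replicated size N"
  ceph osd pool get <pool> size     # replica count of a single pool
  ceph osd pool get <pool> min_size

With only two OSDs, a size 3 pool can never be fully replicated, and size 2 with min_size 1 is generally discouraged because a single failure can lose data.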
Best wishes; Johan
On 2021-07-27 23:48, Eugen Block wrote:
Alright, it's great that you could fix it!
In my one-node test cluster (Pacific) I see this smartctl version:
[ceph: root@pacific /]# rpm -q smartmontools
smartmontools-7.1-1.el8.x86_64
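For what it's worth, on a containerised deployment the same check can also be run from the host without entering an interactive shell; a minimal sketch, assuming the cephadm CLI is available there:

  cephadm shell -- rpm -q smartmontools    # package version inside the ceph container image
  cephadm shell -- smartctl --version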
Quoting Johan Hattne:
Thanks a lot, Eugen!
On 2023-12-22 03:28, Robert Sander wrote:
Hi,
On 22.12.23 11:41, Albert Shih wrote:
for n in 1-100
  take the OSDs on server n offline
  uninstall docker on server n
  install podman on server n
  redeploy on server n
end
Yep, that's basically the procedure.
But first try it on a test cluster.
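In cephadm terms, that per-host loop could look roughly like the following. This is a sketch, not a tested procedure: node1 and osd.12 are placeholders, the docker package name depends on how docker was installed, and whether an explicit redeploy is needed after switching container engines may vary by release.

  ceph orch host maintenance enter node1   # stops the host's cephadm daemons and flags its OSDs noout
  apt-get purge docker.io                  # or docker-ce, depending on how docker was installed
  apt-get install podman
  ceph orch host maintenance exit node1
  ceph orch ps node1                       # list the daemons on that host
  ceph orch daemon redeploy osd.12         # redeploy each daemon if its unit still references docker
  ceph -s                                  # wait for HEALTH_OK before moving on to the next host

Doing one host at a time keeps the cluster available throughout.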
Dear all;
Up until a few hours ago, I had a seemingly normally-behaving cluster
(Quincy, 17.2.5) with 36 OSDs, evenly distributed across 3 of its 6
nodes. The cluster is only used for CephFS and the only non-standard
configuration I can think of is that I had 2 active MDSs, but only 1
standby.
ost).
// J
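For context, that MDS layout is typically set and checked along these lines; a sketch, with "cephfs" as a placeholder file system name:

  ceph fs status                     # lists active ranks and standby MDS daemons
  ceph fs get cephfs | grep max_mds  # current number of active ranks for the file system
  ceph fs set cephfs max_mds 2       # two active ranks; remaining MDS daemons serve as standbys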
On 2023-03-31 15:37, c...@elchaka.de wrote:
Need to know some more about your cluster...
ceph -s
ceph osd df tree
Replica or EC?
...
Perhaps this can give us some insight
Mehmet
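For completeness, the replica-versus-EC question can be answered alongside the other two; a minimal sketch:

  ceph -s                  # overall health, monitors, OSD counts, PG states
  ceph osd df tree         # per-OSD utilisation laid out along the CRUSH hierarchy
  ceph osd pool ls detail  # each pool shows either "replicated size N" or its erasure-code profile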
On 31 March 2023 at 18:08:38 MESZ, Johan Hattne wrote:
Dear all;
Up until a few hours ago, I had
those rack
buckets were sitting next to the default root as opposed to under it.
Now that's fixed, and the cluster is backfilling remapped PGs.
// J
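For anyone searching for the same symptom later, the misplaced buckets and the fix look roughly like this; "rack1" is a placeholder bucket name:

  ceph osd crush tree                      # the rack buckets show up beside, not under, root default
  ceph osd crush move rack1 root=default   # reparent one rack bucket under the default root
  ceph -s                                  # remapped PGs backfill once the hierarchy is correct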
On 2023-03-31 16:01, Johan Hattne wrote:
Here goes:
# ceph -s
  cluster:
    id:     e1327a10-8b8c-11ed-88b9-3cecef0e3946
    health: HEALTH_OK
"-1 0 root default" is a bit strange.
On 1 April 2023 at 01:01:39 MESZ, Johan Hattne wrote:
Here goes:
# ceph -s
  cluster:
    id:     e1327a10-8b8c-11ed-88b9-3cecef0e3946
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum bcgonen-a,bcgonen-b,bcgo