Yes, this is all set up. It was working fine until after the problem
with the OSD host that lost the cluster/sync network occurred.
There are a few other VMs that keep running along fine without this
issue. I've restarted the problematic VM without success (that is,
creating a file works, but o
(sorry for duplicate emails)
This turns out to be a good question actually.
The cluster is running Quincy, 17.2.6.
The compute node that is running the VM is Proxmox, version 7.4-3.
Supposedly this is fairly new, but the version of librbd1 claims to be
14.2.21 when I check with "apt list". We
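A quick way to double-check which librbd1 the Proxmox node actually has
installed (a generic sketch, not commands quoted from this thread):

$ apt list --installed 2>/dev/null | grep librbd1   # version apt reports as installed
$ dpkg -s librbd1 | grep Version                    # what dpkg has on disk
$ apt policy librbd1                                # installed vs. candidate, and which repo provides it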
I have uploaded the user + dev presentations to a more permanent location
on ceph.io: https://ceph.io/en/community/meetups/user-dev-archive/
On Mon, Sep 25, 2023 at 3:49 PM FastInfo Class
wrote:
> Thanks
> ___
> ceph-users mailing list -- ceph-users@ce
Many thanks for the clarification!
/Z
On Fri, 29 Sept 2023 at 16:43, Tyler Stachecki
wrote:
>
>
> On Fri, Sep 29, 2023, 9:40 AM Zakhar Kirpichenko wrote:
>
>> Thanks for the suggestion, Tyler! Do you think switching the progress
>> module off will have no material impact on the operation of th
Hello,
We removed an SSD cache tier and its pool.
The PGs for the pool still exist.
The cluster is healthy.
The PGs are empty and they reside on the cache tier pool's SSDs.
We'd like to take the disks out, but that is not possible: the cluster
still sees the PGs and answers with a HEALTH_WARN.
Bec
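One way to see exactly which PGs the warning refers to (a hedged sketch;
the pool id below is a placeholder, not taken from this thread):

$ ceph health detail                    # names the PGs/pools behind the HEALTH_WARN
$ ceph osd pool ls detail               # confirm the removed cache pool is really gone
$ ceph pg ls | grep '^<old_pool_id>\.'  # <old_pool_id> is a placeholder for the removed pool's id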
On Fri, Sep 29, 2023, 9:40 AM Zakhar Kirpichenko wrote:
> Thanks for the suggestion, Tyler! Do you think switching the progress
> module off will have no material impact on the operation of the cluster?
>
It does not. It literally just tracks the completion rate of certain
actions so that it can
Thanks for the suggestion, Tyler! Do you think switching the progress
module off will have no material impact on the operation of the cluster?
/Z
On Fri, 29 Sept 2023 at 14:13, Tyler Stachecki
wrote:
> On Fri, Sep 29, 2023, 5:55 AM Zakhar Kirpichenko wrote:
>
>> Thank you, Eugen.
>>
>> Indeed
Dear all,
I have a problem that after an OSD host lost connection to the
sync/cluster rear network for many hours (the public network was
online), a test VM using RBD can't overwrite its files. I can create a
new file inside it just fine, but not overwrite it; the process just hangs.
The VM's
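Some generic first checks for hung RBD writes after a network outage
(a sketch only; the pool/image names are placeholders, not from this thread):

$ ceph health detail            # any slow ops, laggy PGs, or OSDs still recovering?
$ ceph osd blocked-by           # OSDs still blocking peering after the outage
$ rbd status <pool>/<image>     # placeholder names; shows the watchers on the VM's image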
On Fri, Sep 29, 2023, 5:55 AM Zakhar Kirpichenko wrote:
> Thank you, Eugen.
>
> Indeed it looks like the progress module had some stale events from the
> time when we added new OSDs and set a specific number of PGs for pools,
> while the autoscaler tried to scale them down. Somehow the scale-down
Thank you, Eugen.
Indeed it looks like the progress module had some stale events from the
time when we added new OSDs and set a specific number of PGs for pools,
while the autoscaler tried to scale them down. Somehow the scale-down
events got stuck in the progress log, although these tasks have fi
Hi all,
I have a Ceph cluster on Quincy (17.2.6), with 3 pools (1 rbd and 1
CephFS volume), each configured with 3 replicas.
$ sudo ceph osd pool ls detail
pool 7 'cephfs_data_home' replicated size 3 min_size 2 crush_rule 1
object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on
last_c
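To compare the configured pg_num with what the autoscaler would like to
apply (a general sketch, not output from this cluster):

$ ceph osd pool autoscale-status                 # target vs. actual PG counts per pool
$ ceph osd pool get cephfs_data_home pg_num      # current pg_num for the pool shown above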
Hi,
this is from the mgr progress module [1]. I haven't played too much
with it yet, you can check out the output of 'ceph progress json',
maybe there are old events from a (failed) upgrade etc. You can reset
it with 'ceph progress clear', you could also turn it off ('ceph
progress off')
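Putting those commands together, a minimal sequence for inspecting and
clearing stale progress events might look like this (sketch only):

$ ceph progress json    # dump the events the module is tracking, including stale ones
$ ceph progress clear   # drop the current events
$ ceph progress off     # disable the module; 'ceph progress on' re-enables it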