I think “ceph-volume lvm activate --all” should do it.
Weiwen Hu
> On Jun 10, 2022, at 14:34, Flemming Frandsen wrote:
>
> Hi, this is somewhat embarrassing, but one of my colleagues fat fingered an
> ansible rule and managed to wipe out /etc/systemd/system on all of our ceph
> hosts.
>
> The cluster is
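For reference, a minimal sketch of the recovery suggested above, assuming systemd-managed OSDs that were originally prepared with ceph-volume (run on each affected host):
systemctl daemon-reload            # pick up the changed unit state
ceph-volume lvm activate --all     # recreates/enables the ceph-osd@N instance units and starts the OSDs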
Hi,
you can either use the 'rbd du' command:
control01:~ # rbd --id cinder du images/01b01349-a11c-489c-8349-4c5be9523c58
NAME                                       PROVISIONED  USED
01b01349-a11c-489c-8349-4c5be9523c58@snap  2 GiB        2 GiB
01b01349-a11c-489c-8349-4c5be9523c58       2 GiB
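For reference, 'rbd du' can also be pointed at a whole pool instead of a single image; a sketch using the pool from the example above:
rbd --id cinder du --pool images    # per-image and total usage for the pool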
Hmm, does that also create the mon, mgr and mds units?
On Fri, 10 Jun 2022 at 09:06, 胡 玮文 wrote:
> I think “ceph-volume lvm activate --all” should do it.
>
> Weiwen Hu
>
> > On Jun 10, 2022, at 14:34, Flemming Frandsen wrote:
> >
> > Hi, this is somewhat embarrassing, but one of my colleagues fat
> finger
No. But these daemons are almost stateless; you should be able to delete
everything and then re-deploy them easily with whatever means you used to
deploy them the first time. Or maybe just take this chance and try
re-deploying them with cephadm?
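If you go the cephadm route, the adoption path is roughly as follows; a sketch based on the standard legacy-adoption procedure (daemon names are examples for a single host):
cephadm adopt --style legacy --name mon.$(hostname -s)
cephadm adopt --style legacy --name mgr.$(hostname -s)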
On Jun 10, 2022, at 16:23, Flemming Frandsen wrote:
Hmm, does that also create th
Hi,
On 10.06.22 10:23, Flemming Frandsen wrote:
Hmm, does that also create the mon, mgr and mds units?
The actual unit files for these services are located in
/lib/systemd/system, not /etc/systemd.
You need to recreate the _instances_ for the units, e.g. by running
systemctl enable ceph-
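A rough sketch of what re-enabling the instances could look like; the instance names here are assumptions (mons and mgrs are typically named after the short hostname, MDS names depend on your setup):
systemctl enable --now ceph-mon@$(hostname -s)
systemctl enable --now ceph-mgr@$(hostname -s)
systemctl enable --now ceph-mds@<mds-name>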
Hello list,
my ceph cluster was upgraded from Nautilus to Octopus last October, causing
snaptrims to overload the OSDs, so I had to disable them
(bluefs_buffered_io=false|true didn't help).
Now I've copied the data elsewhere, removed all clients, and am trying to
fix the cluster.
Scrapping it and starting
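For what it's worth, snap trimming can be paused cluster-wide with an OSD flag, and bluefs_buffered_io can be changed at runtime; a sketch of the relevant commands:
ceph osd set nosnaptrim                      # pause snap trimming on all OSDs
ceph osd unset nosnaptrim                    # resume it later
ceph config set osd bluefs_buffered_io true  # toggle the buffered-io behaviour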
Hi,
could you share more cluster details and what your workload is?
ceph -s
ceph osd df tree
ceph orch ls
ceph osd pool ls detail
How big are your PGs?
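A rough way to answer that last question is to divide a pool's stored bytes by its pg_num, e.g.:
ceph df                    # STORED bytes per pool
ceph osd pool ls detail    # pg_num per pool
# approximate PG size = STORED / pg_num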
Quoting Mara Sophie Grosch:
Hi,
good catch on the much-too-low memory target, I wanted to configure 1
GiB, not 1 MiB. I'm aware it's lo
Did you check the mempools?
ceph daemon osd.X dump_mempools
This will tell you how much memory is consumed by different components of
the OSD.
Finger in the air, your RAM might be consumed by the pg_log.
If osd_pglog from the dump_mempools output is big, then you can lower the
values of the rela
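A sketch of how that inspection and tuning could look; the option names are the usual pg_log-related settings and the values are only examples:
ceph daemon osd.X dump_mempools | grep -A 3 osd_pglog   # items/bytes held by the pg_log
ceph config set osd osd_max_pg_log_entries 3000
ceph config set osd osd_min_pg_log_entries 500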
Dear All,
We are testing Quincy on a new large cluster. "ceph osd pool
autoscale-status" fails if we add a pool that uses a custom crush rule
with a specific device class, but it's fine if we don't specify the class:
[root@wilma-s1 ~]# ceph -v
ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca8
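For context, a sketch of the kind of setup described above (rule and pool names are just examples):
ceph osd crush rule create-replicated rep_ssd default host ssd   # rule restricted to the ssd device class
ceph osd pool create testpool 128 128 replicated rep_ssd         # pool using that rule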
Hi,
is your new pool configured as a cache-tier? The option you're trying
to set is a cache-tier option [1]. Could the old pool have been a
cache pool in the past so it still has this option set?
[1]
https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#configuring-a-cache-tier
Are you thinking it might be a permutation of
https://tracker.ceph.com/issues/53729 ? There are some posts in it on how to
check for the issue; comments #53 and #65 had a few potential ways to check.
On Fri, Jun 10, 2022 at 5:32 AM Marius Leustean
wrote:
> Did you check the mempools?
>
> ceph daemon osd.X dump
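For anyone following along, the checks referenced in that tracker boil down to dumping a PG's log with ceph-objectstore-tool on a stopped OSD and looking at the size of its dups array; a rough sketch (the JSON field name is from memory and may differ between versions):
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N --op log --pgid <pgid> > pg_log.json
jq '.pg_log_t.dups | length' pg_log.json   # assumed field layout; adjust to the actual dump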
Hi Max,
Thank you - the mgr log complains about overlapping roots, so this is
indeed the cause :)
2022-06-10T13:56:37.669+0100 7f641f7e3700 0 [pg_autoscaler ERROR root]
pool 14 has overlapping roots: {-1, -2}
2022-06-10T13:56:37.675+0100 7f641f7e3700 0 [pg_autoscaler WARNING
root] pool 4 cont
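The usual way out of overlapping roots is to give every pool a crush rule pinned to a single device class, so the autoscaler sees exactly one root per pool; a sketch (rule name is an example):
ceph osd crush rule create-replicated rep_hdd default host hdd
ceph osd pool set <pool> crush_rule rep_hdd   # repeat for each pool still on the class-less default rule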
Ok, thanks!
--Pardhiv
On Fri, Jun 10, 2022 at 2:46 AM Eneko Lacunza wrote:
> Hi Pardhiv,
>
> I don't recall anything unusual, just follow upgrade procedures outlined
> in each release.
>
> Cheers
>
> On 9/6/22 at 20:08, Pardhiv Karri wrote:
>
> Awesome, thank you, Eneko!
>
> Would you mi
Hi,
I have 3 Ceph clusters (each deployed individually using ceph-ansible) joined
together in an RGW multisite implementation. My question is about the ordering of
upgrades for the multisite implementation, as I've not found any documentation
that outlines this. Do you need to start with the primary si
We aren't building for CentOS 9 yet, so I guess the Python dependency
declarations don't work with the versions in that release.
I've put updating to 9 on the agenda for the next CLT.
(Do note that we don't test upstream packages against RHEL, so if
CentOS Stream does something which doesn't match
On Wed, Jun 8, 2022 at 12:36 AM Andreas Teuchert
wrote:
>
>
> Hello,
>
> we're currently evaluating cephfs-mirror.
>
> We have two data centers with one Ceph cluster in each DC. For now, the
> Ceph clusters are only used for CephFS. On each cluster we have one FS
> that contains a directory for cu
Thank you very much; along with the "ceph-volume lvm activate --all" we now
have a working solution in the test environment.
On Fri, 10 Jun 2022 at 11:21, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 10.06.22 10:23, Flemming Frandsen wrote:
> > Hmm, does th
Oh!
That actually seems to be the problem: at least when I run the command
given in #53 I get a lot of dups for quite a few PGs - and that's after
most of those are recovered enough to start the OSD again.
I did a `ceph-objectstore-tool --op trim-pg-log` run with the fixed
version and that OSD now
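For reference, a rough sketch of such a trim run on a stopped OSD (data path and PG id are placeholders, and the exact arguments may vary by release; back up the OSD first):
systemctl stop ceph-osd@N
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-N --op trim-pg-log --pgid <pgid>
systemctl start ceph-osd@N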