Janne,
Thank you very much, now it is OK.
On 04.02.2022 17:46, Janne Johansson wrote:
On Fri 4 Feb 2022 at 15:31, Сергей Цаболов wrote:
Hi everyone,
One question: below is my ceph osd tree; as you can see, some OSDs have a
REWEIGHT lower than the default 1.0
 2   hdd   7.27739          osd.2
Hi everyone,
One question: below is my ceph osd tree; as you can see, some OSDs have a
REWEIGHT lower than the default 1.0.
Can you advise me how to change the REWEIGHT on these OSDs?
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 106.43005 root default
-13 14.55478
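For reference, the REWEIGHT column is adjusted with the standard ceph CLI; a minimal sketch, assuming osd.2 is one of the down-weighted OSDs and you want it back at the default 1.0:

  # show the current REWEIGHT values
  ceph osd tree
  # set the reweight of osd.2 back to the default 1.0 (valid range 0.0-1.0)
  ceph osd reweight 2 1.0
  # the CRUSH weight (the WEIGHT column) is a separate setting, changed with:
  # ceph osd crush reweight osd.2 7.27739

Note that raising a reweight that was lowered (for example by reweight-by-utilization) will move data back onto that OSD.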
Hello to all,
I read the documentation
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
On the placement-groups page, in this part:
*TARGET RATIO*, if present, is the ratio of storage that the
administrator has specified that they expect this pool to consume
relative
Can you tell me whether my steps/commands are correct?
Or do I need to change some steps?
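For context, the TARGET RATIO mentioned in that section is set per pool; a minimal sketch, assuming a pool named mypool that is expected to consume about 20% of the cluster:

  # tell the PG autoscaler the expected relative share of this pool
  ceph osd pool set mypool target_size_ratio 0.2
  # review what the autoscaler recommends afterwards
  ceph osd pool autoscale-status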
Steps 6, 7, and 8 look a lot like "ceph osd purge", so unless you have a
very old installation, replace them with that one command.
Apart from that it looks OK.
--
Best regards,
Сергей Цаболов,
System administrator
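For reference, a minimal sketch of the single command referred to above, assuming osd.2 is the OSD being removed (it should already be down and out):

  # removes the OSD from the CRUSH map, deletes its auth key and
  # removes it from the OSD map in one step
  ceph osd purge 2 --yes-i-really-mean-it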
Hello to all.
I have a Proxmox cluster with 7 nodes.
Storage for VM disks and other pool data is on ceph version 15.2.15
(4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable).
On pve-7 I have 10 OSDs, and as a test I want to remove 2 OSDs from this node.
I wrote out some step-by-step commands for how I remove them.
Can someone suggest what I should check in Ceph?
Thanks.
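The usual removal sequence looks roughly like the sketch below; osd.2 is used here only as an example, not necessarily one of the OSDs from the mail:

  # mark the OSD out and let the cluster rebalance its data away
  ceph osd out 2
  # wait until all PGs are active+clean again
  ceph -s
  # stop the OSD daemon on the node
  systemctl stop ceph-osd@2
  # remove it from CRUSH, the OSD map and auth in one step
  ceph osd purge 2 --yes-i-really-mean-it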
On 27.10.2021 12:34, Сергей Цаболов wrote:
Hi,
On 27.10.2021 12:03, Eneko Lacunza wrote:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I make
Thank you!
I destroy it.
On 27.10.2021 13:18, Eugen Block wrote:
Do I need to destroy it, or just stop it?
Destroy it so you only have 5 existing MONs in the cluster.
Quoting Сергей Цаболов:
Do I need to destroy it, or just stop it?
On 27.10.2021 12:09, Eugen Block wrote:
Also note that you need
Do I need to destroy it, or just stop it?
On 27.10.2021 12:09, Eugen Block wrote:
Also note that you need an odd number of MONs to be able to form a
quorum, so I would recommend removing one MON to have 5.
Quoting Eneko Lacunza:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My
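A minimal sketch of removing one monitor; the monitor name pve-6 is only a placeholder here, use the name shown by ceph mon dump:

  # list the monitors and their names
  ceph mon dump
  # remove one monitor from the cluster by name
  ceph mon remove pve-6
  # on a Proxmox node the same can be done with:
  # pveceph mon destroy pve-6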
Hi,
On 27.10.2021 12:03, Eneko Lacunza wrote:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I made 12 OSDs from all the 8 TB disks.
The installed Ceph is ceph version 15.2.14 octopus (stable).
I installed 6
Hello,
My installation of ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I made 12 OSDs from all the 8 TB disks.
The installed Ceph is ceph version 15.2.14 octopus (stable).
I installed 6 monitors (all running) and 6 managers; 1 of them is running
(*active*), all the others are *standby*.
In cep
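A quick way to check the resulting monitor/manager layout (a sketch, no specific names assumed):

  # cluster summary, including mon quorum and mgr active/standby
  ceph -s
  # monitor quorum membership in detail
  ceph quorum_status --format json-pretty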
of PGs.
Regards,
Yury.
On Tue, 26 Oct 2021, 20:20, Сергей Цаболов wrote:
Hello to the community!
I need advice about SPECIFYING EXPECTED POOL SIZE.
On the page
https://docs.ceph.com/en/latest/rados/operations/placement-groups/ I
found the part
SPECIFYING EXPECTED POOL SIZE and can increase some
Hello to the community!
I need advice about SPECIFYING EXPECTED POOL SIZE.
On the page
https://docs.ceph.com/en/latest/rados/operations/placement-groups/ I
found the part
SPECIFYING EXPECTED POOL SIZE and can increase some of my pools to
have more TB.
The command is ceph osd pool set mypool tar
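The docs section being quoted sets an expected size per pool; a minimal sketch, assuming the pool is called mypool and is expected to grow to roughly 100 TB:

  # tell the PG autoscaler the expected eventual size of the pool
  ceph osd pool set mypool target_size_bytes 100T
  # or express it as a ratio relative to other pools with a target ratio set
  ceph osd pool set mypool target_size_ratio 1.0
  # check the autoscaler's resulting recommendations
  ceph osd pool autoscale-status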