Hello,
My installation of Ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I made 12 OSDs from all the 8 TB disks.
The installed Ceph is ceph version 15.2.14 octopus (stable).
I installed 6 monitors (all running) and 6 managers; 1 of them is running
(*active*), all the others are *standby*.
In cep
We are using ceph-ansible to deploy our Ceph clusters (Octopus version) and
passing the required parameters to ceph.conf. We are unable to pass the
parameter to disable PG autoscaling for the existing default pools and also
for new pools.
global osd pool default pg autoscale mode: "off"
osd pool default pg auto
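For what it's worth, a sketch of how this is usually expressed with
ceph-ansible's ceph_conf_overrides (the group_vars location is an assumption
about your layout); note it only affects pools created after the change:

ceph_conf_overrides:                            # e.g. in group_vars/all.yml
  global:
    osd_pool_default_pg_autoscale_mode: "off"

Existing pools still have to be switched per pool, or via the monitors'
central config store:

ceph config set global osd_pool_default_pg_autoscale_mode off
ceph osd pool set <pool-name> pg_autoscale_mode off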
Also note that you need an odd number of MONs to be able to form a
quorum. So I would recommend removing one MON to have 5.
Quoting Eneko Lacunza:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of Ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I
On Wed, 27 Oct 2021 at 11:10, Eugen Block wrote:
>
> Also note that you need an odd number of MONs to be able to form a
> quorum. So I would recommend removing one MON to have 5.
Well, you need to have a distinct majority, so using 6 is not better
than 5, but not worse either.
Both will let the
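To spell out the arithmetic: the MON quorum needs a strict majority of all
monitors, so

quorum size = floor(n / 2) + 1
n = 5  ->  quorum of 3, tolerates 2 MONs down
n = 6  ->  quorum of 4, still tolerates only 2 MONs down

So the sixth MON adds work without adding any failure tolerance.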
Hi,
On 27.10.2021 12:03, Eneko Lacunza wrote:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of Ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I made 12 OSDs from all the 8 TB disks.
The installed Ceph is ceph version 15.2.14 octopus (stable).
I installed 6 monit
Do I need to destroy it, or just stop it?
On 27.10.2021 12:09, Eugen Block wrote:
Also note that you need an odd number of MONs to be able to form a
quorum. So I would recommend removing one MON to have 5.
Quoting Eneko Lacunza:
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My instal
Hi list,
We have a lot of failed instances when rebooting hosts after upgrading the
Ceph client to Pacific.
Out of 160 instances, the failures represent 15-20%.
We have also been impacted by this thread when upgrading the cluster:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/C7FIBFYP32KSWJXL2XSJJKXNMAS
Do I need to destroy it, or just stop it?
Destroy it so you only have 5 existing MONs in the cluster.
Quoting Сергей Цаболов:
Do I need to destroy it, or just stop it?
On 27.10.2021 12:09, Eugen Block wrote:
Also note that you need an odd number of MONs to be able to form a
quorum. So I would r
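For reference, a rough sketch of removing the extra monitor; the node name
pve6 is made up, and on Proxmox the pveceph wrapper is normally used instead
of the raw ceph command:

pveceph mon destroy pve6                  # Proxmox helper, for the node hosting the extra MON
# or with plain Ceph tooling:
ceph mon remove pve6
ceph quorum_status --format json-pretty   # confirm 5 MONs remain and quorum is healthy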
Thank you!
I destroyed it.
On 27.10.2021 13:18, Eugen Block wrote:
Do I need to destroy it, or just stop it?
Destroy it so you only have 5 existing MONs in the cluster.
Quoting Сергей Цаболов:
Do I need to destroy it, or just stop it?
On 27.10.2021 12:09, Eugen Block wrote:
Also note that you need
Hi Lanore,
as I've already mentioned in my last reply to
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XDMISQC74Z67RXP2PJHERARJ7KT2ADW4/
there is a bug in BlueStore's quick-fix/repair in OSDs upgraded to
Pacific. It looks like omap records are getting improper keys...
I'm wo
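For anyone following the thread, the workaround that has been circulating
until a fixed release is out is to stop the quick-fix conversion from running
when OSDs start; a sketch, assuming the central config store is in use:

ceph config set osd bluestore_fsck_quick_fix_on_mount false
ceph config get osd bluestore_fsck_quick_fix_on_mount   # verify it took effect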
Hi Edward,
You don't need to worry about keeping those certs persistently, the Ceph
Dashboard does that for you (they're persisted inside the ceph-mon KV
store). You just need to ensure that the paths you provide are reachable to
the ceph-mgr daemon. And I agree: that's a bit tricky with container
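As an illustration, a minimal sketch of loading the certificate and key from
files on the host running the active mgr (the paths are placeholders):

ceph dashboard set-ssl-certificate -i /root/dashboard.crt
ceph dashboard set-ssl-certificate-key -i /root/dashboard.key
# restart the dashboard module so the new certificate is picked up:
ceph mgr module disable dashboard
ceph mgr module enable dashboard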
Hi, thanks for the reply. I hadn't received your answer because my
subscription delivery was set to None: fixed.
You are right. We have provisioned other OSDs to fix our cluster for the moment.
Waiting for a fix.
thanks
Ronan Lanore
Hi,
On 27/10/21 at 9:55, Сергей Цаболов wrote:
My installation of Ceph is:
6 Proxmox nodes with 2 disks (8 TB) on every node.
I made 12 OSDs from all the 8 TB disks.
The installed Ceph is ceph version 15.2.14 octopus (stable).
I installed 6 monitors (all running) and 6 managers; 1 of them runni
Sorry, I saw your answer too late; you're right, of course. I had
internalised the "odd number of MONs for quorum" idea, and sometimes it's
difficult to get these things out. ;-)
Quoting Janne Johansson:
On Wed, 27 Oct 2021 at 11:10, Eugen Block wrote:
Also note that you need an odd number of MONs t
Thank you for the reply. Even if there’s a good reason for the CLI tool to not
send the contents of the files, I feel like the docs should at least have “this
is how we recommend you handle this if you’re using a containerized (e.g.
cephadm) deployment”.
Speaking of which, do you have any spec
DocuBetter meetings are now cancelled in perpetuity.
These meetings are cancelled because they are sparsely attended, and the
few people who do attend them are in more frequent contact with Zac through
channels that are not the DocuBetter meeting.
Zac is available to field documentation-related r
On 10/25/21 4:45 AM, Stefan Kooman wrote:
> On 10/20/21 21:57, David Galloway wrote:
>> We're happy to announce the 15th backport release in the Octopus series.
>> We recommend users to update to this release.
>
> ...
>
>> Getting Ceph
>>
>> * Git at git://github.com/ceph/ceph.git
OSD flapping was due to ports being blocked by the firewall.
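In case it helps anyone else, a sketch of opening the standard Ceph ports,
assuming firewalld (other firewalls need equivalent rules for 3300, 6789 and
6800-7300/tcp):

firewall-cmd --permanent --add-service=ceph-mon   # MON ports 3300 and 6789
firewall-cmd --permanent --add-service=ceph       # OSD/MGR/MDS ports 6800-7300
firewall-cmd --reload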
While mounting the file system, the directory structure shows the CSI volume
subfolders,
as in /tmp/cephFS/csi/csi-vol-/container-name-log.
Is there a way to not show the CSI volumes in the path to the container log?
As an example:
/tmp/cephFS
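One way to get the shorter path, at least for manual mounts, is to mount the
subvolume directory itself rather than the CephFS root; a sketch with a
made-up monitor address and volume name:

mount -t ceph 192.168.1.10:6789:/csi/csi-vol-example /tmp/cephFS \
      -o name=admin,secretfile=/etc/ceph/admin.secret
# the log then shows up as /tmp/cephFS/container-name-log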
Hi,
We have a couple of buckets used for media transcoding scratch space that
have huge lists under "ver" and "master_ver" when doing a "radosgw-admin
bucket stats".
Can anyone tell me what these version lists are for? I saw that Casey said
they're not related to bucket versioning, but I'm just
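For reference, those lists are printed per bucket index shard in the stats
output; a hypothetical way to pull out just those two fields (bucket name made
up, jq assumed to be installed):

radosgw-admin bucket stats --bucket=transcode-scratch | jq '{ver: .ver, master_ver: .master_ver}'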
Is there any command or log I can provide a sample from that would help to
pinpoint the issue? 119 of the 120 OSDs are working correctly by all
accounts, but I am just unable to bring the last one fully online.
Thank you,
On Tue, Oct 26, 2021 at 3:59 PM Marco Pizzolo
wrote:
> Thanks f
(oops, forgot to reply-all)
On Wed, Oct 27, 2021 at 12:58 PM Trey Palmer wrote:
>
> Hi,
>
> We have a couple of buckets used for media transcoding scratch space that
> have huge lists under "ver" and "master_ver" when doing a "radosgw-admin
> bucket stats".
>
> Can anyone tell me what these versi
Hi Marco, the log lines are truncated. I recommend sending the logs to a
file rather than copying from the terminal:
cephadm logs --name osd.13 > osd.13.log
I see “read stalled” in the log. Just a guess, can you check the kernel logs
and the SMART info to see if there is something wrong with th
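To make that concrete, the sort of checks meant here (sdX is a placeholder for
the device behind the stalled OSD):

smartctl -a /dev/sdX     # full attribute dump, more telling than the bare -H health check
dmesg -T | grep -i -E 'sdX|I/O error|blk_update_request'   # kernel-side read errors or resets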
Thanks Hu Weiwen,
These hosts and drives are perhaps 2 months old or so, and this is the
first cluster we have built on them, so I was not anticipating a drive issue
already.
The smartmontools show:
root@:~# smartctl -H /dev/sdag
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.11.0-38-generic] (local bu
Jonas, would you be interested in joining one of our performance
meetings and presenting some of your work there? Seems like we can
have a good discussion about further improvements to the balancer.
Thanks,
Neha
On Mon, Oct 25, 2021 at 11:39 AM Josh Salomon wrote:
>
> Hi Jonas,
>
> I have some c