Dear gents,
to get familiar with the cephadm upgrade path and with cephadm in general (we
heavily use old-style "ceph-deploy" Octopus-based production clusters), we
decided to do some tests with a vanilla cluster running 15.2.11 on CentOS 8 on
top of vSphere. Deployment of the Octopus cluster went very well and we
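For anyone following the same exercise, the cephadm side of such a test would look roughly like this (host names, daemon IDs and the target release below are placeholders, not values from this cluster):

  # adopt an existing ceph-deploy style daemon into cephadm management
  cephadm adopt --style legacy --name mon.node1
  cephadm adopt --style legacy --name osd.0

  # once the orchestrator is in charge, drive the upgrade from one node
  ceph orch upgrade start --ceph-version 16.2.1
  ceph orch upgrade status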
Thank you for the command. I successfully stopped and started the mgr daemon on
that node, but the version number on the Ceph dashboard is still stuck at the
old version, 15.2.10. On that node I also have the mon daemon running; should I
restart the mon as well?
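If it helps, the following should show which versions are actually running and restart a single daemon under cephadm (the daemon name "mgr.node1" is just a placeholder):

  ceph versions                       # per-daemon-type version summary
  ceph orch ps --daemon-type mgr      # each mgr daemon and its running version
  ceph orch daemon restart mgr.node1  # restart one specific daemon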
‐‐‐ Original Message ‐‐‐
On Thurs
I'm running some specialized routing in my environment such that CIDR detection is failing when trying to add monitors. Is there a way to specify the monitor IP address to bind to when adding a monitor if "public_network = 0.0.0.0/0"? Setting "public_network = 0.0.0.0/0" is the only way I could find
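One thing worth noting: cephadm does accept an explicit IP or subnet when adding a monitor, roughly like this (host names and addresses below are placeholders):

  # stop the orchestrator from placing mons automatically
  ceph orch apply mon --unmanaged

  # add a mon pinned to a specific address, or to a specific subnet
  ceph orch daemon add mon mon2:192.168.10.12
  ceph orch daemon add mon mon3:192.168.10.0/24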
Hello,
Is it only me that's getting an internal error when trying to create issues in
the bug tracker for the past day or two?
https://tracker.ceph.com/issues/new
Best regards
Can you collect the output of this command on all 4 servers while your
test is running:
iostat -mtxy 1
This should show how busy the CPUs are as well as how busy each drive is.
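If it's easier to collect, you can redirect it to a file on each host while the benchmark runs, for example:

  iostat -mtxy 1 > /tmp/iostat_$(hostname -s).log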
On Thu, Apr 29, 2021 at 7:52 AM Schmid, Michael wrote:
>
> Hello folks,
>
> I am new to ceph and at the moment I am d
At the moment I'm using "ceph orch mon apply mon1,mon2,mon3" and hostnames "mon1,mon2,mon3" on all nodes resolve to the IP address I would like the monitor to bind to.
mon1 is the initial bootstrap monitor, which is created with "--mon-ip" (it in turn binds to the appropriate IP).
Is there
Hello All,
I was running 15.2.8 via cephadm on Docker (Ubuntu 20.04). I just attempted to
upgrade to 16.2.1 via the automated method. It successfully upgraded the
mon/mgr/mds daemons and some OSDs; however, it then failed on an OSD and hasn't
been able to get past it, even after stopping and restarting the upgrade. I
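A few commands that usually help to see where an upgrade is stuck (nothing here is specific to this cluster):

  ceph orch upgrade status          # target version and current progress
  ceph -s                           # overall health and blocking warnings
  ceph log last cephadm             # recent orchestrator log entries
  ceph orch ps --daemon-type osd    # which OSDs are still on the old version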
On 29/04/2021 11:52 pm, Schmid, Michael wrote:
> I am new to ceph and at the moment I am doing some performance tests with a 4
> node ceph-cluster (pacific, 16.2.1).
Ceph doesn't do well with small numbers; 4 OSDs is really marginal.
Your latency isn't crash hot either. What size are you running
Have you checked for disk failure? dmesg, smartctl etc. ?
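For example (replace /dev/sdX with the device behind the affected OSD):

  dmesg -T | grep -iE 'error|fail|reset'   # kernel-level I/O errors
  smartctl -a /dev/sdX                     # SMART health and error counters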
Quoting "Robert W. Eckert":
I worked through that workflow, but it seems like the one monitor
will run for a while - anywhere from an hour to a day - then just stop.
This machine is running on AMD hardware (3600X CPU on X570 chipset
Nothing is appearing in dmesg. Smartctl shows no issues either.
I did find this issue https://tracker.ceph.com/issues/24968 which showed
something that may be memory related, so I will try testing that next.
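When it stops again, the systemd journal for that mon usually says why, and memtester is a quick userspace memory check (the fsid and hostname in the unit name are placeholders, and memtester needs to be installed separately):

  journalctl -u ceph-<fsid>@mon.<hostname>.service --since "1 day ago"
  memtester 4G 1    # test 4 GiB of RAM for one pass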
-Original Message-
From: Eugen Block
Sent: Friday, April 30, 2021 1:36 PM
I'm trying to set up a new ceph cluster, and I've hit a bit of a blank.
I started off with CentOS 7 and cephadm. It worked fine up to a point, except I
had to upgrade podman, but it mostly worked with Octopus.
Since this is a fresh cluster and hence there is no data at risk, I decided to
jump straight into Pacific
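With no data at risk, a clean Pacific bootstrap on a supported OS is usually the least painful route; something along these lines (the IP and host names are placeholders):

  # on the first host
  cephadm bootstrap --mon-ip 192.168.10.11
  # then add the remaining hosts
  ceph orch host add node2
  ceph orch host add node3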
Hello,
I noticed a couple of unanswered questions on this topic from a while back.
It seems worth asking, however, whether adjusting either or both of the
subject attributes could improve performance with large HDD OSDs (mine are
12 TB SAS).
In the previous posts on this topic the writers indicated t
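Whatever the attributes in question turn out to be, the usual pattern for inspecting and changing an OSD setting cluster-wide is shown below (osd_memory_target is only an example option, not necessarily the one discussed here):

  ceph config get osd osd_memory_target              # current value
  ceph config set osd osd_memory_target 8589934592   # set 8 GiB for all OSDs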
I've had good luck with the Ubuntu LTS releases - no need to add extra
repos. 20.04 uses Octopus.
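You can confirm which Ceph release the distro repositories carry before committing, e.g.:

  apt-cache policy ceph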
On Fri, Apr 30, 2021 at 1:14 PM Peter Childs wrote:
>
> I'm trying to set up a new ceph cluster, and I've hit a bit of a blank.
>
> I started off with centos7 and cephadm. Worked fine to a point, ex