[ceph-users] cephadm upgrade from v15.11 to pacific fails all the times

2021-04-30 Thread Ackermann, Christoph
Dear gents, to get handy with the cephadm upgrade path and in general (we heavily use old-style "ceph-deploy" Octopus-based production clusters), we decided to do some tests with a vanilla cluster running 15.2.11 based on CentOS 8 on top of vSphere. Deployment of the Octopus cluster runs very well and we
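For readers following the same path, the orchestrator-driven upgrade is usually kicked off like this (a sketch; the target version is an example, and the cluster should be healthy before starting):

```shell
# Confirm the orchestrator backend is available and the cluster is healthy
ceph orch status
ceph -s

# Start the staggered upgrade to a specific release
ceph orch upgrade start --ceph-version 16.2.1

# Watch progress; cephadm upgrades mgr/mon daemons first, then OSDs
ceph orch upgrade status
```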

[ceph-users] Re: Host ceph version in dashboard incorrect after upgrade

2021-04-30 Thread mabi
Thank you for the command. I successfully stopped and started the mgr daemon on that node, but the version number on the ceph dashboard is still stuck on the old version 15.2.10. On that node I also have the mon daemon running, should I also restart mon?
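It can help to compare what each daemon actually reports against what the dashboard shows; a sketch using standard commands (the daemon name is an example):

```shell
# Show the version every running daemon reports, grouped by daemon type
ceph versions

# Restart the mon on that host via the orchestrator (cephadm deployments)
ceph orch daemon restart mon.node1

# The dashboard is served by the active mgr; failing over to a standby
# forces it to refresh its cached state (older releases require naming
# the active mgr explicitly: ceph mgr fail <mgr-name>)
ceph mgr fail
```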

[ceph-users] Specify monitor IP when CIDR detection fails

2021-04-30 Thread Stephen Smith6
I'm running some specialized routing in my environment such that CIDR detection is failing when trying to add monitors. Is there a way to specify the monitor IP address to bind to when adding a monitor if "public_network = 0.0.0.0/0"? Setting "public_network = 0.0.0.0/0" is the only way I could fin
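With cephadm, CIDR auto-detection can be sidestepped entirely by pinning each monitor to an address using the documented host:ip form; a sketch (hostnames and IPs are examples):

```shell
# Stop cephadm from scheduling mons automatically
ceph orch apply mon --unmanaged

# Add each monitor with an explicit IP address to bind to
ceph orch daemon add mon mon2:192.168.10.12
ceph orch daemon add mon mon3:192.168.10.13
```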

[ceph-users] Cannot create issue in bugtracker

2021-04-30 Thread Tobias Urdin
Hello, Is it only me that's getting an Internal error when trying to create issues in the bugtracker for a day or two? https://tracker.ceph.com/issues/new Best regards

[ceph-users] Re: Performance questions - 4 node (commodity) cluster - what to expect (and what not ;-)

2021-04-30 Thread Mark Lehrer
Can you collect the output of this command on all 4 servers while your test is running: iostat -mtxy 1 This should show how busy the CPUs are as well as how busy each drive is.

[ceph-users] Re: Specify monitor IP when CIDR detection fails

2021-04-30 Thread Stephen Smith6
At the moment I'm using "ceph orch mon apply mon1,mon2,mon3" and the hostnames "mon1,mon2,mon3" on all nodes resolve to the IP address I would like the monitor to bind to. mon1 is the initial bootstrap monitor which is being created with "--mon-ip" (it in turn binds to the appropriate IP). Is there

[ceph-users] Failed cephadm Upgrade - ValueError

2021-04-30 Thread Ashley Merrick
Hello All, I was running 15.2.8 via cephadm on Docker (Ubuntu 20.04). I just attempted to upgrade to 16.2.1 via the automated method; it successfully upgraded the mon/mgr/mds and some OSDs, however it then failed on an OSD and hasn't been able to pass even after stopping and restarting the upgrade. I
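When an upgrade wedges on one daemon like this, the usual loop is to pause it, inspect cephadm's log channel for the traceback, and resume; a sketch (the version is taken from the report above):

```shell
# Pause the stuck upgrade
ceph orch upgrade stop

# The ValueError traceback usually shows up in the cephadm log channel
ceph log last 100 info cephadm
ceph health detail

# Resume once the offending host/daemon is sorted out
ceph orch upgrade start --ceph-version 16.2.1
```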

[ceph-users] Re: Performance questions - 4 node (commodity) cluster - what to expect (and what not ;-)

2021-04-30 Thread Lindsay Mathieson
On 29/04/2021 11:52 pm, Schmid, Michael wrote: > I am new to ceph and at the moment I am doing some performance tests with a 4 node ceph-cluster (pacific, 16.2.1). Ceph doesn't do well with small numbers; 4 OSDs is really marginal. Your latency isn't crash hot either. What size are you running

[ceph-users] Re: one of 3 monitors keeps going down

2021-04-30 Thread Eugen Block
Have you checked for disk failure? dmesg, smartctl etc.? Quoting "Robert W. Eckert": I worked through that workflow- but it seems like the one monitor will run for a while - anywhere from an hour to a day, then just stop. This machine is running on AMD hardware (3600X CPU on X570 chipse
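For reference, the kind of checks suggested here look roughly like this (the device name is an example):

```shell
# Kernel-level I/O errors, with human-readable timestamps
dmesg -T | grep -iE 'error|fail|ata|blk'

# SMART overall health summary for the mon host's disk
smartctl -H /dev/sda

# Full SMART attribute dump (reallocated sectors, pending sectors, etc.)
smartctl -a /dev/sda
```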

[ceph-users] Re: one of 3 monitors keeps going down

2021-04-30 Thread Robert W. Eckert
Nothing is appearing in dmesg. Smartctl shows no issues either. I did find this issue https://tracker.ceph.com/issues/24968 which showed something that may be memory related, so I will try testing that next.

[ceph-users] Best distro to run ceph.

2021-04-30 Thread Peter Childs
I'm trying to set up a new ceph cluster, and I've hit a bit of a blank. I started off with CentOS 7 and cephadm. Worked fine to a point, except I had to upgrade podman, but it mostly worked with Octopus. Since this is a fresh cluster and hence no data at risk, I decided to jump straight into Pacifi

[ceph-users] Large OSD Performance: osd_op_num_shards, osd_op_num_threads_per_shard

2021-04-30 Thread Dave Hall
Hello, I noticed a couple unanswered questions on this topic from a while back. It seems, however, worth asking whether adjusting either or both of the subject attributes could improve performance with large HDD OSDs (mine are 12TB SAS). In the previous posts on this topic the writers indicated t
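For anyone experimenting, these knobs can be set centrally via the config database; a sketch (values are illustrative, not recommendations, and OSDs must be restarted to pick them up):

```shell
# Inspect the current values on a given OSD
ceph config get osd.0 osd_op_num_shards
ceph config get osd.0 osd_op_num_threads_per_shard

# Recent releases also carry per-device-class variants; e.g. for HDDs:
ceph config set osd osd_op_num_shards_hdd 8
ceph config set osd osd_op_num_threads_per_shard_hdd 2

# Restart the OSD daemons (e.g. systemctl restart ceph-osd@<id>)
# for the change to take effect
```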

[ceph-users] Re: Best distro to run ceph.

2021-04-30 Thread Mark Lehrer
I've had good luck with the Ubuntu LTS releases - no need to add extra repos. 20.04 uses Octopus.