[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
I would not use Ceph packages shipped from a distribution but always the ones from download.ceph.com, or even better the container images that come with the orchestrator. Which version do your other Ceph nodes run on? Regards -- Robert Sander Heinlein Support GmbH Linux: Akademie - Support - Ho

[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
Upgrade the Ceph packages. download.ceph.com has packages for Ubuntu 22.04 and nothing for 24.04. Therefore I would assume Ubuntu 24.04 is not a supported platform for Ceph (unless you use the cephadm orchestrator and containers). BTW: Please keep the discussion on the mailing list. Regards -- Rob

[ceph-users] Re: Slow down RGW updates via orchestrator

2024-06-26 Thread Robert Sander
Hi, On 6/26/24 11:49, Boris wrote: Is there a way to only update 1 daemon at a time? You can use the feature "staggered upgrade": https://docs.ceph.com/en/reef/cephadm/upgrade/#staggered-upgrade Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Ber
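
For reference, a staggered upgrade limited to one RGW daemon at a time might look like this (the image tag is only an example):

  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 --daemon-types rgw --limit 1
  ceph orch upgrade status
  # once that daemon is healthy, continue with the remaining ones
  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 --daemon-types rgw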

[ceph-users] Re: cannot delete service by ceph orchestrator

2024-06-29 Thread Robert Sander
create any new OSDs. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 220009 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Hein

[ceph-users] use of db_slots in DriveGroup specification?

2024-07-10 Thread Robert Sander
/thread/6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC/#6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC Shouldn't db_slots make that easier? Is this a bug in the orchestrator? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-1
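
For context, an OSD service specification where db_slots would apply looks roughly like this per the documentation (device filters and numbers are illustrative; the thread discusses whether db_slots is honoured at all):

  service_type: osd
  service_id: hdd-with-shared-db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    db_slots: 5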

[ceph-users] Re: Use of db_slots in DriveGroup specification?

2024-07-11 Thread Robert Sander
Hi, On 7/11/24 09:01, Eugen Block wrote: apparently, db_slots is still not implemented. I just tried it on a test cluster with 18.2.2: I am thinking about a PR to correct the documentation. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-12 Thread Robert Sander
I suggest using Ubuntu 22.04 LTS as the base operating system. You can use cephadm on top of that without issues. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-

[ceph-users] Re: Cephadm has a small wart

2024-07-19 Thread Robert Sander
based on CentOS 8. When you execute "cephadm shell" it starts a container with that image for you. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Char

[ceph-users] Re: How to specify id on newly created OSD with Ceph Orchestrator

2024-07-22 Thread Robert Sander
On 7/23/24 08:24, Iztok Gregori wrote: Am I missing something obvious, or is there no way with the Ceph orchestrator to specify an id during OSD creation? Why would you want to do that? A new OSD always gets the lowest available ID. Regards -- Robert Sander Heinlein Consulting GmbH

[ceph-users] Re: Bluestore issue using 18.2.2

2024-08-05 Thread Robert Sander
Hi Marianne, is there anything in the kernel logs of the VMs and the hosts where the VMs are running with regard to the VM storage? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-05 Thread Robert Sander
On 05.08.24 18:38, Nicola Mori wrote: docker.io/snack14/ceph-wizard This is not an official container image. The images from the Ceph project are on quay.io/ceph/ceph. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel

[ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-02 Thread Robert Sander
. IMHO you will have to redeploy the OSD to use LVM on the disk. It does not need to be the whole disk if there is other data on it. It should be sufficient to make /dev/sdb1 a PV of a new VG for the LV of the OSD. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin
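
A minimal sketch of the LVM steps described above, assuming /dev/sdb1 is the partition in question (names are examples):

  pvcreate /dev/sdb1
  vgcreate ceph-db /dev/sdb1
  lvcreate -n db-osd12 -l 100%FREE ceph-db
  # then redeploy the OSD with ceph-db/db-osd12 as its DB device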

[ceph-users] Re: [RGW][cephadm] How to configure RGW as code and independantely of daemon names ?

2024-09-11 Thread Robert Sander
om/en/reef/rados/configuration/ceph-conf/#monitor-configuration-database Kindest Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsf

[ceph-users] Re: Ceph as a distributed filesystem and kerberos integration

2020-10-02 Thread Robert Sander
rsonate any User ID locally. The recommended way is to run a Samba cluster using CephFS as backend. Your users would then authenticate against Samba which would need to speak to your LDAP/Kerberos. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinl

[ceph-users] Re: CephFS user mapping

2020-10-06 Thread Robert Sander
to map that onto user name and group name. What you use for consistent mappings between your CephFS clients is up to you. It could be NIS, libnss-ldap, winbind (Active Directory) or any other method that keeps the passwd and group files in sync. Regards -- Robert Sander Heinlein Support GmbH Schwedt

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Robert Sander
ed on one node, i.e. the distribution must support Docker or podman. cephadm sets up a containerized Ceph cluster with containers based on CentOS. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /

[ceph-users] Re: Does it make sense to have separate HDD based DB/WAL partition

2020-11-03 Thread Robert Sander
> partition? If you do not have faster devices for DB/WAL there is no need to create them. It does not make the OSD faster. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsge

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-11 Thread Robert Sander
encing most of the > ops and repair tasks for the first time here. My condolences. Get the data from that cluster and put the cluster down. In the current setup it will never work. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.d

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-11 Thread Robert Sander
t 7 to 10 nodes and a corresponding number of OSDs. This cluster is too small to do any amount of "real" work. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben l

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
rge number of nodes (more than 10) and a proportional number of OSDs. Mixed HDDs and SSDs in one pool is not good practice as a pool should have OSDs of the same speed. Kindest Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
On 11.11.20 at 13:05 Hans van den Bogert wrote: > And also the erasure coded profile, so an example on my cluster would be: > > k=2 > m=1 With this profile you can only lose one OSD at a time, which is really not that redundant. Regards -- Robert Sander Heinlein Support GmbH S

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-17 Thread Robert Sander
ot=default k=2 m=2 You need k+m=4 independent hosts for the EC parts, but your CRUSH map only shows two hosts. This is why all your PGs are undersized and degraded. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-4
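
For reference, an EC profile with the host as failure domain would be created like this (profile name is an example):

  ceph osd erasure-code-profile set ec-k2m2 k=2 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec-k2m2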

[ceph-users] Re: Ceph on ARM ?

2020-11-24 Thread Robert Sander
com.tw/ Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz:

[ceph-users] Re: Clearing contents of OSDs without removing them?

2020-12-19 Thread Robert Sander
ls also removes the objects and you can start new. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschä

[ceph-users] bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
[truncated benchmark table: 4 MiB random-I/O results] Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-4

[ceph-users] Re: bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
Hi Marc and Dan, thanks for your quick responses assuring me that we did nothing totally wrong. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B

[ceph-users] Python API mon_comand()

2021-01-15 Thread Robert Sander
"name":"rbd","id":1,"stats":{"stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data",

[ceph-users] Re: Large rbd

2021-01-21 Thread Robert Sander
nked together using lvm or somesuch? What are the tradeoffs? IMHO there are no tradeoffs, there could even be benefits creating a volume group with multiple physical volumes on RBD as the requests can be better parallelized (i.e. virtio-single SCSI controller for qemu). Regards -- Robert San

[ceph-users] Re: Unable to use ceph command

2021-01-29 Thread Robert Sander
(error connecting to the cluster) This issue is mostly caused by not having a readable ceph.conf and ceph.client.admin.keyring file in /etc/ceph for the user that starts the ceph command. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-su
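
A quick check, assuming the default locations:

  ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
  # the keyring must be readable by the user running the ceph command,
  # otherwise run it with sudo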

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-04 Thread Robert Sander
Hi, On 04.02.21 at 12:10 Frank Schilder wrote: > Going to 2+2 EC will not really help On such a small cluster you cannot even use EC because there are not enough independent hosts. As a rule of thumb there should be k+m+1 hosts in a cluster AFAIK. Regards -- Robert Sander Heinlein Supp

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
in the cluster. You need ports 3300 and 6789 for the MONs on their IPs and any dynamic port starting at 6800 used by the OSDs. The MDS also uses a port above 6800. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 4050

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
On 10.02.21 at 15:54 Frank Schilder wrote: > Which ports are the clients using - if any? All clients only have outgoing connections and do not listen on any ports themselves. The Ceph cluster will not initiate a connection to the client. Kindest Regards -- Robert Sander Heinlein Support G

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
0G bonded interfaces in the cluster network? I would assume that you would want to go at least 2x 25G here. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HR

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
On 10.03.21 at 20:44 Ignazio Cassano wrote: > 1 small ssd is for the operating system and 1 is for mon. Make that a RAID1 set of SSDs and be happier. ;) Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43

[ceph-users] Re: How big an OSD disk could be?

2021-03-12 Thread Robert Sander
On 12.03.21 at 18:30 huxia...@horebdata.cn wrote: > Any other aspects on the limits of bigger capacity hard disk drives? Recovery will take longer, increasing the risk of another failure during that time. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin h

[ceph-users] Re: lvm fix for reseated device

2021-03-15 Thread Robert Sander
ready rebooted the box so I won't be able to > test immediately.) My experience with LVM is that only a reboot helps in this situation. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /

[ceph-users] OpenSSL security update for Octopus container?

2021-03-26 Thread Robert Sander
check docker.io/ceph/ceph:v15" but it tells me that the containers do not need to be upgraded. How will this security fix of OpenSSL be deployed in a timely manner to users of the Ceph container images? Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://ww

[ceph-users] Re: Is metadata on SSD or bluestore cache better?

2021-04-05 Thread Robert Sander
DB volumes and one OSD on each SSD. HDD-only OSDs are quite slow. If you do not have enough SSDs for them, go with an SSD-only cephfs metadata pool. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /

[ceph-users] Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
pected condition which prevented it from fulfilling the request.", "request_id": "e89b8519-352f-4e44-a364-6e6faf9dc533"} '] I have no r

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-05 Thread Robert Sander
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed > to start datalog_rados service ((5) Input/output error) > bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed > to init services (ret=(5) Input/output error) I see the same issues on a

[ceph-users] Re: Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
Hi, I forgot to mention that CephFS is enabled and working. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 93818 B Geschäftsführer: Peer

[ceph-users] Re: Problem using advanced OSD layout in octopus

2021-04-06 Thread Robert Sander
Hi, The DB device needs to be empty for an automatic OSD service. The service will then create N db slots using logical volumes and not partitions. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-12 Thread Robert Sander
So when you have a Ceph cluster with Rados-Gateways you should not upgrade to Pacific currently. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 9381

[ceph-users] Re: cephadm custom mgr modules

2021-04-12 Thread Robert Sander
Hi, this is one of the use cases mentioned in Tim Serong's talk: https://youtu.be/pPZsN_urpqw Containers are great for deploying a fixed state of a software project (a release), but not so much for the development of plugins etc. Regards -- Robert Sander Heinlein Support GmbH Schwedte

[ceph-users] ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi, # docker pull ceph/ceph:v16.2.1 Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit How do I update a Ceph cluster in this situation? Regards -- Robert

[ceph-users] Re: ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi, On 21.04.21 at 10:14 Robert Sander wrote: > How do I update a Ceph cluster in this situation? I learned that I need to create an account on the website hub.docker.com to be able to download Ceph container images in the future. With the credentials I need to run "docker login&
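
A sketch of what that looks like on a cephadm cluster (account name and password are placeholders):

  docker login -u <dockerhub-user> docker.io
  ceph cephadm registry-login docker.io <dockerhub-user> <password>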

[ceph-users] After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
ied (error connecting to the cluster) What should I do? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Char

[ceph-users] Re: After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
On 22.04.21 at 09:07 Robert Sander wrote: > What should I do? I should also upgrade the CLI client, which was still at 15.2.8 (Ubuntu 20.04), because a "ceph orch upgrade" run only updates the software inside the containers. Regards -- Robert Sander Heinlein Consulting GmbH Schwed
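
On Ubuntu/Debian the client packages would be upgraded separately, e.g. (package selection may vary per setup):

  apt update
  apt install --only-upgrade ceph-common cephadm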

[ceph-users] Download-Mirror eu.ceph.com misses Debian Release file

2021-04-22 Thread Robert Sander
Hi, to whomever it may concern: The mirror server eu.ceph.com does not carry the Release files for 15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in https://eu.ceph.com/debian-16.2.1/dists/*/ Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
h map. It looks like the OSD is the failure zone, and not the host. If it were the host, the failure of any number of OSDs in a single host would not bring PGs down. For the default redundancy rule and pool size 3 you need three separate hosts. Regards -- Robert Sander Heinlein Consulting GmbH

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
the mds suffer when only 4% of the osd goes > down (in the same node). I need to modify the crush map? With an unmodified crush map and the default placement rule this should not happen. Can you please show the output of "ceph osd crush rule dump"? Regards -- Robert Sander Hein

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
will lead to data loss or at least temporary unavailability. The situation is now that all copies (resp. EC chunks) for a PG are stored on OSDs of the same host. These PGs will be unavailable if the host is down. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-06 Thread Robert Sander
On 06.05.21 at 17:18 Sage Weil wrote: > I hit the same issue. This was a bug in 16.2.0 that wasn't completely > fixed, but I think we have it this time. Kicking off a 16.2.3 build > now to resolve the problem. Great. I also hit that today. Thanks for fixing it quickly. Rega

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-07 Thread Robert Sander
I had success with stopping the "looping" mgr container via "systemctl stop" on the node. Cephadm then switches to another MGR to continue the upgrade. After that I just started the stopped mgr container and the upgrade continued. Regards -- Robert Sander Heinlein Consulting GmbH S

[ceph-users] Re: Failover with 2 nodes

2021-06-15 Thread Robert Sander
On 15.06.21 15:16, nORKy wrote: > Why is there no failover ?? Because only one MON out of two is not in the majority to build a quorum. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Robert Sander
could theoretically RAID0 multiple disks and then put an OSD on top of that but this would create very large OSDs which are not good for recovering data. Recovering such a "beast" just would take too long. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http

[ceph-users] Re: pacific installation at ubuntu 20.04

2021-06-24 Thread Robert Sander
ssing between these two steps. The first creates /etc/apt/sources.list.d/ceph.list and the second installs packages, but the repo list was never updated. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 0
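
The missing step is simply refreshing the package lists, i.e. something like:

  # after /etc/apt/sources.list.d/ceph.list has been created
  apt update
  apt install cephadm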

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-26 Thread Robert Sander
lding and hosting for open source projects is solved with the openSUSE build service: https://build.opensuse.org/ But I think what Sage meant was e.g. different versions of GCC on the distributions and not being able to use all the latest features needed for compiling Ceph. Regards -- Robe

[ceph-users] Unhandled exception from module 'devicehealth' while running on mgr.al111: 'NoneType' object has no attribute 'get'

2021-06-30 Thread Robert Sander
30 16:07:09 al111 bash[171790]: File "/usr/share/ceph/mgr/devicehealth/module.py", line 33, in get_ata_wear_level Jun 30 16:07:09 al111 bash[171790]: if page.get("number") != 7: Jun 30 16:07:09 al111 bash[171790]: AttributeError: 'NoneType' object has no attribute '

[ceph-users] RocksDB resharding does not work

2021-07-08 Thread Robert Sander
8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+ 7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error How do I correct the issue? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405

[ceph-users] Re: Size of cluster

2021-08-09 Thread Robert Sander
have 3 nodes each with 5x 12TB (60TB) and 2 nodes each with 4x 18TB (72TB), the maximum usable capacity will not be the sum of all disks. Remember that Ceph tries to distribute the data evenly. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein

[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-10 Thread Robert Sander
daemons (outside of osds I believe) from offline hosts. Sorry for maybe being rude but how on earth does one come up with the idea to automatically remove components from a cluster where just one node is currently rebooting without any operator interference? Regards -- Robert Sander Heinlein

[ceph-users] Re: How to safely turn off a ceph cluster

2021-08-11 Thread Robert Sander
h cluster? ceph osd set noout and after the cluster has been booted again and every OSD joined: ceph osd unset noout Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charl

[ceph-users] Re: A simple erasure-coding question about redundance

2021-08-27 Thread Robert Sander
heavy. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz: B

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
of block devices with the same size distribution in each node you will get an even data distribution. If you have a node with 4 3TB drives and one with 4 6TB drives Ceph cannot use the 6TB drives efficiently. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
w the data distribution among the OSDs. Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD? HDD only will have abysmal performance. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /

[ceph-users] Re: Performance optimization

2021-09-07 Thread Robert Sander
ll be faster, to write it to just one ssd, instead of writing it to the disk directly. Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-4

[ceph-users] Re: SSDs/HDDs in ceph Octopus

2021-09-10 Thread Robert Sander
Pools should have a uniform class of storage. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin-Charlottenburg, Geschäftsf

[ceph-users] Re: Ignore Ethernet interface

2021-09-13 Thread Robert Sander
this. The Linux kernel will happily answer ARP requests on any interface for the IPs it has configured anywhere. That means you have a constant ARP flapping in your network. Make the three interfaces bonded and configure all three IPs on the bonded interface. Regards -- Robert Sander Heinlein

[ceph-users] Re: Ignore Ethernet interface

2021-09-14 Thread Robert Sander
work as the same IP subnet cannot span multiple broadcast domains. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht Berlin

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
g Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz: Berlin __

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
Hi, I had to run ceph fs set cephfs max_mds 1 ceph fs set cephfs allow_standby_replay false and stop all MDS and NFS containers and start one after the other again to clear this issue. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
I just run ceph orch upgrade start Why does the orchestrator not run the necessary steps? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB

[ceph-users] Re: Cluster downtime due to unsynchronized clocks

2021-09-23 Thread Robert Sander
use chrony or ntpd. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 220009 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz

[ceph-users] Re: How you loadbalance your rgw endpoints?

2021-09-27 Thread Robert Sander
s with the number of clients (kubernetes nodes) Nice hack. But why not establish a DNS name that points to 127.0.0.1? Why the hassle with iptables? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /
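
For example, a hosts entry (the name is only an illustration) achieves the same without iptables:

  echo "127.0.0.1 rgw.local" >> /etc/hosts
  # clients then use http://rgw.local/ and always hit the local endpoint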

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Robert Sander
reasons. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz: Berlin

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
-store-failures Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz: Berlin

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
/latest/man/8/monmaptool/ https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap This way the remaining MON will be the only one in the map and will have quorum and the cluster will work again. Regards -- Robert Sander Heinlein Consulting GmbH

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
object is stored on the OSD data partition and without it nobody knows where each object is. The data is lost. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin

[ceph-users] Re: ceph storage pool error

2023-11-08 Thread Robert Sander
pool? Can you help me? Clusters 1 and 2 are working. I want to view my data from them and then transfer them to another place. How can I do this? I have never used Ceph before. Please send the output of: ceph -s ceph health detail ceph osd df tree Regards -- Robert Sander Heinlein Consulting GmbH

[ceph-users] Re: Where is a simple getting started guide for a very basic cluster?

2023-11-26 Thread Robert Sander
NTP) - LVM2 for provisioning storage devices Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz

[ceph-users] Re: Where is a simple getting started guide for a very basic cluster?

2023-11-28 Thread Robert Sander
o the list. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz: B

[ceph-users] Re: ceph osd dump_historic_ops

2023-12-01 Thread Robert Sander
$FSID is the UUID of the Ceph cluster, $OSDID is the OSD id. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Pee
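
One way to run it on a cephadm host, with $FSID and $OSDID as described above (a sketch):

  ceph daemon /var/run/ceph/$FSID/ceph-osd.$OSDID.asok dump_historic_ops
  # or from inside the daemon's container:
  cephadm enter --name osd.$OSDID
  ceph daemon osd.$OSDID dump_historic_ops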

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Robert Sander
r that you risk losing data. Erasure coding is possible with a cluster size of 10 nodes or more. With smaller clusters you have to go with replicated pools. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-4

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Robert Sander
On 12/5/23 10:06, duluxoz wrote: I'm confused - doesn't k4 m2 mean that you can lose any 2 out of the 6 osds? Yes, but OSDs are not a good failure zone. The host is the smallest failure zone that is practicable and safe against data loss. Regards -- Robert Sander Heinlein Consu

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
... CentOS thing... what distro appears to be the most straightforward to use with Ceph? I was going to try and deploy it on Rocky 9. Any distribution with a recent systemd, podman, LVM2 and time synchronization is viable. I prefer Debian, others prefer RPM-based distributions. Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
. Everything needed for the Ceph containers is provided by podman. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 220009 B / Amtsgericht Berlin-Charlottenburg

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
Hi, On 21.12.23 15:13, Nico Schottelius wrote: I would strongly recommend k8s+rook for new clusters, also allows running Alpine Linux as the host OS. Why would I want to learn Kubernetes before I can deploy a new Ceph cluster when I have no need for K8s at all? Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
On 21.12.23 22:27, Anthony D'Atri wrote: It's been claimed to me that almost nobody uses podman in production, but I have no empirical data. I even converted clusters from Docker to podman while they stayed online thanks to "ceph orch redeploy". Regards -- Ro

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
Hi, On 22.12.23 11:41, Albert Shih wrote: for n in 1-100 Put off line osd on server n Uninstall docker on server n Install podman on server n redeploy on server n end Yep, that's basically the procedure. But first try it on a test cluster. Regards -- Robert Sander Hei
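
A rough sketch of one iteration, assuming the orchestrator's maintenance mode is used (host and daemon names are placeholders):

  ceph orch host maintenance enter server-n
  # remove docker and install podman with the distro's package manager
  ceph orch host maintenance exit server-n
  ceph orch ps server-n                     # list the daemons on that host
  ceph orch daemon redeploy <daemon-name>   # regenerate their units for podman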

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
-- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Zwangsangaben lt. §35a GmbHG: HRB 220009 B / Amtsgericht Berlin-Charlottenburg, Geschäftsführer: Peer Heinlein -- Sitz: Berlin

[ceph-users] Re: Ceph Docs: active releases outdated

2024-01-03 Thread Robert Sander
Hi Eugen, the release info is current only in the latest branch of the documentation: https://docs.ceph.com/en/latest/releases/ Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Robert Sander
. It is used to determine the public network. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Robert Sander
with the cluster network. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Heinlein - Sitz: Berlin

[ceph-users] Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
. Both files are placed into the /var/lib/ceph/<fsid>/config directory. Has something changed? ¹: https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030
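
For reference, the label in question is applied with:

  ceph orch host label add cephtest23 _admin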

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
] Updating cephtest23:/etc/ceph/ceph.client.admin.keyring 2024-01-18T11:47:08.212303+0100 mgr.cephtest32.ybltym [INF] Updating cephtest23:/var/lib/ceph/ba37db20-2b13-11eb-b8a9-871ba11409f6/config/ceph.client.admin.keyring Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Ber

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
was at "*", so all hosts. I have set that to "label:_admin". It still does not put ceph.conf into /etc/ceph when adding the label _admin. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-19 Thread Robert Sander
not update /etc/ceph/ceph.conf. Only when I run "ceph mgr fail" again does the new MGR update /etc/ceph/ceph.conf on the hosts labeled with _admin. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43

[ceph-users] Re: How many pool for cephfs

2024-01-24 Thread Robert Sander
question, should I have a designated pool for S3 storage or can/should I use the same cephfs_data_replicated/erasure pool ? No, S3 needs its own pools. It cannot re-use CephFS pools. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de

[ceph-users] Re: How many pool for cephfs

2024-01-24 Thread Robert Sander
you intend to use the SSDs for the OSDs' RocksDB? Where do you plan to store the metadata pools for CephFS? They should be stored on fast media. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax:
