would not use Ceph packages shipped from a distribution but always the
ones from download.ceph.com or even better the container images that
come with the orchestrator.
What version do your other Ceph nodes run on?
Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Ho
pgrade the Ceph
packages.
download.ceph.com has packages for Ubuntu 22.04 and nothing for 24.04.
Therefore I would assume Ubuntu 24.04 is not a supported platform for
Ceph (unless you use the cephadm orchestrator and container).
BTW: Please keep the discussion on the mailing list.
Regards
--
Rob
Hi,
On 6/26/24 11:49, Boris wrote:
Is there a way to only update 1 daemon at a time?
You can use the feature "staggered upgrade":
https://docs.ceph.com/en/reef/cephadm/upgrade/#staggered-upgrade
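To upgrade only one daemon type at a time it would look roughly like
this (the image tag is just an example, use the release you are
upgrading to):

  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 --daemon-types mgr --limit 1
  ceph orch upgrade status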
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Ber
create any new OSDs.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Hein
/thread/6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC/#6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC
Shouldn't db_slots make that easier?
Is this a bug in the orchestrator?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-1
Hi,
On 7/11/24 09:01, Eugen Block wrote:
apparently, db_slots is still not implemented. I just tried it on a test
cluster with 18.2.2:
I am thinking about a PR to correct the documentation.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https
uggest to use Ubuntu 22.04 LTS as the base operating system.
You can use cephadm on top of that without issues.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-
sed on CentOS 8.
When you execute "cephadm shell" it starts a container with that image
for you.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Char
On 7/23/24 08:24, Iztok Gregori wrote:
Am I missing something obvious, or is there no way with the Ceph
orchestrator to specify an id during OSD creation?
Why would you want to do that?
A new OSD always gets the lowest available ID.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Hi Marianne,
is there anything in the kernel logs of the VMs and the hosts where the
VMs are running with regard to the VM storage?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
On 05.08.24 18:38, Nicola Mori wrote:
docker.io/snack14/ceph-wizard
This is not an official container image.
The images from the Ceph project are on quay.io/ceph/ceph.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel
.
IMHO you will have to redeploy the OSD to use LVM on the disk. It does
not need to be the whole disk if there is other data on it. It should be
sufficient to make /dev/sdb1 a PV of a new VG for the LV of the OSD.
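A rough sketch (device, VG and LV names are only examples, double-check
them against your setup before touching anything):

  pvcreate /dev/sdb1
  vgcreate ceph-osd-vg /dev/sdb1
  lvcreate -n osd-data -l 100%FREE ceph-osd-vg

and then create the OSD on that LV, e.g. with
"ceph-volume lvm create --data ceph-osd-vg/osd-data" or via the
orchestrator.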
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
om/en/reef/rados/configuration/ceph-conf/#monitor-configuration-database
Kindest Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsf
rsonate any User ID locally.
The recommended way is to run a Samba cluster using CephFS as backend.
Your users would then authenticate against Samba which would need to
speak to your LDAP/Kerberos.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinl
to map that onto user name and
group name.
What you use for consistent mappings between your CephFS clients is up
to you. It could be NIS, libnss-ldap, winbind (Active Directory) or any
other method that keeps the passwd and group files in sync.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedt
ed on one node, i.e. the distribution must support
Docker or podman.
cephadm sets up a containerized Ceph cluster with containers based on
CentOS.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
> partition?
If you do not have faster devices for DB/WAL there is no need to create
them. It does not make the OSD faster.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsge
encing most of the
> ops and repair tasks for the first time here.
My condolences. Get the data from that cluster and put the cluster down.
In the current setup it will never work.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.d
t 7 to 10 nodes and a
corresponding number of OSDs.
This cluster is too small to do any amount of "real" work.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben l
rge
number of nodes (more than 10) and a proportional number of OSDs.
Mixed HDDs and SSDs in one pool is not good practice as a pool should
have OSDs of the same speed.
Kindest Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030
Am 11.11.20 um 13:05 schrieb Hans van den Bogert:
> And also the erasure coded profile, so an example on my cluster would be:
>
> k=2
> m=1
With this profile you can only lose one OSD at a time, which is really
not that redundant.
Regards
--
Robert Sander
Heinlein Support GmbH
S
ot=default
k=2
m=2
You need k+m=4 independent hosts for the EC parts, but your CRUSH map
only shows two hosts. This is why all your PGs are undersized and degraded.
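You can double-check the profile and the topology like this (the
profile name is just a placeholder):

  ceph osd erasure-code-profile get <your-profile>
  ceph osd tree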
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-4
com.tw/
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz:
ls also
removes the objects and you can start new.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschä
0,88676 0,00338191
true rand 30,1007 82474194304 4194304 1095,92
273 25,5066 313 213 0,05719 0,99140 0,00325295
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-4
Hi Marc and Dan,
thanks for your quick responses assuring me that we did nothing totally
wrong.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B
t;:"rbd","id":1,"stats":{"stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data",
nked together using lvm or somesuch? What are the tradeoffs?
IMHO there are no tradeoffs, there could even be benefits creating a
volume group with multiple physical volumes on RBD as the requests can
be better parallelized (i.e. virtio-single SCSI controller for qemu).
Regards
--
Robert San
(error connecting to the cluster)
This issue is mostly caused by not having a readable ceph.conf and
ceph.client.admin.keyring file in /etc/ceph for the user that starts the
ceph command.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-su
Hi,
Am 04.02.21 um 12:10 schrieb Frank Schilder:
> Going to 2+2 EC will not really help
On such a small cluster you cannot even use EC because there are not
enough independent hosts. As a rule of thumb there should be k+m+1 hosts
in a cluster AFAIK.
Regards
--
Robert Sander
Heinlein Supp
in the cluster.
You need ports 3300 and 6789 for the MONs on their IPs and any dynamic
port starting at 6800 used by the OSDs. The MDS also uses a port above 6800.
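With firewalld that would be roughly the following (Ceph daemons pick
their ports from the 6800-7300 range by default):

  firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp
  firewall-cmd --permanent --add-port=6800-7300/tcp
  firewall-cmd --reload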
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 4050
Am 10.02.21 um 15:54 schrieb Frank Schilder:
> Which ports are the clients using - if any?
All clients only have outgoing connections and do not listen to any
ports themselves.
The Ceph cluster will not initiate a connection to the client.
Kindest Regards
--
Robert Sander
Heinlein Support G
0G
bonded interfaces in the cluster network? I would assume that you would
want to go at least 2x 25G here.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HR
Am 10.03.21 um 20:44 schrieb Ignazio Cassano:
> 1 small ssd is for operations system and 1 is for mon.
Make that a RAID1 set of SSDs and be happier. ;)
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Am 12.03.21 um 18:30 schrieb huxia...@horebdata.cn:
> Any other aspects on the limits of bigger capacity hard disk drives?
Recovery will take longer, increasing the risk of another failure
during that time.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
h
ready rebooted the box so I won't be able to
> test immediately.)
My experience with LVM is that only a reboot helps in this situation.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
check docker.io/ceph/ceph:v15" but it
tells me that the containers do not need to be upgraded.
How will this security fix of OpenSSL be deployed in a timely manner to
users of the Ceph container images?
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://ww
B
volumes and one OSD on each SSD.
HDD-only OSDs are quite slow. If you do not have enough SSDs for them,
go with an SSD-only CephFS metadata pool.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
pected condition which
prevented it from fulfilling the request.", "request_id":
"e89b8519-352f-4e44-a364-6e6faf9dc533"}
']
I have no r
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to start datalog_rados service ((5) Input/output error
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to init services (ret=(5) Input/output error)
I see the same issues on a
Hi,
I forgot to mention that CephFS is enabled and working.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer
Hi,
The DB device needs to be empty for an automatic OSD service. The service will
then create N db slots using logical volumes and not partitions.
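Such a spec could look like this (service_id and the rotational
filters are only an example, adapt them to your drives):

service_type: osd
service_id: osd-with-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

and it is applied with "ceph orch apply -i osd-spec.yaml".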
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030
So when you have a Ceph cluster with Rados-Gateways you should not
upgrade to Pacific currently.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 9381
Hi,
this is one of the use cases mentioned in Tim Serong's talk:
https://youtu.be/pPZsN_urpqw
Containers are great for deploying a fixed state of a software project (a
release), but not so much for the development of plugins etc.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedte
Hi,
# docker pull ceph/ceph:v16.2.1
Error response from daemon: toomanyrequests: You have reached your pull
rate limit. You may increase the limit by authenticating and upgrading:
https://www.docker.com/increase-rate-limit
How do I update a Ceph cluster in this situation?
Regards
--
Robert
Hi,
Am 21.04.21 um 10:14 schrieb Robert Sander:
> How do I update a Ceph cluster in this situation?
I learned that I need to create an account on the website hub.docker.com
to be able to download Ceph container images in the future.
With the credentials I need to run "docker login"
ied (error connecting to the cluster)
What should I do?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Char
Am 22.04.21 um 09:07 schrieb Robert Sander:
> What should I do?
I should also upgrade the CLI client which still was at 15.2.8 (Ubuntu
20.04) because a "ceph orch upgrade" run only updates the software
inside the containers.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwed
Hi,
to whomever it may concern:
The mirror server eu.ceph.com does not carry the Release files for
15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in
https://eu.ceph.com/debian-16.2.1/dists/*/
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
h map. It looks like the
OSD is the failure zone, and not the host. If the host were the failure
zone, the failure of any number of OSDs in a single host would not bring PGs down.
For the default redundancy rule and pool size 3 you need three separate
hosts.
Regards
--
Robert Sander
Heinlein Consulting GmbH
the mds suffer when only 4% of the osd goes
> down (in the same node). I need to modify the crush map?
With an unmodified crush map and the default placement rule this should
not happen.
Can you please show the output of "ceph osd crush rule dump"?
Regards
--
Robert Sander
Hein
ill lead to data loss or at least temporary
unavailability.
The situation is now that all copies (resp. EC chunks) for a PG are
stored on OSDs of the same host. These PGs will be unavailable if the
host is down.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10
Am 06.05.21 um 17:18 schrieb Sage Weil:
> I hit the same issue. This was a bug in 16.2.0 that wasn't completely
> fixed, but I think we have it this time. Kicking off a 16.2.3 build
> now to resolve the problem.
Great. I also hit that today. Thanks for fixing it quickly.
Rega
I had success with stopping the "looping" mgr container via "systemctl
stop" on the node. Cephadm then switches to another MGR to continue the
upgrade. After that I just started the stopped mgr container and the
upgrade continued.
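The unit name follows the pattern ceph-<fsid>@mgr.<hostname>.<id>.service,
so roughly (placeholders, look up the exact name first):

  systemctl list-units 'ceph-*@mgr.*'
  systemctl stop ceph-<fsid>@mgr.<hostname>.<id>.service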
Regards
--
Robert Sander
Heinlein Consulting GmbH
S
On 15.06.21 15:16, nORKy wrote:
> Why is there no failover ??
Because one MON out of two is not a majority and cannot form a quorum.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051
could theoretically RAID0 multiple disks and then put an OSD on top
of that but this would create very large OSDs which are not good for
recovering data. Recovering such a "beast" just would take too long.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http
ssing between these two steps.
The first creates /etc/apt/sources.list.d/ceph.list and the second
installs packages, but the repo list was never updated.
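I.e. something like this is needed (the release name is only an
example):

  cephadm add-repo --release pacific
  apt update
  cephadm install ceph-common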
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 0
lding and hosting for open source projects
is solved with the openSUSE build service:
https://build.opensuse.org/
But I think what Sage meant was e.g. different versions of GCC on the
distributions and not being able to use all the latest features needed
for compiling Ceph.
Regards
--
Robe
30 16:07:09 al111 bash[171790]: File
"/usr/share/ceph/mgr/devicehealth/module.py", line 33, in get_ata_wear_level
Jun 30 16:07:09 al111 bash[171790]: if page.get("number") != 7:
Jun 30 16:07:09 al111 bash[171790]: AttributeError: 'NoneType' object has no
attribute '
8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+
7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error
How do I correct the issue?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405
have 3 nodes with 5x 12TB each (60TB) and 2 nodes with 4x 18TB each
(72TB), the maximum usable capacity will not be the sum of all
disks. Remember that Ceph tries to evenly distribute the data.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein
daemons (outside of osds I believe) from offline hosts.
Sorry for maybe being rude, but how on earth does one come up with the
idea to automatically remove components from a cluster, without any
operator intervention, while just one node is currently rebooting?
Regards
--
Robert Sander
Heinlein
h cluster?
ceph osd set noout
and after the cluster has been booted again and every OSD joined:
ceph osd unset noout
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charl
heavy.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: B
of block devices with the same size
distribution in each node you will get an even data distribution.
If you have a node with 4 3TB drives and one with 4 6TB drives Ceph
cannot use the 6TB drives efficiently.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
w the data distribution among the OSDs.
Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD?
HDD only will have abysmal performance.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
ll be faster, to write it to just one ssd, instead of
writing it to the disk directly.
Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-4
Pools should have a uniform class of storage.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsf
this. The Linux kernel will happily answer ARP requests on any
interface for the IPs it has configured anywhere. That means you have a
constant ARP flapping in your network.
Make the three interfaces bonded and configure all three IPs on the
bonded interface.
Regards
--
Robert Sander
Heinlein
work as the same IP subnet cannot span multiple
broadcast domains.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin
g
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
__
Hi,
I had to run
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
and stop all MDS and NFS containers and start one after the other again
to clear this issue.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein
I just run
ceph orch upgrade start
Why does the orchestrator not run the necessary steps?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB
use chrony or ntpd.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz
s with the number of clients
(kubernetes nodes)
Nice hack. But why not establish a DNS name that points to 127.0.0.1?
Why the hassle with iptables?
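A single line on every node would do, e.g. (the name is of course made
up):

  echo "127.0.0.1 ceph-endpoint.internal" >> /etc/hosts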
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
reasons.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
-store-failures
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
/latest/man/8/monmaptool/
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap
This way the remaining MON will be the only one in the map and will have
quorum and the cluster will work again.
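Roughly, with the surviving MON stopped (the IDs are placeholders,
take a backup of the MON store first):

  ceph-mon -i <surviving-mon> --extract-monmap /tmp/monmap
  monmaptool /tmp/monmap --print
  monmaptool /tmp/monmap --rm <dead-mon-1>
  monmaptool /tmp/monmap --rm <dead-mon-2>
  ceph-mon -i <surviving-mon> --inject-monmap /tmp/monmap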
Regards
--
Robert Sander
Heinlein Consulting GmbH
object is stored on the
OSD data partition and without it nobody knows where each object is. The
data is lost.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin
pool? Can you help me? 1 and 2 clusters are working. I want to view my data
from them and then transfer them to another place. How can I do this? I have
never used Ceph before.
Please send the output of:
ceph -s
ceph health detail
ceph osd df tree
Regards
--
Robert Sander
Heinlein Consulting GmbH
NTP)
- LVM2 for provisioning storage devices
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz
o the list.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: B
$FSID is the UUID of the Ceph cluster, $OSDID is the OSD id.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Pee
r that you risk to lose data.
Erasure coding is possible with a cluster size of 10 nodes or more.
With smaller clusters you have to go with replicated pools.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-4
On 12/5/23 10:06, duluxoz wrote:
I'm confused - doesn't k4 m2 mean that you can lose any 2 out of the 6
osds?
Yes, but OSDs are not a good failure zone.
The host is the smallest failure zone that is practicable and safe
against data loss.
Regards
--
Robert Sander
Heinlein Consu
... CentOS thing... what distro appears to be the most
straightforward to use with Ceph? I was going to try and deploy it on Rocky 9.
Any distribution with a recent systemd, podman, LVM2 and time
synchronization is viable. I prefer Debian, others prefer RPM-based distributions.
Regards
--
Robert Sander
.
Everything needed for the Ceph containers is provided by podman.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg
Hi,
On 21.12.23 15:13, Nico Schottelius wrote:
I would strongly recommend k8s+rook for new clusters, also allows
running Alpine Linux as the host OS.
Why would I want to learn Kubernetes before I can deploy a new Ceph
cluster when I have no need for K8s at all?
Regards
--
Robert Sander
On 21.12.23 22:27, Anthony D'Atri wrote:
It's been claimed to me that almost nobody uses podman in production, but I
have no empirical data.
I even converted clusters from Docker to podman while they stayed online
thanks to "ceph orch redeploy".
Regards
--
Ro
Hi,
On 22.12.23 11:41, Albert Shih wrote:
for n in 1-100
Put off line osd on server n
Uninstall docker on server n
Install podman on server n
redeploy on server n
end
Yep, that's basically the procedure.
But first try it on a test cluster.
Regards
--
Robert Sander
Hei
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
Hi Eugen,
the release info is current only in the latest branch of the
documentation: https://docs.ceph.com/en/latest/releases/
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
. It is used to determine the public
network.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz
with the cluster network.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
.
Both files are placed into the /var/lib/ceph//config directory.
Has something changed?
¹:
https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030
] Updating
cephtest23:/etc/ceph/ceph.client.admin.keyring
2024-01-18T11:47:08.212303+0100 mgr.cephtest32.ybltym [INF] Updating
cephtest23:/var/lib/ceph/ba37db20-2b13-11eb-b8a9-871ba11409f6/config/ceph.client.admin.keyring
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Ber
was at "*",
so all hosts. I have set that to "label:_admin".
It still does not put ceph.conf into /etc/ceph when adding the label _admin.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
not update
/etc/ceph/ceph.conf.
Only when I again do "ceph mgr fail" the new MGR will update
/etc/ceph/ceph.conf on the hosts labeled with _admin.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
question, should I have a
designated pool for S3 storage or can/should I use the same
cephfs_data_replicated/erasure pool ?
No, S3 needs its own pools. It cannot re-use CephFS pools.
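The RGW creates its own pools (.rgw.root and the default.rgw.* pools)
automatically on first start. You can check afterwards with:

  ceph osd pool ls | grep rgw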
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
you intend to use the SSDs for the OSDs' RocksDB?
Where do you plan to store the metadata pools for CephFS? They should be
stored on fast media.
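To pin the metadata pool to the SSDs you can use a device-class based
rule, roughly like this (rule and pool names are only examples):

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set cephfs_metadata crush_rule replicated-ssd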
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: