Hello everyone,
I would like to set up my CephFS with different directories each exclusively
accessible by a corresponding client. By this I mean, e.g., /dir_a only
accessible by client.a and /dir_b only by client.b.
From the documentation I gathered, having client caps like
client.a
    key: [...]
[...] have an idea what might be the issue here?
Best regards,
Jonas
PS: A happy new year to everyone!
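For reference, a minimal sketch of the kind of path-restricted caps meant above, assuming a filesystem named "cephfs" (the fs name is a placeholder; clients and paths are the ones from the question):

ceph fs authorize cephfs client.a /dir_a rw
ceph fs authorize cephfs client.b /dir_b rw
# "ceph auth get client.a" should then show caps roughly like:
#   caps mds = "allow rw fsname=cephfs path=/dir_a"
#   caps mon = "allow r fsname=cephfs"
#   caps osd = "allow rw tag cephfs data=cephfs"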
On 23.12.22 10:05, Kai Stian Olstad wrote:
On 22.12.2022 15:47, Jonas Schwab wrote:
Now the question: Since I established this setup more or less through
trial and error, I was wondering if there
Dear everyone,
I have several questions regarding CephFS in connection with namespaces,
subvolumes, and snapshot mirroring:
*1. How to display/create namespaces used for isolating subvolumes?*
I have created multiple subvolumes with the option
--namespace-isolated, so I was expecting to see the
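In case it helps: a sketch of how the namespace of an isolated subvolume can be inspected, assuming a volume named "cephfs", a subvolume named "subvol_a", and the default data pool name (all placeholders):

ceph fs subvolume create cephfs subvol_a --namespace-isolated
ceph fs subvolume info cephfs subvol_a
# the JSON output contains a "pool_namespace" field, typically "fsvolumens_<subvolume>"
# objects belonging to that subvolume can then be listed per namespace:
rados -p cephfs.cephfs.data --namespace fsvolumens_subvol_a ls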
No, didn't issue any commands to the OSDs.
On 2025-04-10 17:28, Eugen Block wrote:
Did you stop the OSDs?
Quoting Jonas Schwab:
Thank you very much! I now started the first step, namely "Collect the
map from each OSD host". As I have a cephadm deployment, I will have
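The collection step of the documented mon-store recovery looks roughly like this (paths assume the OSD data directories are visible at the usual location; with cephadm this has to run wherever the OSD data is actually mounted):

ms=/tmp/mon-store
mkdir -p $ms
# collect the cluster map from every (stopped) OSD on this host
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path "$osd" --no-mon-config \
        --op update-mon-db --mon-store-path "$ms"
done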
Thank you for the help! Does that mean stopping the container and
mounting the LV?
On 2025-04-10 17:38, Eugen Block wrote:
You have to stop the OSDs in order to mount them with the objectstore
tool.
Quoting Jonas Schwab:
No, didn't issue any commands to the OSDs.
On 2025-04-10
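With cephadm, "stopping the OSDs" means stopping their systemd units; the objectstore tool can then be run in a shell that has the OSD's data directory mounted. A sketch, with fsid and OSD id as placeholders:

systemctl stop ceph-<fsid>@osd.0.service
# opens a container with osd.0's data dir mounted at /var/lib/ceph/osd/ceph-0
cephadm shell --name osd.0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --no-mon-config \
    --op update-mon-db --mon-store-path /tmp/mon-store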
Hello everyone,
I believe I accidentally nuked all monitors of my cluster (please don't
ask how). Is there a way to recover from this disaster? I have a cephadm
setup.
I am very grateful for all help!
Best regards,
Jonas Schwab
[...] don't really know where they come from, tbh.
Can you confirm that those are actually OSD processes filling up the RAM?
Quoting Jonas Schwab:
Hello everyone,
Recently I have been having a lot of problems with OSDs using much more
memory than they are supposed to (> 10 GB), leading to the node running
Best regards,
Jonas
--
Jonas Schwab
Research Data Management, Cluster of Excellence ct.qmat
https://data.ctqmat.de | datamanagement.ct.q...@listserv.dfn.de
Email: jonas.sch...@uni-wuerzburg.de
Tel: +49 931 31-84460
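To narrow down whether it is really the OSD processes holding the memory, something along these lines can help (osd.0 and the 4 GiB target are placeholders):

# per-daemon memory usage as seen by the orchestrator
ceph orch ps --daemon-type osd
# breakdown of an OSD's internal memory pools
ceph tell osd.0 dump_mempools
# optionally lower the OSD memory autotuning target, e.g. to 4 GiB
ceph config set osd osd_memory_target 4294967296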
a quorum before; must have been removed
2025-04-10T19:32:32.174+0200 7fec628c5e00 -1 mon.mon.ceph2-01@-1(???) e29 commit suicide!
2025-04-10T19:32:32.174+0200 7fec628c5e00 -1 failed to initialize
On 2025-04-10 16:01, Jonas Schwab wrote:
Hello everyone,
I believe I accidentally nuked all monitor
[...]temporarily unavailable'". Does anybody know how to solve this?
Best regards,
Jonas
On 2025-04-10 16:04, Robert Sander wrote:
Hi Jonas,
On 4/10/25 at 16:01, Jonas Schwab wrote:
I believe I accidentally nuked all monitors of my cluster (please don't
ask how). Is there a way to recover f
--name mon.
ceph-monstore-tool /var/lib/ceph/mon/ get monmap -- --out monmap
monmaptool --print monmap
You can paste the output here, if you want.
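Spelled out for a cephadm deployment, with the mon name as a placeholder, that would be roughly:

cephadm shell --name mon.ceph2-01
# inside the shell, the mon's data dir is mounted under /var/lib/ceph/mon/
ceph-monstore-tool /var/lib/ceph/mon/ceph-ceph2-01 get monmap -- --out /tmp/monmap
monmaptool --print /tmp/monmap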
Quoting Jonas Schwab:
I realized I have access to a data directory of a monitor I removed
just before the oopsie happened. Can I launch a ceph-mon
Just to be safe, back up all the store.db directories.
Then modify the monmap so that it contains only the monitor you want to
revive, by removing the other ones. Back up your monmap files as well.
Then inject the modified monmap into the daemon and try starting it.
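A sketch of those steps with monmaptool and ceph-mon (monitor names are placeholders):

cp monmap monmap.bak
# drop the monitors that no longer exist, keep only the one to revive
monmaptool --rm ceph2-02 monmap
monmaptool --rm ceph2-03 monmap
monmaptool --print monmap
# with the daemon stopped, inject the edited map into its store
ceph-mon -i ceph2-01 --inject-monmap monmap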
Quoting Jonas Schwab:
Again, thank you very muc
h-ceph2-01/store.db/LOCK: Permission denied`
Even though I double-checked that the permissions and ownership on the
replacement store.db are properly set.
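One thing that is easy to miss with containerized mons: the ownership has to match the numeric UID/GID used inside the container (167:167 for the ceph user in the upstream images), not a ceph user on the host. A sketch, with fsid and mon directory as placeholders:

ls -ln /var/lib/ceph/<fsid>/mon.ceph2-01/store.db
# if the numeric owner is not 167:167, fix it recursively
chown -R 167:167 /var/lib/ceph/<fsid>/mon.ceph2-01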
On 2025-04-10 22:45, Jonas Schwab wrote:
I edited the monmap to include only rgw2-06 and then followed
https://docs.ceph.com/en/squid/rados/operati
Yes, the mgrs are running as intended. It just seems that the mons and
OSDs don't recognize each other, because the monitor map is outdated.
On 2025-04-11 07:07, Eugen Block wrote:
Is at least one mgr running? PG states are reported by the mgr daemon.
Quoting Jonas Schwab:
I solved the pr
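For completeness, checks along these lines show whether a mgr is up and which monmap the cluster currently reports:

ceph -s
ceph orch ps --daemon-type mgr
ceph mon dump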