return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/status/module.py", line 337, in handle_osd_status
assert metadata
AssertionError
I suppose this is some kind of bug that occurs when one host is down?
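For what it's worth, you can check whether the cluster still has the metadata the module asserts on for the OSDs of the down host (the osd id here is just an example):
ceph osd metadata 12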
Thanks!
Marcus
Thanks for your answers!
I read somewhere that a VPN would have a real impact on performance,
so it was not recommended, and that is how I found the v2 protocol.
But a VPN feels like the solution, and the lower speed is something you have to accept.
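In case it helps others: the v2 secure mode I found is apparently switched on with settings along these lines (I have not benchmarked or verified this myself):
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure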
Thanks again!
On Tue, May 21 2024 at 17:07:48 +1000, Malcolm Haak wrote:
e specific subnet will be able to connect and mount cephfs.
From my understanding of the documentation, this would be the
way to set this up with Ceph exposed to the internet.
Is there something that we are missing, or something that would
make the setup more secure?
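For reference, the kind of client cap we had in mind looks roughly like this (client name, subnet and pool are placeholders, and the exact syntax may differ between releases):
ceph auth caps client.remote mon 'allow r network 203.0.113.0/24' osd 'allow rw pool=cephfs_data network 203.0.113.0/24' mds 'allow rw'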
Any help is appreciated.
Many thanks in advance!
Best regards
Marcus
some other type of snapshot?
Many thanks!!
Marcus
On Sat, Mar 16 2024 at 23:51:09 +0530, Neeraj Pratap Singh wrote:
As per the error message you mentioned (Permission denied): it seems
that the 'subvolume' flag has been set on the root directory, and we
cannot create snapshots directly in it.
Thanks for your help!!
On Sat, Mar 16 2024 at 00:53:22 +0530, Neeraj Pratap Singh wrote:
Can you please run getfattr on the root directory and tell us what the output is?
Run this command: getfattr -n ceph.dir.subvolume /mnt
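If it shows ceph.dir.subvolume="1" there, the flag can be cleared with setfattr (please be careful, this changes how the directory is treated):
setfattr -n ceph.dir.subvolume -v 0 /mnt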
On Thu, Mar 14, 2024 at 4:38 PM Marcus wrote:
fs subvolume snapshot ...
You get a new directory (volumes) in the root:
/mnt/volumes/_legacy/cd76f96956469e7be39d750cc7d9.meta
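For reference, the command I used was of this form (volume and subvolume names are placeholders):
ceph fs subvolume snapshot create <vol_name> <subvol_name> <snap_name>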
I do not know if I am missing something, perhaps some configuration.
Thanks for your help!!
Best regards
Marcus
Most of the time, if a node fails, you can replace a DIMM etc. and
bring it back.
Many thanks!!
Regards
Marcus
On Fri, Dec 22 2023 at 19:12:19 -0500, Anthony D'Atri wrote:
You can do that for a PoC, but that's a bad idea for any
production workload.
All objects will be there after replication.
No lost data?
Many thanks!!
Regards
Marcus
On Fri, Dec 22 2023 at 19:12:19 -0500, Anthony D'Atri wrote:
You can do that for a PoC, but that's a bad idea for any
production workload. You'd want at least three nodes with OSDs.
I then tried to create a snapshot from the client with:
mkdir /mnt-ceph/.snap/my_snapshot
I get the same error in all directories:
Permission denied.
I have not found any solution to this;
am I missing something here as well?
Any config missing?
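One candidate I found in the docs but have not verified yet (the fs name is a placeholder): snapshots have to be allowed on the filesystem itself, e.g.:
ceph fs set cephfs allow_new_snaps true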
Many thanks for your support!!
Hi Ilya,
thanks. This looks like a general setting for all RBD images, not for a
specific one, right?
Is a more specific definition possible, so that you can have multiple RBD
images in different Ceph namespaces?
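For example, I was imagining something per image along these lines (pool, namespace, image and option are made up for illustration):
rbd config image set mypool/ns1/image1 rbd_cache false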
Regards
Marcus
> On 21.11.2022 at 22:22, Ilya Dryomov wrote:
>
> On
?
Thanks & regards
Marcus
version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
What is the error here?
Regards
Marcus
Thank you! We are running Pacific; that was my issue here.
Can someone share an example of a full API request and response with curl? I'm
still having issues, now getting 401 or 403 responses (even though I am providing
Auth-User and Auth-Key).
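For what it's worth, the furthest I got was letting curl sign the request itself (needs curl >= 7.75; host, uid and keys are placeholders, and as far as I understand the admin API expects S3-style signatures rather than the Swift Auth-User/Auth-Key headers):
curl --user "$ACCESS_KEY:$SECRET_KEY" --aws-sigv4 "aws:amz:us-east-1:s3" "https://rgw.example.com/admin/user?uid=testuser&format=json"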
Regards
Marcus
> On 15.07.2022 at 15:23, Casey Bod… wrote:
"type": "usage",
"perm": "*"
},
{
"type": "user-policy",
"perm": "*"
},
{
"type": "users",
"perm&
$ ceph daemon mon.ceph4 config get osd_scrub_auto_repair
{
"osd_scrub_auto_repair": "true"
}
What does this tell me now? The setting can be changed to false, of course, but since
list-inconsistent-obj shows something, I would like to find the reason for that
first.
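For reference, this is how I am listing the inconsistencies (the pg id is just an example):
rados list-inconsistent-obj 2.1f --format=json-pretty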
Regards
Marcus
time with the +repair
flag).
Does anyone know how to debug this in more detail?
Regards,
Marcus
` for all 3 interfaces (admin, public,
internal). I don't know why, but that also solved the problem for me.
Greetings,
Marcus
-----
Marcus Bahn
Fraunhofer-Institut für Algorithmen
und Wissenschaftliches Rechnen - SCAI
Schloss Birlinghoven
53757
ig set mgr ...` right?
cephadm version
Using recent ceph image quay.io/ceph/ceph@sha256:xxx
ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
OpenStack Version: Wallaby
I hope everything that's needed is included.
Thanks and best regards,
Marcus
--
snapshots, compression, etc?
>
> You might want to consider recordsize / blocksize for the dataset where it
> would live:
>
> https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/
>
>> On Mar 2, 2022, at 10:59 AM, Marcus Mül
Hi all,
are there any recommendations for a suitable filesystem for Ceph monitors?
In the past we always deployed them on ext4, but would ZFS be possible as well?
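If we went the ZFS route, I would imagine creating the dataset roughly like this (pool and dataset names are made up, and the recordsize is only a guess, see the thread linked above):
zfs create -o compression=lz4 -o recordsize=64K -o atime=off tank/ceph-mon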
Regards,
Marcus