Some background from the user group meetup:
https://youtu.be/Vcxk0lFa2S8?t=3821
You can check your current OpenSSL version with:
> openssl version
List everything the package manager has to offer:
> dnf --showduplicates list openssl
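If the list shows a newer build, you can install it explicitly; a rough sketch (the exact package version string depends on your repos, 3.2.2 is just the version mentioned below):
> dnf install openssl-3.2.2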
"works on my machine" cephadm on Rocky 9.6 OpenSSL 3.2.2 4.
On Wed, 30 Ju
Syntax errors in the config?
Try starting it manually with -x to be sure.
What does the journal have to say?
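For example, something along these lines (assuming a standalone nfs-ganesha systemd unit; under cephadm the unit name will be different):
> journalctl -u nfs-ganesha -e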
https://github.com/nfs-ganesha/nfs-ganesha/issues/730
Release notes:
https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_5
On Thu, 28 Nov 2024 at 12:35, Marc wrote:
> >
> > In
Is it possible that you're just missing frontend_port?
https://docs.ceph.com/en/reef/cephadm/services/nfs/#high-availability-nfs
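For reference, a minimal ingress spec roughly following that page could look like the sketch below; the service names, ports and virtual IP are placeholders only:
service_type: ingress
service_id: nfs.mynfs            # must match your NFS service id
placement:
  count: 2
spec:
  backend_service: nfs.mynfs     # the NFS service this ingress sits in front of
  frontend_port: 2049            # the port clients actually mount
  monitor_port: 9049             # haproxy status port (placeholder)
  virtual_ip: 192.0.2.10/24      # placeholder VIP
Applied with something like:
> ceph orch apply -i ingress.yaml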
On Sat, 9 Nov 2024 at 21:43, Tim Holloway wrote:
> H. I have somewhat similar issues, and I'm not entirely happy with
> what I've got, but let me fill you in.
>
> Ceph suppo
Hey Dulux-Oz,
Care to share how you got it working in the end?
The vg/lv syntax, or :/dev/vg_osd/lvm_osd ?
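For context, these are the two variants I mean (hypothetical host name, just asking which one you ended up using):
> ceph orch daemon add osd ceph01:vg_osd/lvm_osd
> ceph orch daemon add osd ceph01:/dev/vg_osd/lvm_osd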
On Mon, 27 May 2024 at 09:49, duluxoz wrote:
> @Eugen, @Cedric
>
> DOH!
>
> Sorry lads, my bad! I had a typo in my lv name - that was the cause of
> my issues.
>
> My apologies for being so stupid - and *tha
I'm not using Ceph Ganesha but GPFS Ganesha, so YMMV
> ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt
> --fsname vol1
> --> nfs mount
> mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
>
> - Although I can mount the export, I can't write to it
You created
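One hedged first step is to dump the export definition (using the cluster id and pseudo path from your commands) and check its access_type; on older releases the subcommand is export get instead of export info:
> ceph nfs export info nfs-cephfs /mnt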
With cephadm you're able to set these values cluster-wide.
See the host-management section of the docs.
https://docs.ceph.com/en/reef/cephadm/host-management/#os-tuning-profiles
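A rough sketch of such a tuning profile spec (hostnames and sysctl values below are placeholders, not recommendations):
profile_name: example-tuning-profile
placement:
  hosts:
    - host01
    - host02
settings:
  fs.file-max: 1000000
  vm.swappiness: '10'
Then apply it with:
> ceph orch tuned-profile apply -i example-tuning-profile.yaml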
On Fri, 19 Apr 2024 at 12:40, Konstantin Shalygin wrote:
> Hi,
>
> > On 19 Apr 2024, at 10:39, Pardhiv Karri wrote:
>
Hey Cephers,
Hope you're all doing well! I'm in a bit of a pickle and could really use
some of your help.
Here's the scoop:
I have a setup with around 10 HDDs and 2 NVMes (plus uninteresting boot
disks).
My initial goal was to configure part of the HDDs (6 out of 7 TB) into an
md0 or similar device
Reef. If not,
> you'll need to use v17.2.6 until the fix comes out for quincy in v17.2.8.
>
> Travis
>
> On Thu, Nov 23, 2023 at 4:06 PM P Wagner-Beccard <
> wagner-kerschbau...@schaffroth.eu> wrote:
>
>> Hi Mailing-Listers,
>>
>> I am reaching
Hi Mailing-Listers,
I am reaching out for assistance regarding a deployment issue I am facing
with Ceph on a 4-node RKE2 cluster. We are attempting to deploy Ceph via
the Rook Helm chart, but we are encountering an issue that appears to be
related to a known bug (https://tracker.ceph.com/issue