That package probably contains the vfs_ceph module for Samba. However,
further down, the same page says:
> The above share configuration uses the Linux kernel CephFS client, which is
> recommended for performance reasons.
> As an alternative, the Samba vfs_ceph module can also be used to
> communicate with the Ceph cluster.
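For reference, a minimal vfs_ceph share would look something like the
following (the share name and the "samba" ceph user are placeholders of
mine, not from the docs):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # vfs_ceph bypasses the local kernel, so disable kernel share modes
        kernel share modes = no
        read only = no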
We are using SL7 to export our CephFS via Samba to Windows. The
RHEL7/CentOS7/SL7 distros do not come with packages for the Samba
cephfs module. This is one of the reasons why we are mounting the file
system locally using the kernel cephfs module with the automounter and
reexporting it using vanilla Samba.
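Roughly, that setup looks like this (monitor names, mount point and the
ceph user below are placeholders, not our actual config):

    # mount CephFS with the kernel client (ours is driven by autofs)
    mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret

    # then export the mount point as an ordinary Samba share
    [cephfs]
        path = /mnt/cephfs
        read only = no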
Hi Cephers,
We have two Ceph clusters in our lab. We are experimenting with using a
single server as a client for both clusters. Can we use the same client
server to store the keyrings for the different clusters in the
ceph.conf file? Another query: can we use a single client with multiple
VMs on it for two clusters?
Dear Ceph users,
I would like to ask if it is possible to deploy Ceph Octopus on CentOS 7.
Looking forward to your reply,
Michel
Oh thanks Magnus for clearing this up. I thought that there was some other
fancy config.
I have now opened an issue, as this must be a bug in cephadm:
https://tracker.ceph.com/issues/51629
Hopefully someone has time to look into that.
Thank you in advance.
‐‐‐ Original Message ‐‐‐
On Friday, July 9th, 2021 at 8:11 AM, mabi wrote:
> Hello,
>
> I rebooted all 8 nodes
Hi everyone, something strange here with bucket resharding vs. bucket
listing.
I have a bucket with about 1M objects in it. I increased the bucket
quota from 1M to 2M and manually resharded from 11 to 23 shards
(dynamic resharding is disabled).
Since then, the user can't list objects in some paths.
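For anyone trying to reproduce, the manual steps were along these lines
(the user and bucket names here are placeholders):

    # raise the bucket-scope quota for the owning user
    radosgw-admin quota set --quota-scope=bucket --uid=testuser --max-objects=2000000
    radosgw-admin quota enable --quota-scope=bucket --uid=testuser

    # manual reshard from 11 to 23 shards
    radosgw-admin bucket reshard --bucket=testbucket --num-shards=23
    radosgw-admin reshard status --bucket=testbucket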
Hello Cephers,
I'm disappointed. I thought I had found a good way to migrate from one
data pool to another without too much downtime.
I use XFS on RBD, via KRBD, to store backups (see another thread). XFS
with reflink and crc (accelerates Veeam merges).
Also, I want to migrate from an EC k3m2 pool
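(For context, the generic mechanism for moving an RBD image to a
different data pool is rbd live migration, sketched below with
placeholder image and pool names; it needs a Nautilus or later cluster,
and the image still has to be unmapped from KRBD while it is prepared,
so it is not entirely downtime-free.)

    # create the target with its data in the new pool, linked to the source
    rbd migration prepare --data-pool new_ec_pool rbd/backups rbd/backups.new
    # copy the blocks in the background
    rbd migration execute rbd/backups.new
    # drop the link to the old image once the copy is done
    rbd migration commit rbd/backups.new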
FWIW I’ve corresponded with someone else who has had more success with this
route than with vfs_ceph, especially when using distributions for which it is
not prepackaged.
> On Jul 12, 2021, at 7:09 AM, Marc wrote:
>
> Oh thanks Magnus for clearing this up.
> Can we use the same client server to store the keyrings for the
> different clusters in the ceph.conf file?
The keys are usually in their own files in /etc/ceph, not in ceph.conf.
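For example (the second cluster name "backup" is made up; the files
follow the usual $cluster.$name convention):

    /etc/ceph/ceph.conf
    /etc/ceph/ceph.client.admin.keyring
    /etc/ceph/backup.conf
    /etc/ceph/backup.client.admin.keyring

    # point the tools at the second cluster by name
    ceph --cluster backup status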