On Thu, May 2, 2024 at 2:56 AM V A Prabha wrote:
>
> Dear Eugen,
> We have a DC and DR replication scenario, and plan to explore RBD
> mirroring with both the journaling and snapshot mechanisms.
> I have 5 TB of storage at the primary DC and 5 TB of storage at the DR
> site, on two different Ceph clusters
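
For context, a minimal sketch of enabling both modes and pairing the two clusters; the pool, image and site names below are hypothetical, and flags can differ between releases:

  # on the primary cluster, enable per-image mirroring on the pool
  rbd mirror pool enable mypool image

  # journal-based mirroring for one image
  rbd mirror image enable mypool/img1 journal

  # snapshot-based mirroring for another image
  rbd mirror image enable mypool/img2 snapshot

  # exchange a peer bootstrap token between the two clusters (one-way here)
  rbd mirror pool peer bootstrap create --site-name dc mypool > token
  rbd mirror pool peer bootstrap import --site-name dr --direction rx-only mypool token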
On Tue, May 7, 2024 at 7:54 AM Eugen Block wrote:
>
> Hi,
>
> I'm not the biggest rbd-mirror expert.
> As I understand it, if you use one-way mirroring you can fail over to the
> remote site and continue to work there, but there's no failback to the
> primary site. You would need to stop client IO on DR
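
For reference, a planned failover/failback sequence looks roughly like the following; the names are placeholders and this is only a sketch of the documented commands, not a tested runbook:

  # on the primary, after stopping client IO: demote the image
  rbd mirror image demote mypool/img1

  # on the DR cluster: promote the image so clients can write there
  rbd mirror image promote mypool/img1    # use --force if the primary is unreachable

  # later, resync a diverged former primary from the DR copy
  rbd mirror image resync mypool/img1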
On Thu, Jul 21, 2022 at 10:28 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs
On Fri, Sep 23, 2022 at 6:41 PM Sagittarius-A Black Hole wrote:
>
> Hi,
>
> The below fstab entry works, so that is a given.
> But how do I specify which Ceph filesystem I want to mount in this fstab
> format?
>
> 192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/
> name=james_user, sec
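
For what it's worth, the file system is normally selected with the fs= mount option (mds_namespace= on older kernels). A hedged example line, with placeholder file system name and secret file path:

  192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs ceph name=james_user,secretfile=/etc/ceph/james_user.secret,fs=myfs,_netdev 0 0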
On Tue, Oct 4, 2022 at 5:09 PM Ramana Krisna Venkatesh Raja wrote:
>
> On Tue, Oct 4, 2022 at 5:01 PM Vladimir Brik wrote:
> >
> > Hello
> >
> > I think I may have run into a bug in cephfs that has
> > security implications. I am not sure it's a goo
Hi,
If performance is critical, you'd want CephFS kernel clients to access
your CephFS volumes/subvolumes. On the other hand, if you can't trust
the clients in your cloud, it's recommended that you set up a
gateway (an NFS-Ganesha server) for CephFS. The NFS-Ganesha server uses
libcephfs (userspace
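
Roughly, the two access paths look like this from a client; host names, export path and credentials below are placeholders:

  # trusted client: mount CephFS directly with the kernel client
  mount -t ceph 192.168.1.11:6789:/ /mnt/cephfs -o name=client1,secretfile=/etc/ceph/client1.secret,fs=myfs

  # untrusted client: mount an NFS-Ganesha export, the client never talks to the Ceph daemons directly
  mount -t nfs -o nfsvers=4.1 ganesha.example.com:/cephfs-export /mnt/cephfs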
On Fri, Nov 4, 2022 at 9:36 AM Galzin Rémi wrote:
>
>
> Hi,
> I'm looking for some help/ideas/advice in order to solve the problem
> that occurs on my metadata server after the server reboots.
You rebooted an MDS host and your file system became read-only? Was
the Ceph cluster healthy before r
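
Something like the following is usually enough to capture that state (the MDS name is a placeholder):

  ceph status
  ceph health detail
  ceph fs status
  ceph tell mds.<name> damage ls    # list any recorded metadata damage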
On Mon, Dec 19, 2022 at 11:14 AM Stefan Kooman wrote:
>
> On 12/19/22 16:46, Christoph Adomeit wrote:
> > Hi,
> >
> > we are planning an archive with CephFS containing 2 petabytes of data
> > on 200 slow SATA disks, on a single CephFS with 150 subdirectories. The
> > disks will be around 80%
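
As a rough sanity check of those numbers, assuming the 80% refers to the target fill level and ignoring failure domains for a moment:

  2 PB data / 0.8 fill                  ~ 2.5 PB raw, no redundancy
  2.5 PB / 200 disks                    ~ 12.5 TB per disk
  with 3x replication:  3 * 2 PB / 0.8  ~ 7.5 PB raw
  with 4+2 EC:        1.5 * 2 PB / 0.8  ~ 3.75 PB raw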
On Mon, Dec 19, 2022 at 12:20 PM Ramana Krisna Venkatesh Raja wrote:
>
> On Mon, Dec 19, 2022 at 11:14 AM Stefan Kooman wrote:
> >
> > On 12/19/22 16:46, Christoph Adomeit wrote:
> > > Hi,
> > >
> > > we are planning an archive with cephfs containin