Hi all,
I have recently set up a Ceph cluster and, on request, am using CephFS (MDS
version: ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988)
mimic (stable)) as a backend for NFS-Ganesha. I have successfully tested a
direct mount with CephFS to read/write files; however, I'm perplexed as
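For reference, the direct mount test was roughly along these lines (assuming the kernel client; the monitor address, CephX user and secret file path below are placeholders, not my real values):

  sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=100    # quick write check
  dd if=/mnt/cephfs/testfile of=/dev/null bs=1M              # quick read check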
Hi,
on a Luminous 12.2.11 deployment, my BlueStore OSDs exceed the
osd_memory_target:
daevel-ob@ssdr712h:~$ ps auxw | grep ceph-osd
ceph      3646 17.1 12.0 6828916 5893136 ?   Ssl  mars29 1903:42
/usr/bin/ceph-osd -f --cluster ceph --id 143 --setuser ceph --setgroup ceph
ceph      3991 1
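In case it helps anyone comparing numbers: assuming the admin socket is reachable on the OSD host, the two queries I would start with are something like the following (osd.143 just mirrors the example above):

  # what the daemon actually believes its memory target is
  sudo ceph daemon osd.143 config get osd_memory_target

  # per-pool memory accounting (bluestore caches, pglog, osdmaps, ...)
  sudo ceph daemon osd.143 dump_mempools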
We also have a hybrid Ceph/libvirt-KVM setup, using some scripts to do
live migration. Do you have auto failover in your setup?
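(For what it's worth, the core of such a migration with RBD-backed disks usually reduces to a single virsh call; the hostnames and domain name below are placeholders, and this is only a generic sketch rather than the exact command our scripts wrap:)

  # disks live on shared Ceph RBD storage, so no --copy-storage-all is needed
  virsh migrate --live --persistent --verbose vm01 qemu+ssh://kvm-host2.example.com/system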
-----Original Message-----
From: jes...@krogh.cc [mailto:jes...@krogh.cc]
Sent: 05 April 2019 21:34
To: ceph-users
Subject: [ceph-users] VM management setup
Hi. Know
On Fri, 5 Apr 2019 at 17:42, Casey Bodley wrote:
>
> Hi Iain,
>
> Resharding is not supported in multisite. The issue is that the master zone
> needs to be authoritative for all metadata. If bucket reshard commands are run on
> the secondary zone, they create new bucket instance metadata that the ma
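For anyone wanting to check whether a secondary zone has already grown such divergent instance metadata, something along these lines should show it (the bucket name is a placeholder, and this is a generic illustration, not part of Casey's original reply):

  radosgw-admin bucket stats --bucket=mybucket
  # take the bucket "id" / marker from the stats output, then:
  radosgw-admin metadata get bucket.instance:mybucket:<instance-id>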