I tried adding it to ganesha.conf, but it didn't work out.
Using the default "ganesha-ceph.conf" file which comes with the
"ganesha-ceph" installation works fine.
I will try again using the conf file provided in the nfs-ganesha GitHub repo.
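For reference, a minimal CephFS export of the kind shipped as a sample config
in the nfs-ganesha repo looks roughly like this; Export_ID, the Pseudo path
and User_Id here are placeholders to adapt:

  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      FSAL {
          Name = CEPH;
          User_Id = "admin";
      }
  }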
On Fri, May 15, 2020 at 6:30 PM Daniel Gryniewicz wrote:
Hi,
I am using a Ceph Nautilus cluster with the configuration below.
3 nodes (Ubuntu 18.04), each with 12 OSDs; mds, mon and mgr are running
in shared mode.
The client is mounted through the ceph kernel client.
I was trying to emulate a node failure while a write and a read were going
on against a replica-2 pool.
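For reference, a replica-2 pool like the one described can be recreated along
these lines (the pool name and PG count here are just examples):

  ceph osd pool create testpool 128 128 replicated
  ceph osd pool set testpool size 2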
> I thought it was a method (the method?) to tell, when a PG comes back from
> a crashed OSD/host, whether it was up-to-date or stale, since a stale copy
> would have an older timestamp.
Thanks. That's a reasonable theory. Maybe I'll look in the code and see if
I can confirm it.
And it means on my cluster, onc
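The per-PG metadata behind that theory can be dumped directly; a rough
sketch, with a made-up PG id:

  ceph pg 1.0 query | grep -E 'last_update|last_complete'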
What’s your pool configuration wrt min_size and crush rules?
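Both can be read back like this, assuming a pool named testpool and the
default replicated_rule:

  ceph osd pool get testpool min_size
  ceph osd pool get testpool crush_rule
  ceph osd crush rule dump replicated_rule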
Hi All,
I have a question regarding the Ceph Nautilus upgrade. In our test
environment we are upgrading from Luminous to Nautilus 14.2.8, and after
enabling msgr2 we saw one of the mon nodes restart. My first question: is a
mon service restart a normal part of this process? And my second question:
we are using the below mon_host format that
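For context, enabling msgr2 and the Nautilus-style mon_host syntax look
roughly like this; the IPs below are hypothetical:

  # switch the mons over to the v2 protocol after the upgrade
  ceph mon enable-msgr2

  # mon_host listing both v2 (3300) and v1 (6789) addresses per mon
  mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789],[v2:10.0.0.2:3300,v1:10.0.0.2:6789]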