Hi,
I have a small cluster with only three nodes, 4 OSDs + 3 OSDs. I had been
running version 0.87.2 (Giant) for over 2.5 years, but a couple of days ago I
upgraded to 0.94.10 (Hammer) and then up to 10.2.7 (Jewel). Both
upgrades went great. I started with the monitors, then the OSDs and finally
the MDS. The log sh
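
For reference, a minimal sketch of the restart order described above, assuming
systemd-managed daemons as shipped with the Jewel packages on CentOS 7; the
hostnames and OSD id are placeholders:

    # keep data in place while OSDs bounce
    ceph osd set noout

    # restart in order: monitors first, then OSDs, then the MDS
    systemctl restart ceph-mon@$(hostname -s)    # on each mon node, one at a time
    systemctl restart ceph-osd@0                 # on each OSD node, per OSD id
    systemctl restart ceph-mds@$(hostname -s)    # on the MDS node

    ceph osd unset noout
    ceph -s    # wait for HEALTH_OK before moving to the next node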
Hi,
We are actually using three Intel servers with 12 OSDs and one Supermicro with 24 OSDs
in one Ceph cluster, with journals on NVMe per server. We have not seen any issues yet.
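
For anyone reproducing this layout, a rough sketch of how per-OSD journals on a
shared NVMe are usually specified with ceph-deploy; the host, data disks and NVMe
partitions below are assumptions, not our actual device names:

    ceph-deploy osd create node1:/dev/sdb:/dev/nvme0n1p1
    ceph-deploy osd create node1:/dev/sdc:/dev/nvme0n1p2

One journal partition on the NVMe device per OSD.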
Best
Mehmet
On 9 June 2017 at 19:24:40 MESZ, Deepak Naidu wrote:
>Thanks David for sharing your experience, appreciate it.
>
>--
>De
On Fri, Jun 16, 2017 at 4:05 PM, Stéphane Klein
wrote:
> Hi,
>
> I would like to use the CephFS kernel module on CentOS 7.
>
> I use the Atomic version of CentOS.
>
> I don't know where the CephFS kernel module rpm package is.
>
> I have installed the libcephfs1-10.2.7-0.el7.x86_64 package and I have this:
>
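
Worth noting for the question above: on a stock CentOS 7 kernel the CephFS client
is normally the in-kernel ceph module shipped with the kernel package itself, not a
separate rpm (the mount.ceph helper typically comes from ceph-common). A quick
check, with a placeholder monitor address and secret file:

    modprobe ceph && lsmod | grep '^ceph'
    mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret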
I have an eight node ceph cluster running Jewel 10.2.5.
One Ceph-Deploy node. Four OSD nodes and three Monitor nodes.
The Ceph-Deploy node is r710T.
The OSD nodes are r710a, r710b, r710c, and r710d.
The Monitor nodes are r710e, r710f, and r710g.
Name resolution is via the hosts file on each node.
Successfully removed Monitor r
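
For reference, the removal step described above is usually one of the following
(r710g here is just a placeholder, not necessarily the mon that was removed):

    ceph-deploy mon destroy r710g
    # or, without ceph-deploy:
    ceph mon remove r710g
    ceph -s    # quorum should now list only the remaining monitors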
Has anyone run into a config where a single client consumes storage from
several Ceph clusters that are unrelated to each other (different MONs, OSDs,
and keys)?
We have a Hammer and a Jewel cluster now, and this may be a way to have
very clean migrations.
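What we have in mind is roughly the following (a sketch, assuming the stock
/etc/ceph layout; the cluster names "hammer" and "jewel" are just placeholders):
one conf file and keyring per cluster, selected with --cluster:

    /etc/ceph/hammer.conf  +  /etc/ceph/hammer.client.admin.keyring
    /etc/ceph/jewel.conf   +  /etc/ceph/jewel.client.admin.keyring

    ceph --cluster hammer -s
    rbd --cluster jewel ls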
Best regards,
Alex
Storcium
Do you have a firewall enabled on the new server by any chance?
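
If firewalld is running, a generic way to check and open the ports Ceph expects
(6789/tcp for the mons, 6800-7300/tcp for the OSDs); the commands below are
illustrative rather than a diagnosis of your setup:

    firewall-cmd --list-all
    firewall-cmd --permanent --add-port=6789/tcp
    firewall-cmd --permanent --add-port=6800-7300/tcp
    firewall-cmd --reload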
On Sun, Jun 18, 2017 at 8:18 PM, Jim Forde wrote:
> I have an eight node ceph cluster running Jewel 10.2.5.
>
> One Ceph-Deploy node. Four OSD nodes and three Monitor nodes.
>
> Ceph-Deploy node is r710T
>
> OSDs are r710a, r710b, r710c, and
> Hi Alex,
>
> Have you experienced any problems with timeouts in the monitor action in
> pacemaker? Although largely stable, every now and again in our cluster the
> FS and Exportfs resources time out in pacemaker. There's no mention of any
> slow requests or any peering, etc. in the ceph logs, so
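
Not a root-cause fix, but as an illustration: if the failures really are just
monitor-op timeouts, the op timeout on the affected resources can be raised with
pcs; the resource name and values here are placeholders:

    pcs resource update fs_cephfs op monitor interval=20s timeout=60s
    pcs resource show fs_cephfs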