Hi guys,
We've got a very small Ceph cluster (3 hosts, 5 OSDs each for cold data)
that we intend to grow later on as more storage is needed. We would very
much like to use Erasure Coding for some pools but are facing some
challenges regarding the optimal initial profile “replication” settings
given our small initial number of hosts.
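To make it concrete, this is roughly what I'm experimenting with (the
profile and pool names are just placeholders, and k=2/m=1 is only an
example; newer releases call the last parameter crush-failure-domain
instead of ruleset-failure-domain):

    # one chunk per host, so the pool survives the loss of a single host
    ceph osd erasure-code-profile set coldprofile \
        k=2 m=1 ruleset-failure-domain=host
    # a profile can't be changed once a pool uses it, hence the question
    ceph osd pool create coldpool 128 128 erasure coldprofile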
Great info! Many thanks!
Tom
2015-03-25 13:30 GMT+01:00 Loic Dachary:
> Hi Tom,
>
> On 25/03/2015 11:31, Tom Verdaat wrote:
> > Hi guys,
> >
> > We've got a very small Ceph cluster (3 hosts, 5 OSDs each for cold
> > data) that we intend to grow later on as mo
Hi all,
I've set up a new Ceph cluster for testing and it doesn't seem to be
working out of the box. If I check the status it tells me that of the 3
defined OSDs, only 1 is in:
  health HEALTH_WARN 392 pgs degraded; 392 pgs stuck unclean
  monmap e1: 3 mons at {controller-01=10.20.3.110:6
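In case it helps, this is what I've been using to check the OSD states
(standard commands, nothing exotic):

    ceph osd tree       # up/down and in/out state of every OSD, per host
    ceph osd stat       # one-line summary: N osds: X up, Y in
    ceph health detail  # lists the degraded/stuck PGs and the OSDs involved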
That was it!
Sorry, the 10.20.4.x NICs weren't configured correctly on those two nodes.
I'll admit this one was definitely my mistake.
Thanks for pointing it out.
Tom
2013/7/9 Gregory Farnum
> On Tue, Jul 9, 2013 at 3:08 AM, Tom Verdaat wrote:
> > Hi all,
> >
Hi guys,
We want to use our Ceph cluster to create a shared-disk file system to host
VMs. Our preference would be to use CephFS, but since it is not considered
stable I'm looking into alternatives.
The most appealing alternative seems to be to create an RBD volume, format
it with a cluster file system like OCFS2 or GFS2, and mount it on every
compute node.
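Roughly the setup I have in mind, as a sketch (image name and size are
placeholders, and the OCFS2/GFS2 cluster stack configuration itself is
left out):

    # create the shared image and map it on every compute node
    rbd create vmstore --size 1048576   # 1 TB; rbd sizes are in MB
    rbd map vmstore                     # appears as /dev/rbd/rbd/vmstore
    # format once, from a single node (-N = max number of nodes)
    mkfs.ocfs2 -N 8 /dev/rbd/rbd/vmstore
    # then mount it on every compute node
    mount /dev/rbd/rbd/vmstore /var/lib/nova/instances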
You are right, I do want a single RBD, formatted with a cluster file
system, to use as a place for multiple VM image files to reside.
Doing everything straight from volumes would be more effective with regard
to snapshots, CoW cloning, etc., but unfortunately for now OpenStack nova
insists on having [...]
[...] and persistent volumes, not running instances, which is
what my question is about.
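For reference, the kind of CoW workflow volumes would give us looks like
this (pool and image names are made up):

    rbd snap create rbd/base-image@gold       # snapshot a prepared base image
    rbd snap protect rbd/base-image@gold      # required before cloning from it
    rbd clone rbd/base-image@gold rbd/vm-01   # instant CoW clone for a new VM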
2013/7/12 Alex Bligh
> Tom,
>
> On 11 Jul 2013, at 22:28, Tom Verdaat wrote:
>
> > Actually I want my running VMs to all be stored on the same file system,
> > so we can use live migration to move them between hosts.
> > [...] the alternative is to mount a shared filesystem
> > on /var/lib/nova/instances of every compute node. Hence the RBD +
> > OCFS2/GFS2 question.
> >
> >
> > Tom
> >
> >
> > p.s. yes I've read the rbd-openstack page which covers images and
> > persistent volumes, not running instances, which is what my question is
> > about.