Hi Joakim,

On Mon, Dec 5, 2016 at 1:35 PM <joa...@verona.se> wrote:

> Hello,
>
> I have a question regarding whether Ceph is suitable for small-scale
> deployments.
>
> Let's say I have two machines, connected with a gigabit LAN.
>
> I want to share data between them, like an ordinary NFS
> share, but with Ceph instead.
>
> My idea is that with Ceph I would get redundancy, with both machines
> holding complete copies of the data. I also imagine that the
> performance could be quite reasonable in principle, depending on how
> Ceph works, which I'm not entirely sure about.
>
> A use case would be sharing my home directory across two machines, or
> maybe three.
>
> A workload I'm concerned with is software builds. Would Ceph be
> competitive in this use case compared with a local disk? As far as
> I can tell, Ceph doesn't use the "eventual consistency" approach. Does
> that mean that all writes have to sync across all the nodes in the Ceph
> cluster before the write can be considered complete? Or is one node
> enough?
>
> /Joakim
>


I have done such setups with one high-quality machine. If stability is a
concern and there is no absolute requirement for multiple head nodes,
simple is a really good approach. You would have to assess your downtime
requirements, but I can tell you that we have over 30 systems out there
running 24/7 on not much more than NFS on ZFS, and these are great
workhorses with little or no downtime.
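
If you go the NFS-on-ZFS route, the storage side stays very small. A
minimal sketch (the pool name, dataset name, and subnet are made up for
illustration):

    # Mirrored pool plus a dataset for home directories
    zpool create tank mirror /dev/sda /dev/sdb
    zfs create -o compression=lz4 tank/home

    # Export over NFS; restrict access to your LAN
    zfs set sharenfs="rw=@192.168.10.0/24" tank/home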

HA features to set up: a good RAID controller (we like Areca), NIC
bonding, ideally with 10GbE or more, in active/backup or LACP mode, and
ECC RAM. Monitor the hardware and take good backups.
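
For the bonding piece, here is a minimal sketch in Debian
/etc/network/interfaces style; the interface names and address are
assumptions, and 802.3ad needs LACP support on the switch, while
active-backup does not:

    auto bond0
    iface bond0 inet static
        address 192.168.10.5
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        # 802.3ad = LACP; use active-backup if the switch can't do LACP
        bond-mode 802.3ad
        bond-miimon 100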

I have a decent lab Ceph setup with three OSD nodes and virtual machines
as MONs, running off separate storage. I would not do Ceph with two OSD
nodes: with only two, you cannot keep three replicas, and losing one
node leaves you writing with no redundancy at all.
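
To make the replication point concrete, this is roughly what the pool
settings look like (the pool name and PG count below are placeholders,
not a recommendation):

    # Keep three copies of every object; refuse I/O below two copies
    ceph osd pool create data 128
    ceph osd pool set data size 3
    ceph osd pool set data min_size 2

That also answers your consistency question: RADOS is strongly
consistent, and the primary OSD acknowledges a write only after all
replicas in the acting set have it, so network latency sits directly in
the write path of a build workload.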

As Christian replied, DRBD has been widely used for two-node setups.
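
If you do try DRBD, a two-node resource is only a few lines. A sketch,
with hostnames, devices, and addresses invented for the example;
protocol C is the fully synchronous mode, so a write completes only once
both nodes have it, which matches the semantics you asked about:

    resource r0 {
        protocol C;        # synchronous: ack only after both nodes have the write
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on nodea {
            address 192.168.10.1:7789;
        }
        on nodeb {
            address 192.168.10.2:7789;
        }
    }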

Good luck!
Alex
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
