You can use multiple "steps" in your CRUSH rule to, for example, first choose two different hosts and then choose more than one OSD on each of those hosts, so you can get three replicas onto two hosts without risking all three replicas ending up on a single node.
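
As a rough sketch (the rule name, ruleset number, and the "default" root are placeholders for whatever your crushmap actually contains), a rule along these lines first picks the hosts and then up to 2 OSDs on each of them, so a pool with size 3 ends up with 2 copies on one host and 1 on the other:

    rule replicated_two_hosts {
            ruleset 1
            type replicated
            min_size 2
            max_size 3
            step take default
            # pick as many distinct hosts as there are replicas
            # (capped at the 2 hosts that exist)
            step choose firstn 0 type host
            # then take up to 2 OSDs from each chosen host
            step chooseleaf firstn 2 type osd
            step emit
    }

You then compile the edited crushmap back in and point the pool's ruleset at this rule.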

On 28/07/2014 18:14, Craig Lewis wrote:
That's expected. You need more than 50% of the monitors up. If you only have 2 machines, rebooting one means only 50% are up, so the cluster halts operations. That's done on purpose to avoid split-brain: if the cluster were divided exactly in half, both halves could continue to run, each thinking the other half is down. Monitors don't need a lot of resources, so I'd recommend adding a small box as a third monitor. A VM is fine, as long as it has enough IOPS to its disks.
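
To illustrate the quorum math: with 2 monitors a majority is 2, so losing either one stops the cluster, while with 3 monitors a majority is 2 and any single monitor can go down. Assuming the third box is called mon3 (the hostnames and addresses below are made up), the monitor list in ceph.conf would end up looking something like:

    [global]
        # three monitors: any single failure still leaves a 2/3 majority
        mon initial members = node1, node2, mon3
        mon host = 192.168.0.1, 192.168.0.2, 192.168.0.3

The new monitor daemon then still needs to be created on that box, e.g. with ceph-deploy or by following the add-a-monitor steps in the docs.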

It's best to have 3 storage nodes. A new, out-of-the-box install tries to store data on at least 3 separate hosts. You can lower the replication level to 2, or change the CRUSH rules so that data is stored on 3 separate disks instead of 3 separate hosts. The latter might put all 3 copies on the same host, though, so lowering the replication level to 2 is probably the better option.
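
Lowering the replication level is a single command per existing pool; "mypool" here is just a placeholder for whichever pool you actually use:

    ceph osd pool set mypool size 2

For pools created afterwards you can also set "osd pool default size = 2" in ceph.conf so new pools start out at 2 replicas.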

I think it's also possible to require that data be stored on 3 separate disks, with those disks spread across the 2 nodes. Editing the CRUSH rules is a bit advanced: http://ceph.com/docs/master/rados/operations/crush-map/
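
If you do go down that road, the usual round trip (file names here are arbitrary) is to export the crushmap, decompile it, edit the rules, recompile, and inject it back:

    # grab and decompile the current crushmap
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt (e.g. add a rule like the one above),
    # then recompile and load it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Afterwards you point the pool at the new rule with "ceph osd pool set <pool> crush_ruleset <ruleset number>".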

On Mon, Jul 28, 2014 at 9:59 AM, Don Pinkster <d...@pinkster.me> wrote:

    Hi,

    Currently I am evaluating multiple distributed storage solutions
    with an S3-like interface.
    We have two huge machines with large amounts of storage. Is it
    possible to make these two behave exactly the same with Ceph? My
    idea is running both MON and OSD on these two machines.

    In quick tests the cluster is degraded after a reboot of 1 host
    and is not able to recover from the reboot.

    Thanks in advance!





_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

