On Fri, Nov 9, 2018 at 3:17 AM Bill Kenworthy <bi...@iinet.net.au> wrote:
>
> I'll second your comments on ceph after my experience - great idea for
> large scale systems, otherwise performance is quite poor on small
> systems. Needs at least GB connections with two networks as well as only
> one or two drives per host to work properly.
>
> I think I'll give lizardfs a go - an interesting read.
>
So, ANY distributed/NAS solution is going to want a good network
(gigabit or better) if you care about performance. With Ceph and its
rebuilds/etc it probably makes an even bigger difference, but lizardfs
still shuttles data around. With replication any kind of write is
multiplied, so even moderate use is going to eat a lot of network
bandwidth. If you're talking about hosting OS images for VMs it is a
big deal. If you're talking about hosting TV shows for your Myth
server or whatever, it probably isn't as big a deal unless you have 14
tuners and 12 clients.

Lizardfs isn't without its issues. For my purposes it is fine, but it
is NOT as robust as Ceph. Direct comparisons are hard to find online,
but here are some of my observations (I run lizardfs; Ceph I know only
from reading up on it, not from running it):

* Ceph (esp for obj store) is designed to avoid bottlenecks. Lizardfs
has a single master server that ALL metadata requests have to go
through. Once you get into dozens of nodes that master will start to
be a bottleneck, but it also eliminates some of the rigidity of Ceph,
since clients don't have to know where all the data is. I imagine it
adds a bit of latency to reads.

* Lizardfs defaults to acking writes after the first node receives
them, then replicates them. Ceph defaults to acking after all
replicas are made. For any application that takes transactions
seriously there is a HUGE data-security difference here, though it
does lower write latency for lizardfs.

* Lizardfs makes it a lot easier to tweak storage policy at the
directory/file level (there's a goal sketch below). Cephfs basically
does this at the mountpoint level.

* Ceph CRUSH maps are much more configurable than Lizardfs goals.
With Ceph you could easily say that you want 2 copies, on hard drives
from different vendors, in different datacenters (see the CRUSH rule
sketch below). With Lizardfs combining tags like this is less
convenient, and while you can say that you want one copy in rack A
and one in rack B, you can't say that you don't care which two racks
are used as long as they are different.

* The lizardfs high-availability stuff (the equivalent of Ceph's
monitors) only recently went FOSS, and probably isn't packaged in
most distros yet. You can have backup masters that are ready to go,
but you need your own solution for promoting them (see the note
below).

* Lizardfs security seems to be non-existent. Don't stick it on your
intranet if you are a business. It's fine for home, or maybe for a
segregated SAN, or you could stick it all behind some kind of VPN and
roll your own security layer. Ceph security seems pretty robust, but
watching what the ansible playbook did to set it up makes me shudder
at the thought of doing it by hand - lots of keys that all need to be
in sync so that everything can talk to everything else. I'm not sure
whether clients can outsource authentication to kerberos/etc - not a
need for me, but I wouldn't be surprised if it is supported. The key
syncing makes a lot more sense within the cluster itself.

* Lizardfs is MUCH simpler to set up. For Ceph I recommend the
ansible playbook, though if I were using it in production I'd want to
do some serious config management, as it seems rather complex and
like the sort of thing that could take out half a datacenter if it
had a bug.
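Since I mentioned goals a couple of times, here is roughly what the
per-directory policy looks like. This is a sketch from the docs, not
a config I'm claiming is production-ready - the labels, goal names,
and paths are all made up. Goals live in mfsgoals.cfg on the master,
and each chunkserver advertises its label in mfschunkserver.cfg:

    # /etc/mfs/mfsgoals.cfg on the master
    # format: id name : one label per desired copy (_ = any server)
    2 2 : _ _
    10 two_racks : rackA rackB
    11 ssd_only : ssd ssd

    # /etc/mfs/mfschunkserver.cfg on each chunkserver
    LABEL = rackA

    # from a client mount, policy is applied per path:
    lizardfs setgoal -r two_racks /mnt/lizard/important
    lizardfs getgoal /mnt/lizard/important

This is also why the rack example works the way it does: a goal
enumerates specific labels, so "rackA + rackB" is expressible but
"any two different racks" is not.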
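The Ceph side of the "2 copies in different datacenters" example
would be a CRUSH rule along these lines. Again just a sketch - it
assumes you've already defined datacenter buckets in your CRUSH
hierarchy, and the rule name/id are invented:

    rule replicated_two_dc {
            id 2
            type replicated
            min_size 2
            max_size 2
            # descend from the root, then pick one leaf (OSD) under
            # each of the required number of distinct datacenters
            step take default
            step chooseleaf firstn 0 type datacenter
            step emit
    }

Because the rule is written against the failure-domain tree rather
than against specific labels, "any two different datacenters" falls
out naturally here.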
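On the HA point: as I understand it, a backup ("shadow") master is
just a second master daemon with a different personality setting,
something like the below (hostname made up). The part you have to
supply yourself is the failover logic:

    # /etc/mfs/mfsmaster.cfg on the backup box
    PERSONALITY = shadow
    MASTER_HOST = mfsmaster.example.com

    # promotion is basically: flip PERSONALITY to master, restart
    # the daemon, and repoint the mfsmaster DNS name (or a VIP) at
    # this box - via your own script, keepalived, etc.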
For Lizardfs, if you're willing to use the suggested hostnames, about
95% of it is auto-configuring: storage nodes just reach out to the
default master DNS name and report in, and everything trusts
everything - not just by default; I don't think you even can lock it
down, short of sticking every node behind a VPN to limit who can talk
to whom.

-- 
Rich