Has anyone looked at StorPool? They seem to be doing an OnApp-storage-like setup…
Regards, Frank

> On 17 Jun 2016, at 16:24, Dustin Wright <[email protected]> wrote:
>
> Thank you for the valuable feedback. I have been considering the same setup
> Jeroen proposed.
>
> I think what most of us want is something like OnApp integrated storage. At
> least, that is what it sounds like to me.
>
> It would be neat if ACS had a system VM baked in for running Ceph, or some
> type of distributed storage, either VM or object storage. IMO, running a big,
> expensive enterprise SAN for your block storage doesn't feel very
> cloud-like.
>
> Dustin
>
> On Fri, Jun 17, 2016 at 9:59 AM, Stephan Seitz <[email protected]> wrote:
>
>> Hi!
>>
>> Independently of CloudStack, I'd strongly recommend not running Ceph
>> and hypervisors on the very same machines. If you just want to build a
>> POC, this is fine, but if you put load on it, you'll see unpredictable
>> behavior (at least on the Ceph side) due to heavy I/O demands.
>> Ceph recommends at least 1 core and 1 GB RAM as a rule of thumb for
>> each OSD.
>> BTW, I also wouldn't run a Ceph cluster with only two nodes. Your MONs
>> should be able to form a quorum, so you'd need at least three nodes.
>>
>> If you run a cluster with fewer than about 6 or 8 nodes, I'd give
>> Gluster a try. I've never tried it myself, but I assume it should be
>> usable as "pre-setup" storage, at least with KVM hosts.
>>
>> cheers,
>>
>> - Stephan
>>
>> On Friday, 17.06.2016, at 13:36 +0200, Jeroen Keerl wrote:
>>> Good afternoon from Hamburg, Germany!
>>>
>>> Short question:
>>> Is it feasible to use CloudStack with Ceph on local storage? As in
>>> "hyperconverged"?
>>>
>>> Before ramping up the infrastructure, I'd like to be sure before
>>> buying new hardware.
>>>
>>> At the moment: 2 hosts, each with 2 six-core XEON CPUs, 24 GB RAM,
>>> and 6 x 300 GB SAS drives.
>>>
>>> CEPH advises bigger disks and separate storage "nodes".
>>> The CloudStack documentation says: smaller, high-RPM disks.
>>>
>>> What would you advise? Buy separate "storage nodes" or ramp up the
>>> current nodes?
>>>
>>> Cheers!
>>> Jeroen
>>>
>>> Jeroen Keerl
>>> Keerl IT Services GmbH
>>> Birkenstraße 1b . 21521 Aumühle
>>> +49 177 6320 317
>>> www.keerl-it.com
>>> [email protected]
>>> Managing Director: Jacobus J. Keerl
>>> Registry court Lübeck, HRB-Nr. 14511
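To make Stephan's sizing rule concrete for the hardware Jeroen describes, here is a rough back-of-the-envelope sketch. It only uses the numbers from the thread (2x six-core Xeons, 24 GB RAM, 6 SAS drives per host, ~1 core and ~1 GB RAM per OSD); real Ceph demand can spike well beyond the rule of thumb during recovery and backfill, so treat this as an optimistic lower bound:

```python
# Rough headroom estimate for colocating Ceph OSDs with a KVM hypervisor.
# Inputs come from the thread: 2x 6-core Xeon (12 cores), 24 GB RAM,
# 6x 300 GB SAS per host, and Ceph's rule of thumb of ~1 core / ~1 GB per OSD.

def hyperconverged_headroom(cores, ram_gb, osds,
                            cores_per_osd=1.0, ram_per_osd_gb=1.0):
    """Return (cores, GB RAM) left for guest VMs after reserving for OSDs."""
    return cores - osds * cores_per_osd, ram_gb - osds * ram_per_osd_gb

def mon_failures_tolerated(mons):
    """Ceph MONs need a strict majority alive to form a quorum."""
    return mons - (mons // 2 + 1)

cores_left, ram_left = hyperconverged_headroom(cores=12, ram_gb=24, osds=6)
print(f"Per host: {cores_left:.0f} cores, {ram_left:.0f} GB RAM left for VMs")
print(f"2 MONs tolerate {mon_failures_tolerated(2)} failure(s); "
      f"3 MONs tolerate {mon_failures_tolerated(3)}")
```

With one OSD per SAS drive, half the cores and a quarter of the RAM on each host would be earmarked for Ceph before any guest runs, and a 2-MON cluster tolerates zero failures, which is why the advice above points at separate storage nodes, or at least a third node for quorum.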

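For anyone wanting to try Stephan's Gluster suggestion, a minimal sketch of a three-node replicated volume might look like the following. The hostnames (`node1`..`node3`), brick path, and volume name are placeholders I made up for illustration; this is untested against the hardware in the thread:

```shell
# Hypothetical 3-node replica-3 Gluster volume to back KVM primary storage.
# Run from node1; assumes glusterd is running on all three nodes.
gluster peer probe node2
gluster peer probe node3
gluster volume create cs-primary replica 3 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start cs-primary
```

On the CloudStack side, a volume like this can then be added as primary storage for a KVM cluster (e.g. mounted and used as a shared mount point).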