I don't think 4.2 is coming out anytime soon. Right now all 4.1 builds are
stable except for the systemvm. Have a look at their Jenkins server,
http://jenkins.cloudstack.org/. 4.2 is not even listed there. :(
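
For anyone curious what the serial-console metadata handoff Wido describes
below might look like from inside the guest, here is a rough sketch. The
device path and the key=value payload format are my own assumptions for
illustration, not CloudStack's actual protocol.

```python
# Rough guest-side sketch of a VirtIO serial metadata handoff: the
# hypervisor writes simple key=value lines to a virtio-serial port and
# the SystemVM parses them at boot. Device path and payload format are
# assumptions, not CloudStack's actual protocol.
import io


def parse_metadata(stream):
    """Parse 'key=value' lines (e.g. ip=..., host=...) into a dict."""
    meta = {}
    for line in stream:
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and malformed lines
        key, _, value = line.partition("=")
        meta[key.strip()] = value.strip()
    return meta


if __name__ == "__main__":
    # On a real guest this would be the virtio-serial device, e.g.:
    #   with open("/dev/virtio-ports/metadata") as port: parse_metadata(port)
    sample = io.StringIO("ip=10.1.1.5\nhost=192.168.0.10\n")
    print(parse_metadata(sample))
```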

Best Regards,
Dewan Shamsul Alam



On Thu, May 16, 2013 at 9:04 PM, Wido den Hollander <w...@42on.com> wrote:

> On 05/16/2013 04:34 PM, Dewan Shamsul Alam wrote:
>
>> Hi,
>>
>> I will be deploying CloudStack and will use Ceph. Too bad CloudStack
>> requires NFS as the primary storage for System VM. So I have to use a
>> DRBD+NFS for that. The setup is as follows:
>>
>
> Wait for CloudStack 4.2 :) As we speak I'm working on the new features for
> CloudStack 4.2, which will bring:
> - Cloning (layering) of templates
> - Snapshotting
> - Running SystemVMs off RBD
>
> The last was made possible by removing the so-called "patch disk" in
> CloudStack.
>
> When a SystemVM boots up it requires metadata, such as what its IP address
> should be, where the management server can be found, and so on. We used to
> generate a file (yes, a FILE) on the Primary Storage which was attached as an
> extra disk. As you can guess, RBD images are not files, and the Bash script
> which did this didn't understand RBD.
>
> The new way is that we open a VirtIO serial console to the SystemVM on the
> hypervisor where it is running, and over that serial console the SystemVM
> receives its metadata.
>
> This way we can deploy System VMs from Ceph without the need for NFS or
> anything else.
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>
>
>> 3-node Ceph cluster [Bobtail] - planning to upgrade to Cuttlefish after
>> trying this.
>> 2 nodes for DRBD+NFS
>> 1 VM for the management system - I'm not worried about high availability
>> of this node at this point.
>> 3 compute nodes
>> All commodity hardware backed by a gigabit switch
>>
>> I will have this setup running after next week. I will let you guys know
>> how it went.
>>
>> Best Regards,
>> Dewan Shamsul Alam
>>
>>
>>
>>
>> On Thu, May 16, 2013 at 8:02 PM, Patrick McGarry <patr...@inktank.com> wrote:
>>
>>     Greetings ceph-ers,
>>
>>     As you may have noticed lately, there has been a lot of talk about
>>     Ceph and OpenStack.  While we love all of the excitement that this has
>>     generated, we want to make sure that other cloud setups aren't getting
>>     neglected or ignored.  CloudStack, for instance, also has a great Ceph
>>     integration thanks to some enterprising work from Wido at 42on.com.
>>
>>
>>     So, are you using CloudStack and Ceph?  If so we'd love to hear from
>>     you.  Whether it's just a quiet note for our eyes only, or whether you
>>     have a story to share with the world, we'd love to know.  Of course,
>>     we'd love to hear about anything you're working on.  So, if you have
>>     notes to share about Ceph with other cloud flavors, massive storage
>>     clusters, or custom work, we'd treasure them appropriately.
>>
>>     Feel free to just reply to this email, send a message to
>>     commun...@inktank.com, message 'scuttlemonkey' on irc.oftc.net, or
>>     tie a note to our ip-over-carrier-pigeon network.  Thanks, and happy
>>     Ceph-ing.
>>
>>
>>     Best Regards,
>>
>>     Patrick McGarry
>>     Director, Community || Inktank
>>
>>     http://ceph.com  || http://inktank.com
>>     @scuttlemonkey || @ceph || @inktank
>>     _______________________________________________
>>     ceph-users mailing list
>>     ceph-users@lists.ceph.com
>>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>>
>>
>>
>>
>
