On Wed, Feb 20, 2013 at 4:31 AM, Chandan Purushothama
<chandan.purushoth...@citrix.com> wrote:
> Hello Rohit,
>
> Should the procedure of Building the SystemVM template appliance be tested?

Sure.

>Or Should the templates produced at the end be tested for their functionality?

This is more important. I've opened an issue for this and assigned
it to myself, but you're welcome to help:
https://issues.apache.org/jira/browse/CLOUDSTACK-1340

>I referred to your blog and found information pertaining to challenges that 
>you faced during building the template but didn't find information pertaining 
>to procedure.

So, we just used veewee to do the building and then used a couple of
other tools to export the final image to various formats. But a
procedure for what, exactly: building systemvms, or how to DIY a
systemvm? For building systemvms, read the README.md file in
tools/appliance; you just need to set up a bunch of dependencies and
call ./build.sh for automatic building and export, or just:
    veewee vbox build 'systemvmtemplate'
    veewee vbox halt 'systemvmtemplate'

And then export the appliance manually via vboxmanage (the VirtualBox
manager), qemu-img, etc.
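For example, a manual export could look roughly like this (just an
illustrative sketch; the VM and file names are placeholders and will
differ on your setup):
    # export an OVA for VMware straight from VirtualBox
    vboxmanage export systemvmtemplate -o systemvmtemplate.ova
    # convert the VirtualBox disk to a raw image first
    vboxmanage clonehd systemvmtemplate.vdi systemvmtemplate.raw --format RAW
    # raw -> qcow2 for KVM
    qemu-img convert -f raw -O qcow2 systemvmtemplate.raw systemvmtemplate.qcow2
    # raw -> vhd ("vpc" in qemu-img terms) for HyperV; the Xen case may
    # also need vhd-util, as discussed further down the thread
    qemu-img convert -f raw -O vpc systemvmtemplate.raw systemvmtemplate.vhd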

>May I know the location where I can find documentation pertaining to the 
>procedure and what needs to be tested as a QA effort?

tools/appliance/README.md, and the wiki; I'll set one up soon. For
the QA effort, we just need to test that all the exported formats
work fine on their respective hypervisors; a basic zone deployment, a
running VM, and network/router life cycles should be enough for
starters.
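As a quick first pass before touching a hypervisor, you could also
sanity-check the exported images locally, something like (file names
are only examples):
    # qemu-img should identify each image's format correctly
    qemu-img info systemvmtemplate.qcow2   # expect "file format: qcow2"
    qemu-img info systemvmtemplate.vhd     # vhd is reported as "vpc"
    # an OVA is just a tar of the .ovf descriptor plus the disk image
    tar tf systemvmtemplate.ova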

Regards.

>
> Thank you,
> Chandan.
>
> -----Original Message-----
> From: rohityada...@gmail.com [mailto:rohityada...@gmail.com] On Behalf Of 
> Rohit Yadav
> Sent: Tuesday, February 19, 2013 2:31 AM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: Building SystemVM template appliance
>
> On Tue, Feb 19, 2013 at 8:23 AM, Chiradeep Vittal 
> <chiradeep.vit...@citrix.com> wrote:
>> Hi Rohit
>>
>>>>
>>>> Are the format conversions automated?
>>>
>>>They are now! The file name format is:
>>>$appliance-$build_date-$branch-hyperv.$format
>>>Format conversion and archiving is automated on a large instance with
>>>export formats:
>>>ova -> vmware
>>>vhd -> xen
>>>qcow2 -> kvm
>>>vhd -> hyperv
>>>
>>>Build job:
>>>http://jenkins.cloudstack.org/view/All/job/build-systemvm-master
>>>
>>>But I've yet to try the xen or kvm templates. The bugs would be
>>>mostly fixable in postinstall.sh.
>>>
>>>For exporting xen images from the raw image, I had to build xen (for a
>>>library) and vhd-util from blktap2/tools. Will post a blog on that
>>>exercise.
>>
>> I used to use a patch from the Xen ML to enable vhd-util to do
>> conversions, but IIRC this has been merged into master. If others
>> (outside of the build server) want to use vhd-util, what are their options?
>>
>>>
>>>I've fixed the partitioning as a regular layout with separate partitions
>>>for /var, /opt, /usr, /tmp, /boot and /home. Please see the partition
>>>sizes (syntax: min. size, priority, max. size) and advise if we want to
>>>change them:
>>>https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=commit;h=ab63a433ecbf60e18ad6cbcb0353c61fa432bcdc
>>
>> Looks good. Glad to see swap space. Have dealt with a few OOMs when
>> the system vm was FC8 and did not have swap.
>>
>>>
>>>Chiradeep, we can copy stuff inside the appliance using veewee vbox
>>>ssh <command>, or by scp'ing before we halt using veewee vbox halt
>>><machine>. How do you want to use the config.dat?
>>
>> We should not need config.dat now that the debconf-set-selections
>> trick is used.
>
> First automated build (the whole rvm non-interactive, non-login shell issue
> took me a lot of time to figure out; the bug on Jenkins was that $HOME was
> not defined for rvm):
> http://jenkins.cloudstack.org/job/build-systemvm-master/47/console
>
> Archived appliances:
> http://people.apache.org/~bhaisaab/systemvm
>
> Blog:
> http://rohityadav.in/logs/building-systemvms/
>
> Now, all we need to do is test them against real hosts and fix any
> post-installation scripts and network config.
>
> Regards.
>
>>
>>
