Hello,

 

We are using CloudStack in an environment with Citrix XenServer (Enterprise
Edition) and Dell EqualLogic SANs.

 

We are trying to take advantage of the Citrix StorageLink features,
especially the one that allows us to have a thin-provisioned volume/LUN on
the SAN per virtual machine, rather than using LVM over iSCSI with one big
LUN on the SAN for all the VMs, and we also want to use the SAN-level
snapshots available in CloudStack.

 

For the first part of our problem, we found that using the "Presetup"
configuration for iSCSI connections to our SAN with StorageLink technology
gives us the desired outcome when creating VMs from an ISO. However, when we
try to use templates, things do not work.

 

Further investigation showed that CloudStack's model involves copying all
data from "primary" storage to "secondary" storage when creating a template,
and copying from "secondary" storage back to "primary" storage when creating
a VM from that template (and likewise copying from "primary1" to "secondary"
to "primary2" when moving a VM from one primary storage to another). It
turns out that the scripts that perform this copying on the Xen host (in
/opt/xensource/bin, specifically copy_vhd_to_secondarystorage.sh and
copy_vhd_from_secondarystorage.sh) do not support StorageLink operations.

 

From our perspective (and please correct me if I am wrong), these appear to
be the only scripts that need to be modified to add StorageLink support to
CloudStack. We have successfully modified copy_vhd_to_secondarystorage.sh to
correctly copy from a StorageLink LUN to secondary storage (patch included
below [1]), and with it we can migrate VMs from a StorageLink LUN to an
LVM-over-iSCSI LUN. However, we lack the understanding of the technology
needed to modify copy_vhd_from_secondarystorage.sh to perform the reverse
operation (and here we need your help). We can provide a test environment
for anybody willing to help us.
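To make the question concrete, here is a rough sketch of what we imagine the
StorageLink branch of copy_vhd_from_secondarystorage.sh would have to do:
essentially our posted patch run in reverse (create a VDI on the StorageLink
SR, attach it to dom0, dd the template contents onto it). The function name
and the variable names (vhdfile, sruuid, namelabel) are our assumptions based
on the existing script, not the real interface, and this ignores any
VHD-format conversion that may be needed, which is exactly the part we do not
understand:

```shell
# Sketch only: mirrors the dd-based approach of our to-secondary patch.
# Assumed inputs: $1 = template file on secondary storage, $2 = uuid of
# the target StorageLink SR, $3 = name-label for the new VDI.
copy_vhd_from_sl() {
  local vhdfile=$1 sruuid=$2 namelabel=$3
  local vsize vdiuuid hostuuid domuuid vbd_uuid dev

  # Create a VDI on the StorageLink SR sized to hold the template.
  vsize=$(stat -c %s "$vhdfile")
  vdiuuid=$(xe vdi-create sr-uuid="$sruuid" name-label="$namelabel" \
            type=user virtual-size="$vsize") || {
    echo "997#failed to create VDI on SR ${sruuid}"
    return 1
  }

  # Attach the VDI to the control domain, as in the to-secondary patch.
  hostuuid=$(xe host-list name-label="$(hostname)" params=uuid --minimal)
  domuuid=$(xe vm-list is-control-domain=true resident-on="$hostuuid" \
            params=uuid --minimal)
  vbd_uuid=$(xe vbd-create vm-uuid="$domuuid" vdi-uuid="$vdiuuid" \
             device=autodetect)
  xe vbd-plug uuid="$vbd_uuid"

  # Copy the template's contents onto the LUN, then detach.
  dev=/dev/$(xe vbd-param-get uuid="$vbd_uuid" param-name=device)
  dd if="$vhdfile" of="$dev" bs=2M
  xe vbd-unplug uuid="$vbd_uuid"
  xe vbd-destroy uuid="$vbd_uuid"
  echo "$vdiuuid"
}
```

If somebody can confirm (or correct) this outline, we can do the testing.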

 

Also, if this problem has already been solved but is not yet committed to a
"production" version of CloudStack, we would gladly perform beta testing in
our environment. We are aware that a bug is already open for this issue in
CloudStack (CS-11486).

 

We have not yet pursued the second part of our problem (SAN-level snapshots
in CloudStack), but since we feel these problems are related, we are willing
to offer all the help we can in fixing them.

 

[1]

-- start here --

[root@a11-3-05 bin]# diff copy_vhd_to_secondarystorage.sh.orig copy_vhd_to_secondarystorage.sh
41c41
<   echo "2#no uuid of the source sr"
---
>   echo "2#no uuid of the source vdi"
85a86,106
> elif [ $type == "cslg" -o $type == "equal" ]; then
>   idstr=$(xe host-list name-label=$(hostname) params=uuid)
>   hostuuid=$(echo $idstr | awk -F: '$1 != ""{print $2}' | awk '{print $1}')
>   CONTROL_DOMAIN_UUID=$(xe vm-list is-control-domain=true resident-on=$hostuuid params=uuid | awk '$1 == "uuid"{print $5}')
>   vbd_uuid=$(xe vbd-create vm-uuid=${CONTROL_DOMAIN_UUID} vdi-uuid=${vdiuuid} device=autodetect)
>   if [ $? -ne 0 ]; then
>     echo "999#failed to create VBD for vdi uuid ${vdiuuid}"
>     cleanup
>     exit 0
>   fi
>   xe vbd-plug uuid=${vbd_uuid}
>   svhdfile=/dev/$(xe vbd-param-get uuid=${vbd_uuid} param-name=device)
>   dd if=${svhdfile} of=${vhdfile} bs=2M
>   if [ $? -ne 0 ]; then
>     echo "998#failed to dd $svhdfile to $vhdfile"
>     xe vbd-unplug uuid=${vbd_uuid}
>     xe vbd-destroy uuid=${vbd_uuid}
>     cleanup
>     exit 0
>   fi
>
123a145,147
> xe vbd-unplug uuid=${vbd_uuid}
> xe vbd-destroy uuid=${vbd_uuid}
>

-- end here --
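For anyone reviewing the patch: the two awk pipelines simply pull the UUID
out of "xe ... params=uuid" list output. Fed with sample output in that
format (the UUIDs below are made up for illustration), they behave like
this:

```shell
# Host uuid extraction, as in the patch: split on ':', take the value side.
idstr='uuid ( RO)    : 8a1b2c3d-0000-1111-2222-333444555666'
hostuuid=$(echo $idstr | awk -F: '$1 != ""{print $2}' | awk '{print $1}')
echo "$hostuuid"    # 8a1b2c3d-0000-1111-2222-333444555666

# Control-domain uuid extraction: on an unwrapped "uuid ( RO) : <uuid>"
# line the uuid is the fifth whitespace-separated field.
vmline='uuid ( RO)           : d34db33f-aaaa-bbbb-cccc-dddd00001111'
domuuid=$(echo "$vmline" | awk '$1 == "uuid"{print $5}')
echo "$domuuid"     # d34db33f-aaaa-bbbb-cccc-dddd00001111
```

Note that both pipelines assume the xe output is not wrapped; with long
name-labels it may be safer to use xe's --minimal flag instead.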

 

 

Lucas Hughes

Cloud Engineer

Ecommerce Inc

 
