Hello guys,

I'm trying CloudStack 4.2.0, installed from the CloudStack RPM repositories
onto a CentOS 6.3 management server.

I've created a zone, a pod, and a cluster with a XenServer 6.2 host.

I created my networks and deployed the system VM template (the latest
version, not the one referenced in the docs).
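
For reference, I seeded the template with the standard installer script that
ships with the RPMs, roughly along these lines (the URL is a placeholder here,
since I used the latest template rather than the documented one):

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /export/secondary \
  -u <url-of-the-systemvm-template> \
  -h xenserver -F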

I have an issue with the SSVM.

My dashboard says I have 275 MB of secondary storage available, which is
the size of the SSVM's root filesystem.
When I SSH into my SSVM (ssh -i /root/.ssh/id_rsa.cloud -p 3922
root@169.254.0.82)

I see the following with df -h:

root@s-1-VM:~# df -h
Filesystem                                              Size  Used Avail Use% Mounted on
rootfs                                                  276M  118M  145M  45% /
udev                                                     10M     0   10M   0% /dev
tmpfs                                                    25M  152K   25M   1% /run
/dev/disk/by-uuid/3bbaf5c6-5317-468b-9742-0e68c65ad565  276M  118M  145M  45% /
tmpfs                                                   5.0M     0  5.0M   0% /run/lock
tmpfs                                                    79M     0   79M   0% /run/shm
/dev/xvda1                                               30M   18M   11M  63% /boot
/dev/xvda6                                               53M  4.9M   45M  10% /home
/dev/xvda8                                              368M   11M  339M   3% /opt
/dev/xvda10                                              48M  4.9M   41M  11% /tmp
/dev/xvda7                                              610M  502M   77M  87% /usr
/dev/xvda9                                              415M  107M  287M  27% /var
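
For comparison, my understanding is that a healthy SSVM should show one more
entry here: the secondary storage export, mounted under a path that CloudStack
generates itself. Something like this (illustrative only; the sizes and the
subdirectory ID would depend on my setup):

182.20.0.57:/export/secondary  <size> <used> <avail> <use%> /mnt/SecStorage/<generated-id>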


That's not okay.

In fact, the SSVM mounts its own / partition and reports that to the
management server.
However, I configured an NFS server as the secondary storage server, with IP
182.20.0.57 and mount point /export/secondary.
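
As a sanity check, the export itself can be tested from the management server
with standard nfs-utils commands (the /mnt/test mount point below is just an
example):

showmount -e 182.20.0.57
mkdir -p /mnt/test
mount -t nfs 182.20.0.57:/export/secondary /mnt/test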

Why don't I see my NFS export mounted on the SSVM?

So because of this bug I can't upload my VM templates or populate my
template database.
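
For completeness, I believe the SSVM ships a built-in health check that tests
the NFS mount among other things, and the agent log should show any mount
errors (assuming the standard 4.x layout on the system VM):

root@s-1-VM:~# /usr/local/cloud/systemvm/ssvm-check.sh
root@s-1-VM:~# tail /var/log/cloud/cloud.out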


Thanks a lot for your help.

Regards.

Benoit Lair.
