That's great, Benoit. You could also have found this out by running the SSVM health check script. Rule of thumb: for SSVM/secondary storage related issues, the first troubleshooting step is to run the SSVM health check script.
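For reference, a rough outline of running it (the key path, port, and script location below are the usual defaults on 4.x system VMs; adjust to your setup): SSH into the SSVM over its link-local address from the hypervisor host, then run the bundled check script.

    # on the hypervisor host, using the system VM SSH key (default location assumed)
    ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>

    # inside the SSVM
    /usr/local/cloud/systemvm/ssvm-check.sh

It walks through DNS resolution, connectivity to the management server, and whether the secondary storage NFS export is reachable and mounted.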
On 04/10/13 1:37 AM, "benoit lair" <kurushi4...@gmail.com> wrote:

>Hello,
>
>Issue resolved for me.
>
>In my case, the dashboard told me I had 0 KB of storage on my secondary
>storage.
>
>The cause was not CS 4.2 itself; it was entirely my fault, due to a
>misconfiguration:
>
>My SSVM deployed correctly, but it could not communicate with my NFS
>server.
>
>I had misconfigured the storage traffic label for XenServer, so the
>storage traffic was routed to the wrong NIC on the hypervisor. The
>storage VLAN was therefore unreachable and the SSVM could not ping the
>NFS server.
>
>If the SSVM can't ping the NFS server, it can't mount it; if it can't
>mount it, it doesn't see the real size of the NFS volume, hence 0 KB.
>
>So if the dashboard shows secondary storage at 0 KB, there is probably a
>network issue between the SSVM and the NFS server (see the quick manual
>check sketched at the end of this mail).
>
>
>Regards, Benoit.
>
>
>2013/10/4 Nitin Mehta <nitin.me...@citrix.com>
>
>> This generally happens because the management server is not able to
>> communicate with the SSVM agent.
>> Could you run the ssvm health check script to see if the connection
>> between the SSVM and CS is healthy?
>>
>> On 03/10/13 3:39 AM, "benoit lair" <kurushi4...@gmail.com> wrote:
>>
>> >Hello,
>> >
>> >I have the same issue with CS 4.2 installed from the CloudStack RPM
>> >repositories (the stable ones), and I found no way to get an SSVM
>> >operational.
>> >
>> >Thanks.
>> >
>> >
>> >2013/9/28 Netmaster <webmas...@colobridge.net>
>> >
>> >> Hello,
>> >> we have several installations of CloudStack 4.2.
>> >> In the dashboard the capacity of secondary storage equals 0 KB.
>> >> We use standard NFS.
>> >> The API returns the same error: capacity of resource type 6 is 0.
>> >> The secondary storage VM (system VM) shows the normal capacity of
>> >> 10 TB, as it should.
>> >> Is this a bug or a known issue?
>> >> Thanks
>>
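Following up on Benoit's ping/mount point, a quick manual check from inside the SSVM (the NFS server address and export path are placeholders for your own values):

    # inside the SSVM
    ping -c 3 <nfs-server-ip>
    mkdir -p /tmp/secstore-test
    mount -t nfs <nfs-server-ip>:/export/secondary /tmp/secstore-test
    df -h /tmp/secstore-test    # should show the real export size, not 0 KB
    umount /tmp/secstore-test

If the ping or the mount fails, check the storage traffic labels and the physical network on the hypervisor, as Benoit describes.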