hi guys,

finally this issue got cleared :)

it was a problem with my xen server, which had a stale NFS file handle.

found that copy_vhd_from_secondarystorage.sh was an old file left over from
4.2; because of it the size was coming back null, so i had to replace it.

i also had to modify copy_vhd_from_secondarystorage.sh so that the failure
actually shows up in SMlog.
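
for reference, this is roughly the kind of tracing i added (a sketch, not
the exact diff -- the variable names below match my copy of the script,
yours may differ). appending straight to /var/log/SMlog is crude, but it
was enough to surface the failure:

    # around the size query in copy_vhd_from_secondarystorage.sh
    size=$($VHDUTIL query -v -n $vhdfile)
    echo "$(date) copy_vhd: vhdfile=$vhdfile size=${size:-NULL}" >> /var/log/SMlog
    if [ -z "$size" ]; then
        echo "$(date) copy_vhd: empty size for $vhdfile" >> /var/log/SMlog
    fi

with that in place, SMlog showed the real error: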

Apr 29 16:31:06 XenMaddy SM: [5729] ['bash',
'/opt/cloud/bin/copy_vhd_from_secondarystorage.sh',
'10.10.171.121:/kimisecondary/template/tmpl/1/1/',
'82f01cc2-3b35-d523-7c15-027ea933fada',
'cloud-44fc42e8-15e7-4634-8bfd-bd3a4ad7c366']
Apr 29 16:31:06 XenMaddy SM: [5756] ['uuidgen', '-r']
Apr 29 16:31:06 XenMaddy SM: [5756]   pread SUCCESS
Apr 29 16:31:07 XenMaddy SM: [5756] lock: acquired
/var/lock/sm/82f01cc2-3b35-d523-7c15-027ea933fada/sr
Apr 29 16:31:07 XenMaddy SM: [5756] Raising exception [47, The SR is not
available]
Apr 29 16:31:07 XenMaddy SM: [5756] lock: released
/var/lock/sm/82f01cc2-3b35-d523-7c15-027ea933fada/sr
Apr 29 16:31:07 XenMaddy SM: [5756] ***** generic exception: vdi_create:
EXCEPTION SR.SROSError, The SR is not available
Apr 29 16:31:07 XenMaddy SM: [5756]   File
"/opt/xensource/sm/SRCommand.py", line 106, in run
Apr 29 16:31:07 XenMaddy SM: [5756]     return self._run_locked(sr)
Apr 29 16:31:07 XenMaddy SM: [5756]   File
"/opt/xensource/sm/SRCommand.py", line 147, in _run_locked
Apr 29 16:31:07 XenMaddy SM: [5756]     target = sr.vdi(self.vdi_uuid)
Apr 29 16:31:07 XenMaddy SM: [5756]   File "/opt/xensource/sm/NFSSR", line
223, in vdi
Apr 29 16:31:07 XenMaddy SM: [5756]     return NFSFileVDI(self, uuid)
Apr 29 16:31:07 XenMaddy SM: [5756]   File "/opt/xensource/sm/VDI.py", line
102, in __init__
Apr 29 16:31:07 XenMaddy SM: [5756]     self.load(uuid)
Apr 29 16:31:07 XenMaddy SM: [5756]   File "/opt/xensource/sm/FileSR.py",
line 401, in load
Apr 29 16:31:07 XenMaddy SM: [5756]     raise
xs_errors.XenError('SRUnavailable')
Apr 29 16:31:07 XenMaddy SM: [5756]   File
"/opt/xensource/sm/xs_errors.py", line 49, in __init__
Apr 29 16:31:07 XenMaddy SM: [5756]     raise SR.SROSError(errorcode,
errormessage)
Apr 29 16:31:07 XenMaddy SM: [5756]
Apr 29 16:31:07 XenMaddy SM: [5756] lock: closed
/var/lock/sm/82f01cc2-3b35-d523-7c15-027ea933fada/sr

[root@XenMaddy sr-mount]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             4.0G  3.4G  361M  91% /
none                  373M   20K  373M   1% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                       52M   52M     0 100% /var/xen/xc-install
//10.11.1.251/qa      366G  340G   27G  93%
/var/run/sr-mount/f17836d0-2a7b-ae2b-7f4c-c9ecb851eebe
df: `/var/run/sr-mount/82f01cc2-3b35-d523-7c15-027ea933fada': Stale NFS
file handle  -------> the culprit.
/dev/mapper/XSLocalEXT--0e2f9ac5--2075--8e1e--bd46--709c230d94ea-0e2f9ac5--2075--8e1e--bd46--709c230d94ea
                      268G  2.4G  252G   1%
/var/run/sr-mount/0e2f9ac5-2075-8e1e-bd46-709c230d94ea
10.10.171.121:/kimivolnfs
                       47G  2.5G   45G   6%
/var/run/sr-mount/7bd9654c-18f3-f8af-de23-ebfc68709e89  ------> my mounted
nfs primary storage.

bottom line: all i needed to do was restart my xenserver (host).
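
if a full reboot isn't an option, re-plugging the stale SR's PBD might be
enough to clear the handle (i haven't verified this myself, i just
rebooted):

    xe pbd-list sr-uuid=82f01cc2-3b35-d523-7c15-027ea933fada
    xe pbd-unplug uuid=<pbd-uuid-from-above>
    xe pbd-plug uuid=<pbd-uuid-from-above>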
cheers!
thanks.


On Tue, Apr 29, 2014 at 1:37 PM, Punith S <punit...@cloudbyte.com> wrote:

> hi guys,
>
> i'm still going through the same problem,
> WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-43:ctx-12763945)
> destoryVDIbyNameLabel failed due to there are 0 VDIs with name
> cloud-fc59061b-d652-4210-920e-0e2fd587ba49
> WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-43:ctx-12763945)
> can not create vdi in sr 58dd7e27-32c7-caab-e2b3-cdca45eefa7d
> WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-43:ctx-12763945)
> Catch Exception com.cloud.utils.exception.CloudRuntimeException for
> template +  due to com.cloud.utils.exception.CloudRuntimeException: can not
> create vdi in sr 58dd7e27-32c7-caab-e2b3-cdca45eefa7d
> com.cloud.utils.exception.CloudRuntimeException: can not create vdi in sr
> 58dd7e27-32c7-caab-e2b3-cdca45eefa7d
>  at
> com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.copy_vhd_from_secondarystorage(XenServerStorageProcessor.java:848)
>  at
> com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.copyTemplateToPrimaryStorage(XenServerStorageProcessor.java:918)
> at
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:75)
>  at
> com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:50)
> at
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:609)
>  at
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:59)
> at
> com.cloud.hypervisor.xen.resource.XenServer610Resource.executeRequest(XenServer610Resource.java:106)
>  at
> com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>  at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>  at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>  at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
> i checked the logs on the xenserver 6.2 host and also went through
> copy_vhd_from_secondarystorage.sh.
>
> in /var/log/SMlog it is not throwing any exceptions!!
>
> Apr 29 12:36:36 XenMaddy SM: [20698] ['bash',
> '/opt/cloud/bin/copy_vhd_from_secondarystorage.sh',
> '10.10.171.121:/kimisecondary/template/tmpl/1/1/',
> '58dd7e27-32c7-caab-e2b3-cdca45eefa7d',
> 'cloud-32ee04b4-e1d1-413a-8a84-8ed9aca765db']
> Apr 29 12:36:36 XenMaddy SM: [20698]   pread SUCCESS
> Apr 29 12:36:36 XenMaddy SM: [20738] ['bash',
> '/opt/cloud/bin/copy_vhd_from_secondarystorage.sh', 
> '10.10.171.121:/kimisecondary/template/tmpl/1/1/',
> '82f01cc2-3b35-d523-7c15-027ea933fada',
> 'cloud-46eaca8b-02f5-442f-9677-026f1c435134']
> Apr 29 12:36:36 XenMaddy SM: [20738]   pread SUCCESS
> Apr 29 12:36:37 XenMaddy SM: [20773] ['bash',
> '/opt/cloud/bin/kill_copy_process.sh', '']
> Apr 29 12:36:37 XenMaddy SM: [20773]   pread SUCCESS
> Apr 29 12:36:37 XenMaddy SM: [20783] ['bash',
> '/opt/cloud/bin/kill_copy_process.sh', '']
> Apr 29 12:36:37 XenMaddy SM: [20783]   pread SUCCESS
> Apr 29 12:36:42 XenMaddy SM: [20835] sr_scan {'sr_uuid':
> 'f17836d0-2a7b-ae2b-7f4c-c9ecb851eebe', 'subtask_of':
> 'DummyRef:|bab228e3-5b2d-8cb9-5990-b5ebaee1c53e|SR.scan', 'args': [],
> 'host_ref': 'OpaqueRef:47203046-d88d-dc82-10f8-bdf65eb7f942',
> 'session_ref': 'OpaqueRef:0bf3a887-af6a-89e3-b288-240c227fa25a',
> 'device_config': {'username': 'administrator', 'cifspassword_secret':
> '68ec1c7a-d3eb-0a25-8ad7-9ee6e3cd5da0', 'iso_path': "/ISO's", 'SRmaster':
> 'true', 'type': 'cifs', 'location': '//10.11.1.251/qa'}, 'command':
> 'sr_scan', 'sr_ref': 'OpaqueRef:12991abf-79a1-3743-2fd5-f6f8c8c7e2a6',
> 'local_cache_sr': '0e2f9ac5-2075-8e1e-bd46-709c230d94ea'}
>
>
> but when i checked the xapi messages log, i saw this:
>
> Apr 29 12:39:36 XenMaddy xapi: [ info|XenMaddy|156760 UNIX
> /var/xapi/xapi||cli] xe vdi-create password=null
> sr-uuid=58dd7e27-32c7-caab-e2b3-cdca45eefa7d virtual-size=MiB type=user
> name-label=cloud-00f82d04-faf5-4343-82b8-26b492dbc481 username=root
>
> you can see the size is going NULL (virtual-size=MiB with no number),
> when it should be 2500MiB.
>
> hence the VDI is not getting created.
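>
> for comparison, a correct call should look roughly like this (same uuids
> as above, with the size actually filled in):
>
>     xe vdi-create sr-uuid=58dd7e27-32c7-caab-e2b3-cdca45eefa7d \
>         virtual-size=2500MiB type=user \
>         name-label=cloud-00f82d04-faf5-4343-82b8-26b492dbc481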
>
> in the copy_vhd_from_secondarystorage.sh file, i can see this query:
> size=$($VHDUTIL query -v -n $vhdfile)
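>
> running the same query by hand against the template vhd on secondary
> storage should show whether vhd-util itself is the problem (the .vhd
> filename below is a placeholder for whatever is in the template dir):
>
>     /opt/xensource/bin/vhd-util query -v -n /path/to/template/tmpl/1/1/<template>.vhd
>
> it should print the virtual size in MB; an empty result or an error
> would explain the null size above.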
>
> i also referred to this blog by ian,
> http://dlafferty.blogspot.in/2013/08/using-cloudstacks-log-files-xenserver.html
> which covers a similar problem, and updated my vhd-util based on it. in
> his case the SMlog shows the exception, unlike in mine.
>
> i can't tell whether something is wrong in my xen or my secondary
> storage !!
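>
> to rule out the secondary storage side, i'm thinking of mounting it by
> hand from the host and checking the template directly, something like:
>
>     mkdir -p /mnt/sec
>     mount -t nfs 10.10.171.121:/kimisecondary /mnt/sec
>     ls -lh /mnt/sec/template/tmpl/1/1/
>     umount /mnt/sec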
> help!!
>
> thanks.
>
>
> On Mon, Apr 28, 2014 at 10:50 AM, Punith S <punit...@cloudbyte.com> wrote:
>
>> hi sanjay,
>>
>> it seems vhd-util is present on the xenserver:
>>
>> [root@XenMaddy bin]# which vhd-util
>> /opt/xensource/bin/vhd-util
>>
>> -rwxr-xr-x 1 root root  312K Jan 10 00:03 vhd-util
>>
>> i also copied the .vhd file from secondary storage manually
>> to /opt/xensource/bin/, but it's still not working out.
>> pinging my storage server from the hypervisor works fine.
>>
>> is this vhd-util obsolete?
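>>
>> one way to check might be comparing it against the patched vhd-util the
>> cloudstack docs ship for xenserver -- the management-server path below
>> is from my 4.x install and may differ:
>>
>>     # on the xenserver host
>>     md5sum /opt/xensource/bin/vhd-util
>>     # on the management server
>>     md5sum /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util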
>>
>> --
>> regards,
>>
>> punith s
>> cloudbyte.com
>>
>
>
>
> --
> regards,
>
> punith s
> cloudbyte.com
>



-- 
regards,

punith s
cloudbyte.com
