Hello Dheerendra,

Thanks for your reply. I see the following logs in cinder/cinder-volume.log:
2013-11-04 14:58:53 INFO [cinder.service] Starting 1 workers
2013-11-04 14:58:53 INFO [cinder.service] Started child 3536
2013-11-04 14:58:53 AUDIT [cinder.service] Starting cinder-volume node (version 2013.1.3)
2013-11-04 14:58:54 INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-123b29db-61c4-4c06-9d66-a91d325c9154
2013-11-04 14:58:54 ERROR [cinder.volume.iscsi] Failed to create iscsi target for volume id:volume-123b29db-61c4-4c06-9d66-a91d325c9154.
2013-11-04 14:58:54 ERROR [cinder.service] Unhandled exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 227, in _start_child
    self._child_process(wrap.server)
  File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 204, in _child_process
    launcher.run_server(server)
  File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 95, in run_server
    server.start()
  File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 355, in start
    self.manager.init_host()
  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 149, in init_host
    self.driver.ensure_export(ctxt, volume)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 400, in ensure_export
    old_name=old_name)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/iscsi.py", line 168, in create_iscsi_target
    raise exception.ISCSITargetCreateFailed(volume_id=vol_id)
ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-123b29db-61c4-4c06-9d66-a91d325c9154.
2013-11-04 14:58:54 INFO [cinder.service] Child 3536 exited with status 2
2013-11-04 14:58:54 INFO [cinder.service] _wait_child 1
2013-11-04 14:58:54 INFO [cinder.service] wait wrap.failed True

These logs appear when I try to use the already-created volume. I am not sure what the root cause is. I am using tgtd for iSCSI.

Another thing I want to highlight: I get the following errors while launching tgtd:

(null): iscsi_tcp_init_portal(227) unable to bind server socket, Address already in use
(null): iscsi_tcp_init_portal(227) unable to bind server socket, Address already in use
(null): iscsi_add_portal(275) failed to create/bind to portal (null):3260
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
(null): iser_ib_init(3263) Failed to initialize RDMA; load kernel modules?
(null): fcoe_init(214) (null)
(null): fcoe_create_interface(171) no interface specified.

I am kind of stuck here and don't know how to proceed.

Regarding keystone exiting when started with "service keystone start": I don't see any error message in the keystone log file, but it still exits, and it runs fine when I launch it from the shell.

For both problems I have listed the checks I am planning to run at the bottom of this mail, below the quoted thread.

Thanks,
-Rajeev

On Thu, Oct 31, 2013 at 10:17 PM, Dheerendra <dheerendra.madhusudh...@gmail.com> wrote:

> Hi Rajeev
>
> Can you look at /var/log/nova/nova-api.log and nova-scheduler.log? Is it a
> multi-node installation? You can also put the nova components into debug log
> mode by modifying the nova.conf file. The log is very verbose, but it helps.
>
> How did you run keystone? service keystone restart? What is the error log
> in /var/log/keystone/keystone.log?
>
> -Dheerendra
>
>
> On Thu, Oct 31, 2013 at 5:21 PM, Rajeev Bansal <connectraj...@gmail.com> wrote:
>
>> Hi All,
>>
>> I created an instance from OpenStack (Grizzly release on Ubuntu), but
>> its status shows an error. I am not sure how to resolve it. When I looked into
>> the keystone, cinder, and quantum logs, they don't show any errors.
>> Can someone suggest how I can debug this and solve the problem?
>>
>> Another problem I noticed is that the keystone service always fails to run,
>> but it runs fine when I launch it manually with
>> /usr/bin/python /usr/bin/keystone-all.
>>
>> Thanks,
>> -Rajeev
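P.S. These are the checks I mentioned above. For the tgtd errors, my guess is that something else is already listening on the default iSCSI portal port 3260, possibly a stale tgtd or ietd from the older iscsitarget package. That is only an assumption on my side, so the daemon/package names below are just what I plan to look for:

    # Which process currently holds port 3260?
    sudo netstat -tlnp | grep 3260

    # Is another iSCSI target daemon (or a stale tgtd) running or installed?
    ps aux | grep -E 'tgtd|ietd' | grep -v grep
    dpkg -l | grep -E 'tgt|iscsitarget'

    # Restart tgt and list the targets it currently exposes
    sudo service tgt restart
    sudo tgtadm --lld iscsi --op show --mode target

For keystone, my suspicion (again only a guess) is a permissions difference: "service keystone start" runs keystone-all as the keystone user, while launching it from the shell runs it as root, so a file-permission problem would only show up in the first case. Assuming Upstart manages the keystone job on this Ubuntu box, I plan to check it roughly like this:

    # Upstart captures the job's console output here
    sudo cat /var/log/upstart/keystone.log

    # Try to reproduce the failure as the keystone user
    sudo -u keystone /usr/bin/python /usr/bin/keystone-all

    # Check ownership of the files keystone needs to read and write
    ls -l /var/log/keystone /etc/keystone/keystone.conf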
_______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack