Re: [Crowbar] Uploading barclamps

2013-08-02 Thread Rob_Hirschfeld
Did you look in /opt/dell/crowbar_framework/log ?
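For reference, a quick way to poke at those logs from a shell on the admin node. This is only a sketch: the directory is taken from the answer above, but the exact file names depend on your install, so list the directory first and adjust.

```shell
# Inspect the Crowbar framework logs on the admin node.
# LOG_DIR is an assumption; point it at your actual install if different.
LOG_DIR="${LOG_DIR:-/opt/dell/crowbar_framework/log}"
ls -lt "$LOG_DIR" 2>/dev/null | head -n 5            # newest log files first
grep -ri "proposal template" "$LOG_DIR" 2>/dev/null || true   # hunt for the error
```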


From: crowbar-bounces On Behalf Of Friday, Stephen
Sent: Monday, July 29, 2013 10:21 AM
To: crowbar
Subject: [Crowbar] Uploading barclamps

I have uploaded a barclamp into Crowbar and it has appeared. When I click on
the barclamp and type in a name for the proposal, the system errors with:

Failed to load proposal template for barclamp


-  Where is the log file so I can troubleshoot this issue?

Regards
Steve Friday
Solutions Architect Director - Cloud Engineering CoE
Dell | Services Product Group
Mobile +1 (512) 815 0476
[cid:image001.jpg@01CE8F76.0FA88BA0]

___
Crowbar mailing list
Crowbar@dell.com
https://lists.us.dell.com/mailman/listinfo/crowbar
For more information: http://crowbar.github.com/

Re: [Crowbar] Using Dell Equallogic with Openstack Cinder

2013-08-02 Thread Jon Bayless
A follow-up on this. I was able to find a solution, though I don't think
it is really the 'right' solution.

In the cinder/volume/driver.py file I made a change as follows:

def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs):
    check_exit_code = kwargs.pop('check_exit_code', 0)
    (out, err) = self._execute('iscsiadm', '-m', 'node', '-T',
                               iscsi_properties['target_iqn'],
                               '-p', iscsi_properties['target_portal'],
                               *iscsi_command, run_as_root=True,
                               check_exit_code=check_exit_code)
    LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
              (iscsi_command, out, err))
    return (out, err)

Changed to:

def _run_iscsiadm(self, iscsi_properties, iscsi_command, **kwargs):
    check_exit_code = kwargs.pop('check_exit_code', 0)
    (out, err) = utils.execute('iscsiadm', '-m', 'node', '-T',
                               iscsi_properties['target_iqn'],
                               '-p', iscsi_properties['target_portal'],
                               *iscsi_command, run_as_root=True,
                               check_exit_code=check_exit_code)
    LOG.debug("iscsiadm %s: stdout=%s stderr=%s" %
              (iscsi_command, out, err))
    return (out, err)

This allows the Cinder service to use SSH to the EQL box when needed
(most of the time, for 'administrative' work) and to run iSCSI connection
commands locally on the controller or compute nodes when needed.
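The failure mode being worked around can be sketched in a few lines. This is a toy illustration, not the real Cinder classes: the base driver dispatches commands through `self._execute`, and the eqlx driver overrides `_execute` to send everything over SSH to the array CLI, so a host-local command like `iscsiadm` lands on the wrong machine. Calling the local executor directly (the `utils.execute` change above) sidesteps the override.

```python
import subprocess

def local_execute(*cmd, **kwargs):
    """Stand-in for cinder's utils.execute(): run the command on this host."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout, result.stderr

class BaseISCSIDriver:
    # The base driver dispatches shell commands through self._execute ...
    _execute = staticmethod(local_execute)

    def _run_iscsiadm(self, *cmd):
        return self._execute(*cmd)

class EqlxLikeDriver(BaseISCSIDriver):
    # ... but an eqlx-style driver redirects _execute over SSH to the
    # array CLI, so a host-local command ends up on the wrong machine.
    def _execute(self, *cmd, **kwargs):
        return ("sent over SSH: " + " ".join(cmd), "")

driver = EqlxLikeDriver()
print(driver._run_iscsiadm("echo", "hello"))   # routed to the "array"
print(local_execute("echo", "hello"))          # runs locally: the workaround
```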

I'm pretty sure this is not a great idea, since that code is subject to a
lot more fiddling and change in future releases, but the eqlx driver
file seems to have a lot more in it that would require change.

Hopefully that is helpful.

Thanks



On 8/1/2013 3:46 PM, Jon Bayless wrote:
> Hello. Hoping to find some help with an issue in my attempt to use an
> Equallogic array for a storage back end for Cinder with OpenStack Grizzly.
>
> Since the Crowbar Grizzly release is not complete yet, we are trying to
> use the Rackspace Private Cloud installer tool and so far it has worked
> very well. Using it with a Ceph storage back end worked great but we
> want to see how things work with Equallogic.
>
> We have a fresh Equallogic running 6.0.2 release firmware and have
> installed the eqlx.py into the appropriate driver location for the
> Cinder setup. The config is all in place and we can start the Cinder
> volume service just fine. Creating and deleting volumes works as well.
>
> The problem comes when we actually use the Equallogic: having the
> controller node or compute node log in to an iSCSI volume to put
> data on it (such as making a volume from an image). The log result is as
> follows:
>
> 2013-08-01 13:46:20 ERROR [cinder.volume.manager] Error: ['Traceback
> (most recent call last):
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py",
> line 250, in create_volume
>   image_location)
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py",
> line 189, in _create_volume
>   image_id)
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py",
> line 602, in _copy_image_to_volume
>   image_id)
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py",
> line 362, in copy_image_to_volume
>   iscsi_properties, volume_path = self._attach_volume(
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py",
> line 398, in _attach_volume
>   try:
> ', '  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py",
> line 304, in _run_iscsiadm
>   *iscsi_command, run_as_root=True,
> ', '  File
> "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/eqlx.py", line
> 221, in _execute
>   return self._run_ssh(command, timeout=FLAGS.eqlx_cli_timeout)
> ', '  File
> "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/eqlx.py", line
> 73, in __inner
>   res = gt.wait()
> ', '  File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py",
> line 168, in wait
>   return self._exit_event.wait()
> ', '  File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line
> 116, in wait
>   return hubs.get_hub().switch()
> ', '  File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line
> 187, in switch
>   return self.greenlet.switch()
> ', '  File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py",
> line 194, in main
>   result = function(*args, **kwargs)
> ', '  File
> "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/eqlx.py", line
> 253, in _run_ssh
>   raise exception.Error(msg, out)
> ', "Error: (u'Error executing EQL command: stty columns 255', ['iscsiadm
> -m node -T
> iqn.2001-05.com.equallogic:0-8a0906-b0976c809-216005851fab-volume-25fea6db-dd20-4f8a-9099-f1d96c122b3d
> -p 10.64.0.5:3260', ' ^', 'Error: Bad command',

Re: [Crowbar] Using Dell Equallogic with Openstack Cinder

2013-08-02 Thread Simon_Jakesch
Jon,

that's pretty impressive for a non-developer. Although, as you've stated, I'd be 
careful with making changes to any of the Cinder code base. I actually looked 
at the problem you had, and I think you might be well served to take a close 
look at exactly what command is being executed.
From looking at eqlx.py it appears that the command doesn't necessarily run 
inside the EQL command prompt. Instead the "Error executing EQL command" comes 
from the static part of the string. If you look at the whole _run_ssh method 
(line 228) you'll see that there is a bunch of debug output that should be 
generated somewhere. If you can't find it, I'd try to turn it on 
somehow, or simply dump it to a file temporarily.
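One generic way to surface that debug output without touching the driver code is plain Python logging. This is a hedged sketch, not the exact Cinder configuration (Cinder normally controls this through the debug/log_file options in cinder.conf), and the logger name is an assumption matching the driver's module path.

```python
import logging
import os
import tempfile

# Hypothetical logger name matching the eqlx driver module; adjust as needed.
LOG = logging.getLogger("cinder.volume.drivers.eqlx")
LOG.setLevel(logging.DEBUG)                        # let DEBUG records through

log_path = os.path.join(tempfile.gettempdir(), "eqlx_debug.log")
handler = logging.FileHandler(log_path)            # dump records to a file temporarily
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
LOG.addHandler(handler)

# Anything the driver logs at DEBUG now lands in the file:
LOG.debug("EQL command sent: %s", "iscsiadm -m node ...")
```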

Hope this helps, let me know if you need more pointers.

Simon


Re: [Crowbar] Using Dell Equallogic with Openstack Cinder

2013-08-02 Thread Jon Bayless

Thanks.

I'll see if I can find anything using that debug info.



[Crowbar] Crowbar plan for upgrading between OpenStack versions

2013-08-02 Thread Jonathan Brownell
I'm using Crowbar to manage an existing cloud with a handful of Nova
Essex compute instances, and I'm interested in pulling/updating to
create a new barclamp-nova with the latest Havana .deb packages to
light up a new Nova Havana cluster alongside what I've already got.
(My eventual goal is to migrate the existing workloads over to Nova
Havana and release my original compute nodes for other purposes.)

Is there any way for me to have two "nova" barclamps installed
side-by-side in Crowbar for this purpose? It seems quite non-trivial
to make a new barclamp with a new name (e.g. "nova-havana"), since the
barclamp name is woven throughout the cookbook and would create a
lot of conflicts unless I did a perfect job.
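To gauge how widely the name is woven through the tree before attempting a rename, a quick recursive grep lists the affected files. The path below is an assumption about where the barclamp checkout lives; adjust it to your layout.

```shell
# Illustrative only: count the files in a barclamp tree that mention its name.
# BARCLAMP_DIR is an assumption; point it at your actual barclamp checkout.
BARCLAMP_DIR="${BARCLAMP_DIR:-/opt/dell/barclamps/nova}"
grep -rlI "nova" "$BARCLAMP_DIR" 2>/dev/null | wc -l
```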

Is there a Crowbar strategy that would allow me to perform this kind
of step-up upgrade from one OpenStack Nova version to the next in the
same environment?

Thanks,

-JB



[Crowbar] Changing log_facility levels ?

2013-08-02 Thread Shane Gibson

Okay – new to the crowbar list, so please be delicate if I'm off topic, etc…  ☺ 
 

I poked around for a while (Google and the Crowbar documentation) trying to 
determine “the correct way” to change the log facility for our Crowbar-deployed 
Swift instance.  I found the various template files with the “log_facility” 
specification – and it’s set to what /etc/swift/*.conf is defined to on the 
deployed nodes (eg “LOG_LOCAL0” and “INFO” level).  

However (oh, by the way, completely new to Chef too …) – I’m not certain of 
the “correct” place to change log_facility to increase the logging 
level.  I see on the admin node in my /opt/dell directory structure:

  ./chef/cookbooks/swift/templates/default/container-server-conf.erb:log_facility = LOG_LOCAL0
  ./chef/cookbooks/swift/templates/default/account-server-conf.erb:log_facility = LOG_LOCAL0
  ./chef/cookbooks/swift/templates/default/proxy-server.conf.erb:log_facility = LOG_LOCAL0
  ./chef/cookbooks/swift/templates/default/object-server-conf.erb:log_facility = LOG_LOCAL0
  ./barclamps/swift/chef/cookbooks/swift/templates/default/container-server-conf.erb:log_facility = LOG_LOCAL0
  ./barclamps/swift/chef/cookbooks/swift/templates/default/account-server-conf.erb:log_facility = LOG_LOCAL0
  ./barclamps/swift/chef/cookbooks/swift/templates/default/proxy-server.conf.erb:log_facility = LOG_LOCAL0
  ./barclamps/swift/chef/cookbooks/swift/templates/default/object-server-conf.erb:log_facility = LOG_LOCAL0

I've been through most of the Dell-released documentation on Crowbar, but I'm 
not finding anything related to changing logging levels and "making them 
stick".  I'm assuming that in the Crowbar/Chef implementation, the 
/etc/swift/*.conf files get whacked by Chef coming along and "making things 
right" - like Puppet would?

Is there some other more elegant way to change logging levels and implement 
them?  

TIA! 
~~Shane



Re: [Crowbar] Changing log_facility levels ?

2013-08-02 Thread Judd Maltin
It seems like the log facility and levels are hard-coded.  You can experiment
with a little Chef: parameterize the templates, and then feed the
parameters from an attributes file.
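A minimal sketch of what that parameterization might look like. The attribute names below are hypothetical, not the actual Crowbar attribute layout, so treat this as a shape to copy rather than a drop-in change.

```ruby
# Hypothetical sketch: parameterize the Swift templates via Chef attributes.
# In the cookbook's attributes/default.rb (attribute names are illustrative):
default[:swift][:log_facility] = "LOG_LOCAL0"
default[:swift][:log_level]    = "INFO"

# Then in each *-server-conf.erb template, replace the hard-coded lines with:
#   log_facility = <%= node[:swift][:log_facility] %>
#   log_level = <%= node[:swift][:log_level] %>
```

With that in place, an override in a role or proposal changes the deployed /etc/swift/*.conf on the next Chef run instead of being whacked back to the hard-coded value.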

Judd Maltin
1-917-882-1270
I have suffering to learn compassion once and once again.