Hi,

  I am trying to backport the HDP scaling implementation to the 0.2 branch and 
have run into a number of differences.  At this point I am trying to figure out 
whether what I am observing is intended behavior or a symptom of a bug.

  For a case in which I am adding one instance to an existing node group, as 
well as a new node group with one instance, I am seeing the following arguments 
passed to the plugin's scale_cluster method:

- A cluster object that contains the following set of node groups:

[<savanna.db.models.NodeGroup[object at 10d8bdd90] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 208003), 
updated=datetime.datetime(2013, 8, 28, 21, 50, 5, 208007), 
id=u'd6fadb7a-367b-41ed-989c-af40af2d3e3d', name=u'master', flavor_id=u'3', 
image_id=None, node_processes=[u'NAMENODE', u'SECONDARY_NAMENODE', 
u'GANGLIA_SERVER', u'GANGLIA_MONITOR', u'AMBARI_SERVER', u'AMBARI_AGENT', 
u'JOBTRACKER', u'NAGIOS_SERVER'], node_configs={}, volumes_per_node=0, 
volumes_size=10, volume_mount_prefix=u'/volumes/disk', count=1, 
cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'15344a5c-5e83-496a-9648-d7b58f40ad1f'}>, 
<savanna.db.models.NodeGroup[object at 10d8bd950] 
{created=datetime.datetime(2013, 8, 28, 21, 50, 5, 210962), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 1, 728402), 
id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', name=u'slave', flavor_id=u'3', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=2, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'5dd6aa5a-496c-4dda-b94c-3b3752eb0efb'}>, 
<savanna.db.models.NodeGroup[object at 10d897f90] 
{created=datetime.datetime(2013, 8, 28, 22, 4, 59, 871379), 
updated=datetime.datetime(2013, 8, 28, 22, 4, 59, 871388), 
id=u'880e1b17-f4e4-456d-8421-31bf8ef1fb65', name=u'slave2', flavor_id=u'1', 
image_id=None, node_processes=[u'DATANODE', u'HDFS_CLIENT', u'GANGLIA_MONITOR', 
u'AMBARI_AGENT', u'TASKTRACKER', u'MAPREDUCE_CLIENT'], node_configs={}, 
volumes_per_node=0, volumes_size=10, volume_mount_prefix=u'/volumes/disk', 
count=1, cluster_id=u'e086d444-2a0f-4105-8ef2-51c56cdb70d2', 
node_group_template_id=u'd67da924-792b-4558-a5cb-cb97bba4107f'}>]
 
  So it appears that the cluster is already configured with the three node 
groups (two original, one new) and the associated counts (see the quick sanity 
check after the debugger output below).

- The list of instances.  However, whereas the master branch was passing me two 
instances (one representing the addition to the existing group, one representing 
the instance in the newly added node group), in the 0.2 branch I am only seeing 
one instance passed in (the instance being added to the existing node group):

(Pdb) p instances
[<savanna.db.models.Instance[object at 10d8bf050] 
{created=datetime.datetime(2013, 8, 28, 22, 5, 1, 725343), 
updated=datetime.datetime(2013, 8, 28, 22, 5, 47, 286665), extra=None, 
node_group_id=u'672e5597-2a8d-4470-8f5d-8cc43c7bb28e', 
instance_id=u'377694a2-a589-479b-860f-f1541d249624', 
instance_name=u'scale-slave-002', internal_ip=u'192.168.32.4', 
management_ip=u'172.18.3.5', volumes=[]}>]
(Pdb) p len(instances)
1
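
  For reference, the check I am running inside scale_cluster looks roughly like 
the sketch below.  It is throwaway debugging code on my end (not anything from 
the plugin API itself), and the only values it touches are the node_groups, 
name, and count attributes and the instances argument already visible in the 
dumps above:

    def log_scaling_args(cluster, instances):
        # Node group counts as the plugin sees them during scale_cluster;
        # per the dump above these already reflect the post-scaling topology.
        for ng in cluster.node_groups:
            print("node group %s: count=%d" % (ng.name, ng.count))
        # On master this matches the total number of instances being added;
        # in the 0.2 run described above it is 1.
        print("instances passed in: %d" % len(instances))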

  I am not certain why I am not getting the full list of instances being added 
to the cluster, as I do in the master branch.  Is this intended?  If so, how do 
I obtain the instance reference for the instance being added to the new node 
group?
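
  For what it is worth, the workaround I am considering looks roughly like the 
sketch below: remember which node group names existed before the scaling 
request and pull the instances for any group not in that set off the cluster 
object.  Both the known_group_names parameter and the NodeGroup.instances 
collection are assumptions on my part (the latter exists on master, but I have 
not confirmed it on 0.2):

    def instances_for_new_node_groups(cluster, known_group_names):
        # known_group_names: names of the node groups that existed before
        # scaling (how the plugin would track these on 0.2 is part of what
        # I am unsure about).
        new_instances = []
        for ng in cluster.node_groups:
            if ng.name not in known_group_names:
                # Assumes NodeGroup.instances is available on 0.2 as it is
                # on master.
                new_instances.extend(ng.instances)
        return new_instances

  In the run above that would be 
instances_for_new_node_groups(cluster, set(['master', 'slave'])), which should 
yield the single 'slave2' instance.  Combined with the one instance that is 
passed in, that covers everything added in this scenario, but it feels like I 
am reconstructing information the master branch hands me directly, hence the 
question.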

-- Jon