[Yahoo-eng-team] [Bug 1236306] [NEW] boto is not supported in py33 env

2013-10-07 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

boto is a big obstacle in the py33 env: it does not support Python 3.3.


Downloading/unpacking boto>=2.4.0,!=2.13.0 (from -r /opt/stack/nova/requirements.txt (line 6))
  Running setup.py egg_info for package boto
Traceback (most recent call last):
  File "<string>", line 16, in <module>
  File "/opt/stack/nova/.tox/py33/build/boto/setup.py", line 37, in <module>
    from boto import __version__
  File "./boto/__init__.py", line 27, in <module>
    from boto.pyami.config import Config, BotoConfigLocations
  File "./boto/pyami/config.py", line 185
    print s.getvalue()
          ^
SyntaxError: invalid syntax
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 16, in <module>
  File "/opt/stack/nova/.tox/py33/build/boto/setup.py", line 37, in <module>
    from boto import __version__
  File "./boto/__init__.py", line 27, in <module>
    from boto.pyami.config import Config, BotoConfigLocations
  File "./boto/pyami/config.py", line 185
    print s.getvalue()
          ^
SyntaxError: invalid syntax
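
For reference, config.py line 185 uses the Python 2 print statement, which is a SyntaxError on Python 3.3. A minimal sketch of the portable form (an illustration, not necessarily boto's eventual fix):

    from __future__ import print_function  # print as a function on Python 2.6+ and 3.x
    from io import StringIO

    s = StringIO()        # stand-in for the buffer used in boto's config.py
    s.write(u'example')
    # Portable replacement for the failing "print s.getvalue()":
    print(s.getvalue())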

** Affects: nova
 Importance: Undecided
 Assignee: Kui Shi (skuicloud)
 Status: New

-- 
boto is not supported in py33 env
https://bugs.launchpad.net/bugs/1236306
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1235841] Re: Keystone does not start

2013-10-07 Thread Dolph Mathews
Thanks for the follow up!

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1235841

Title:
  Keystone does not start

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  With the latest devstack setup, Keystone fails to start when running
  stack.sh or when running keystone-all directly. It gives the following
  AttributeError:

  stack.sh
  ...
  .
  2013-10-05 21:05:12 Starting Keystone
  2013-10-05 21:05:13 + mysql -uroot -popenstack1 -h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
  2013-10-05 21:05:13 + /opt/stack/keystone/bin/keystone-manage db_sync
  2013-10-05 21:05:13 Traceback (most recent call last):
  2013-10-05 21:05:13   File "/opt/stack/keystone/bin/keystone-manage", line 37, in <module>
  2013-10-05 21:05:13     from keystone import cli
  2013-10-05 21:05:13   File "/opt/stack/keystone/keystone/cli.py", line 27, in <module>
  2013-10-05 21:05:13     from keystone.common.sql import migration
  2013-10-05 21:05:13   File "/opt/stack/keystone/keystone/common/sql/__init__.py", line 18, in <module>
  2013-10-05 21:05:13     from keystone.common.sql.core import *
  2013-10-05 21:05:13   File "/opt/stack/keystone/keystone/common/sql/core.py", line 31, in <module>
  2013-10-05 21:05:13     from keystone.openstack.common.db.sqlalchemy import models
  2013-10-05 21:05:13   File "/opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/models.py", line 31, in <module>
  2013-10-05 21:05:13     from keystone.openstack.common.db.sqlalchemy import session as sa
  2013-10-05 21:05:13   File "/opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/session.py", line 279, in <module>
  2013-10-05 21:05:13     deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
  2013-10-05 21:05:13 AttributeError: 'module' object has no attribute 'DeprecatedOpt'
  
  2013-10-05 21:06:15 ./stack.sh:851:start_keystone
  2013-10-05 21:06:15 /home/openstack/devstack/lib/keystone:376:die
  2013-10-05 21:06:15 [ERROR] /home/openstack/devstack/lib/keystone:376 keystone did not start

  openstack@ubuntu:/opt/stack/keystone/bin$ ./keystone-all
  2013-10-05 21:20:20.984 38608 INFO keystone.common.environment [-] Environment configured as: eventlet
  2013-10-05 21:20:21.119 38608 CRITICAL keystone [-] Class Identity cannot be found (['Traceback (most recent call last):\n', '  File "/opt/stack/keystone/keystone/openstack/common/importutils.py", line 30, in import_class\n    __import__(mod_str)\n', '  File "/opt/stack/keystone/keystone/identity/backends/sql.py", line 18, in <module>\n    from keystone.common import sql\n', '  File "/opt/stack/keystone/keystone/common/sql/__init__.py", line 18, in <module>\n    from keystone.common.sql.core import *\n', '  File "/opt/stack/keystone/keystone/common/sql/core.py", line 31, in <module>\n    from keystone.openstack.common.db.sqlalchemy import models\n', '  File "/opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/models.py", line 31, in <module>\n    from keystone.openstack.common.db.sqlalchemy import session as sa\n', '  File "/opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/session.py", line 279, in <module>\n    deprecated_opts=[cfg.DeprecatedOpt(\'sql_connection\',\n', "AttributeError: 'module' object has no attribute 'DeprecatedOpt'\n"])

  Not sure if it's really a bug or something with my env., but it seemed
  a good idea to report it considering the release, while I work on
  figuring it out.
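
  The import fails because cfg.DeprecatedOpt only exists in newer
  oslo.config (the 1.2.0 series), so an older installed copy breaks the
  session module at import time. A minimal sketch of the kind of option
  definition that triggers this (illustrative only; the names and default
  are assumptions, not keystone's exact code):

      from oslo.config import cfg  # needs oslo.config >= 1.2.0 for DeprecatedOpt

      # Register an option under a new name while still honoring the old
      # 'sql_connection' name; on an older oslo.config, cfg.DeprecatedOpt
      # does not exist and this line raises the AttributeError seen above.
      connection_opt = cfg.StrOpt(
          'connection',
          default='sqlite://',
          deprecated_opts=[cfg.DeprecatedOpt('sql_connection',
                                             group='DEFAULT')])

  The usual remedy is updating oslo.config (e.g. re-running pip install
  -r requirements.txt) so the oslo.config>=1.2.0 pin is actually
  satisfied.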

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1235841/+subscriptions



[Yahoo-eng-team] [Bug 1228228] Re: ubuntu is not added to sudo group

2013-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.3~bzr884-0ubuntu1

---
cloud-init (0.7.3~bzr884-0ubuntu1) saucy; urgency=low

  * New upstream snapshot.
* allow disabling of growpart via file /etc/growroot-disabled
  (LP: #1234331)
* add default user to sudo group (LP: #1228228)
* fix disk creation on azure (LP: #1233698)
* DatasourceSmartOS: allow availability-zone to be fed from the
  datasource via 'region' (which allows 'mirrors' and other things
  to make use of it).
 -- Scott Moser  Fri, 04 Oct 2013 21:08:07 -0400

** Changed in: cloud-init (Ubuntu Saucy)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1228228

Title:
  ubuntu is not added to sudo group

Status in Init scripts for use on cloud images:
  Fix Committed
Status in ubuntu virtualization tools:
  New
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Saucy:
  Fix Released

Bug description:
  On Precise, the "ubuntu" user in cloud images is automatically added
  to the libvirtd group when libvirtd-bin is installed. This seems to
  have stopped in Saucy, and needs investigation for the juju use case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1228228/+subscriptions



[Yahoo-eng-team] [Bug 1234331] Re: cloud-init should respect /etc/growroot-disabled

2013-10-07 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.3~bzr884-0ubuntu1

---
cloud-init (0.7.3~bzr884-0ubuntu1) saucy; urgency=low

  * New upstream snapshot.
* allow disabling of growpart via file /etc/growroot-disabled
  (LP: #1234331)
* add default user to sudo group (LP: #1228228)
* fix disk creation on azure (LP: #1233698)
* DatasourceSmartOS: allow availability-zone to be fed from the
  datasource via 'region' (which allows 'mirrors' and other things
  to make use of it).
 -- Scott Moser  Fri, 04 Oct 2013 21:08:07 -0400

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1234331

Title:
  cloud-init should respect /etc/growroot-disabled

Status in Init scripts for use on cloud images:
  Fix Committed
Status in “cloud-init” package in Ubuntu:
  Fix Released

Bug description:
  cloud-init should respect /etc/growroot-disabled so a user has one
  file to touch in order to disable the expansion of the partition if
  they so wish.
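
  A minimal sketch of such a guard (hypothetical; cloud-init's actual
  implementation and function names may differ):

      import os

      GROWROOT_DISABLED = '/etc/growroot-disabled'  # marker file from this bug

      def growpart_enabled():
          # One file for the admin to touch: if the marker exists,
          # skip growing the root partition entirely.
          return not os.path.exists(GROWROOT_DISABLED)

  With this in place, "touch /etc/growroot-disabled" is all a user needs
  to do to opt out.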

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1234331/+subscriptions



[Yahoo-eng-team] [Bug 1053931] Re: Volume hangs in "creating" status even though scheduler raises "No valid host" exception

2013-10-07 Thread Dafna Ron
This bug is happening again in the Havana release.
We had a power outage, and I sent a create command while the storage was
unavailable; some other volume-related commands were also running.
2013-10-07 18:41:06.136 8288 WARNING cinder.scheduler.host_manager [req-462656bc-4bb2-478a-8fa4-90ac89e1c39e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:41:06.152 8288 ERROR cinder.volume.flows.create_volume [req-462656bc-4bb2-478a-8fa4-90ac89e1c39e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:44:31.280 8288 WARNING cinder.scheduler.host_manager [req-65c2f4e1-71da-4340-b9f0-afdd05ccdaa9 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:44:31.281 8288 ERROR cinder.volume.flows.create_volume [req-65c2f4e1-71da-4340-b9f0-afdd05ccdaa9 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:44:50.730 8288 WARNING cinder.scheduler.host_manager [req-1c132eb5-ca74-4ab5-91dc-73c25b305165 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:44:50.731 8288 ERROR cinder.volume.flows.create_volume [req-1c132eb5-ca74-4ab5-91dc-73c25b305165 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:47:01.577 8288 WARNING cinder.scheduler.host_manager [req-538ad552-0e19-4307-bea8-10e0a35a8a36 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:47:01.578 8288 ERROR cinder.volume.flows.create_volume [req-538ad552-0e19-4307-bea8-10e0a35a8a36 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:47:18.421 8288 WARNING cinder.scheduler.host_manager [req-3a788eb8-56f5-45f6-b4fd-ade01a05cf9d c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:47:18.422 8288 ERROR cinder.volume.flows.create_volume [req-3a788eb8-56f5-45f6-b4fd-ade01a05cf9d c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:48:27.732 8288 WARNING cinder.scheduler.host_manager [req-1ef1b47a-27b8-4667-9823-2f91dcc0f29e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:48:27.733 8288 ERROR cinder.volume.flows.create_volume [req-1ef1b47a-27b8-4667-9823-2f91dcc0f29e c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:48:51.125 8288 WARNING cinder.scheduler.host_manager [req-7a45f5ed-c6b2-4b9a-9e5f-c90d60b1bba8 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:48:51.126 8288 ERROR cinder.volume.flows.create_volume [req-7a45f5ed-c6b2-4b9a-9e5f-c90d60b1bba8 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 18:49:54.705 8288 WARNING cinder.scheduler.host_manager [req-5fadfa9b-6d82-4ea4-ac36-15de15aa236a c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 18:49:54.706 8288 ERROR cinder.volume.flows.create_volume [req-5fadfa9b-6d82-4ea4-ac36-15de15aa236a c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 19:31:10.716 8288 CRITICAL cinder [-] need more than 0 values to unpack
2013-10-07 19:37:27.334 2542 WARNING cinder.scheduler.host_manager [req-53603c25-424e-4c05-9eee-de5ae15fb300 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] volume service is down or disabled. (host: cougar06.scl.lab.tlv.redhat.com)
2013-10-07 19:37:27.350 2542 ERROR cinder.volume.flows.create_volume [req-53603c25-424e-4c05-9eee-de5ae15fb300 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Failed to schedule_create_volume: No valid host was found.
2013-10-07 20:02:18.403 2542 CRITICAL cinder [-] need more than 0 values to unpack
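
For reference, "need more than 0 values to unpack" is the ValueError
Python 2 raises when tuple-unpacking an empty sequence (illustration
only; the actual variables in cinder's code path will differ):

    # Python 2: both targets come from an empty sequence, producing
    # exactly the CRITICAL message above.
    host, weight = []  # ValueError: need more than 0 values to unpack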

[root@cougar06 ~(keystone_admin)]# cinder list 
+--+---+--+--+-+--+-+
|

[Yahoo-eng-team] [Bug 1236037] Re: Nova uselessly restricts the version of six

2013-10-07 Thread Russell Bryant
six is unbounded in the latest requirements.txt


commit 3f908c9c6073a7e326e0ec96fedc410ad8191f83
Author: OpenStack Jenkins 
Date:   Tue Oct 1 16:13:56 2013 +

Updated from global requirements

Change-Id: Ie9e92b06eb28fa238c7cb923a65345c415dd3642

diff --git a/requirements.txt b/requirements.txt
index 785c64b..0544aaf 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -25,7 +25,7 @@ python-cinderclient>=1.0.5
 python-neutronclient>=2.3.0,<3
 python-glanceclient>=0.9.0
 python-keystoneclient>=0.3.2
-six<1.4.0
+six
 stevedore>=0.10
 websockify>=0.5.1,<0.6
 oslo.config>=1.2.0


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236037

Title:
  Nova uselessly restricts the version of six

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In requirements.txt, I can see:
  six<1.4.0

  However, after removing this restriction, there is no unit test that
  fails. And Debian Sid has python-six 1.4.1-1.

  Please remove this useless restriction; otherwise I have to add a
  patch in debian/patches to fix it.
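
  For reference, the pip requirement-specifier forms involved (a sketch
  of the syntax, matching the requirements.txt excerpts in this digest):

      six                   # unbounded: any available version
      six<1.4.0             # the old cap this bug asked to drop
      boto>=2.4.0,!=2.13.0  # combined form: minimum version plus an exclusion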

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236037/+subscriptions



[Yahoo-eng-team] [Bug 1191159] [NEW] Neutron is requesting too many tokens

2013-10-07 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

ACTUAL BEHAVIOR: I have a 14-compute-node environment with a separate
compute controller. With the system idle, I see the quantum modules (OVS
plugin used here) requesting too many tokens (approx. 2/sec). In a day,
this piles up to 150,000 tokens. This chattiness slows down keystone
authentication/authorization across the entire OpenStack deployment.
Here is the dump of the count for just one day's run:

select user_id, count(*) from token group by user_id
"2efad4b253f64b4dae65a28f45438d93";10341   <-- admin user
"a1fa17a31a4246518ab3acbf04ff448a";114769  <-- quantum user

EXPECTED BEHAVIOR: Though token expiration is set to 24 hours, quantum
keeps requesting new tokens. Either a missing configuration option or a
code issue must be causing this.
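
A minimal sketch of the kind of token reuse that would avoid this
(illustrative only; names and intervals are assumptions, not the actual
fix that landed — the credentials match the api-paste.ini below):

    import time
    from keystoneclient.v2_0 import client as ks_client

    _cache = {'token': None, 'fetched_at': 0.0}
    TOKEN_REUSE_SECONDS = 3600  # well under the 24 h expiry noted above

    def get_admin_token():
        # Reuse a recently fetched token instead of requesting a new
        # one on nearly every call (~2/sec in the report above).
        if _cache['token'] and time.time() - _cache['fetched_at'] < TOKEN_REUSE_SECONDS:
            return _cache['token']
        ks = ks_client.Client(username='quantum',
                              password='service_pass',
                              tenant_name='service',
                              auth_url='http://192.168.123.12:35357/v2.0')
        _cache['token'] = ks.auth_token
        _cache['fetched_at'] = time.time()
        return _cache['token']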

Here is how api-paste.ini looks under /etc/quantum:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.123.12
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass


HOW-TO-REPRODUCE:
Install openstack using 
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
Keep the system idle. Note that the number of tokens being issued is
proportional to the number of compute nodes you have.
--attached conf files
--keystone and quantum logs (from compute, controller+network node)

** Affects: nova
 Importance: Medium
 Assignee: Drew Thorstensen (thorst)
 Status: Confirmed


** Tags: havana-rc-potential
-- 
Neutron is requesting too many tokens
https://bugs.launchpad.net/bugs/1191159
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1185609] Re: Swift Store: Exceptions from uploading chunks are raised incorrectly

2013-10-07 Thread Iccha Sethi
I think this bug was addressed here: https://review.openstack.org/#/c/47534/
Please correct me if I'm wrong.

** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1185609

Title:
  Swift Store: Exceptions from uploading chunks are raised incorrectly

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  in glance/store/swift.py

  Exceptions raised from "put_object" may get lost if there is an
  exception while cleaning up stale chunks: the exc_info is lost in
  _delete_stale_chunks if another exception is caught there. So the bare
  "raise" at line 382 is not good enough; we need to ensure we re-raise
  the original exception.

  
  315 def _delete_stale_chunks(self, connection, container, chunk_list):
  316 for chunk in chunk_list:
  317 LOG.debug(_("Deleting chunk %s" % chunk))
  318 try:
  319 connection.delete_object(container, chunk)
  320 except Exception:
  321 msg = _("Failed to delete orphaned chunk %s/%s")
  322 LOG.exception(msg, container, chunk)

  ...

  372 try:
  373 chunk_etag = connection.put_object(
  374 location.container, chunk_name, reader,
  375 content_length=content_length)
  376 written_chunks.append(chunk_name)
  377 except Exception:
  378 # Delete orphaned segments from swift backend
  379 self._delete_stale_chunks(connection,
  380   location.container,
  381   written_chunks)
  382 raise
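
  A minimal sketch of one way to rewrite lines 372-382 so the original
  exception survives the cleanup call (Python 2 three-argument raise; an
  illustration, not necessarily the patch that landed in review 47534):

      import sys  # at module top

      try:
          chunk_etag = connection.put_object(
              location.container, chunk_name, reader,
              content_length=content_length)
          written_chunks.append(chunk_name)
      except Exception:
          # Capture exc_info *before* cleanup: the try/except inside
          # _delete_stale_chunks would otherwise clobber it, making the
          # bare "raise" at line 382 re-raise the wrong exception.
          exc_type, exc_value, exc_tb = sys.exc_info()
          try:
              self._delete_stale_chunks(connection,
                                        location.container,
                                        written_chunks)
          finally:
              # Re-raise the original put_object failure, traceback intact.
              raise exc_type, exc_value, exc_tb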

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1185609/+subscriptions



[Yahoo-eng-team] [Bug 1202042] Re: VMware: Unable to spawn an instance when using Quantum and VMware drivers

2013-10-07 Thread Tracy Jones
** Also affects: openstack-vmwareapi-team
   Importance: Undecided
   Status: New

** Changed in: openstack-vmwareapi-team
   Importance: Undecided => High

** Changed in: openstack-vmwareapi-team
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1202042

Title:
  VMware: Unable to spawn an instance when using Quantum and VMware
  drivers

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  Fix Released

Bug description:
  2013-07-11 11:06:24.375 ERROR nova.compute.manager [req-c5980352-ed1a-4a66-806e-df24d9f7d088 demo demo] [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] Instance failed to spawn
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] Traceback (most recent call last):
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/compute/manager.py", line 1245, in _spawn
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     block_device_info)
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 182, in spawn
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     block_device_info)
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 190, in spawn
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     'network_ref': network_ref,
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 181, in _get_vif_infos
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     network_ref = vmwarevif.ensure_vlan_bridge(
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/virt/vmwareapi/vif.py", line 84, in ensure_vlan_bridge
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     cluster)
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]   File "/opt/stack/nova/nova/virt/vmwareapi/network_util.py", line 174, in create_port_group
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]     raise exception.NovaException(exc)
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] NovaException: Server raised fault: '
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] Required property name is missing from data object of type HostPortGroupSpec
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] while parsing serialized DataObject of type vim.host.PortGroup.Specification
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] at line 1, column 338
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] while parsing call information for method AddPortGroup
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] at line 1, column 256
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa]
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] while parsing SOAP body
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [instance: 6c673cab-3171-4178-95a5-dcbbd95d6caa] at line 1, column 246
  2013-07-11 11:06:24.375 TRACE nova.compute.manager [i
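
  The fault says the HostPortGroupSpec sent to vCenter's AddPortGroup
  call lacked its required 'name' property. A hedged sketch of the shape
  of that call (illustrative only; client_factory, pg_name, vlan_id, and
  vswitch_name are hypothetical names, not nova's exact code):

      # Building the spec nova sends to AddPortGroup via the suds client
      # factory; if pg_name ends up None/empty (e.g. the network name from
      # Quantum was never resolved), vCenter raises exactly this fault.
      spec = client_factory.create('ns0:HostPortGroupSpec')
      spec.name = pg_name            # required property -- missing in this bug
      spec.vlanId = vlan_id
      spec.vswitchName = vswitch_name
      spec.policy = client_factory.create('ns0:HostNetworkPolicy')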

[Yahoo-eng-team] [Bug 1236621] [NEW] Disable H803: git commit title should not end with period

2013-10-07 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I think it's a total waste of reviewing time and gating resources for
patches to fail because of a period at the end of a commit title. This
makes no difference at all to readability. I propose we disable the
check.

See here for an example:
https://review.openstack.org/#/c/50167
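
For reference, a sketch of how a project could disable the check in its
tox.ini (assuming the usual flake8/hacking setup; the existing ignore
list varies per project):

    [flake8]
    # H803: git commit title should not end with a period -- proposed for removal
    ignore = H803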

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Disable H803: git commit title should not end with period
https://bugs.launchpad.net/bugs/1236621
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp