[openstack-dev] Update MAINTAINERS.rst for devstack

2014-08-13 Thread Ian Wienand

Hi,

By its nature, devstack tends to attract new contributors proposing a
wide range of changes to just about every project under the OpenStack
banner.

These contributors often don't yet have the context to find the right
people to help with some +1's.  It's also nice for people approving
the change to see someone with a track record looking at it.

Rather than a sledge-hammer, like adding all project-core people to a
review based on a particular area, devstack would like to have a list
of "maintainers" -- self-nominated people with interest and expertise
in a particular sub-area who volunteer to have themselves seeded into
reviews of interest.

All this really means is that you don't mind being added as a reviewer
to devstack changes involving that area and you are willing to be a
point-of-contact should the need arise.  It is not intended to mean
you *have* to review things, or you are the only person who can
approve changes, or you are on the hook if anything goes wrong.

At first, it would be great if people could cherry-pick [1] and add
themselves.  Feel free to add areas (sections in the rst) and
names as you see fit, or just email me and I'll batch add.  As time
goes on, we'll curate the list as necessary.

Thanks,

-i

[1] https://review.openstack.org/#/c/114117/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Centos 7 images for HP Helion

2014-08-26 Thread Ian Wienand

Hi,

I would like to get CentOS 7 based testing working, but I am stuck
without images being provided in the HP Cloud.  Rackspace has a
(slightly quirky, but workable) image and we have an experimental job
that runs fine.

I am aware that building our own custom images with disk-image-builder
is the way forward for this.  I will certainly be working on this once
the changes have made their way into nodepool and have been deployed.
However, this is a very large change to the way upstream infra works
and I think the existing change is enough to digest without adding new
platforms from day 1.

Also, we generally find a few quirks in the differences between the
rax and hp platforms (the f20 work certainly did), so getting them
sorted before we
add in the complexity of d-i-b is a win.

Can someone from HP *please* contact me about this?  If there are
issues with the CentOS side, I will be able to find people to help.

Thanks,

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Centos 7 images for HP Helion

2014-08-27 Thread Ian Wienand

On 08/27/2014 08:31 PM, Karanbir Singh wrote:

If you let me know what the issues with the Rackspace images are, I can
try to reach out and help them work through those.


The main issue was a version of cloud-init installed via pip that
conflicted with the packaged python.  I put a work-around in devstack,
and I believe they are fixing it now that cloud-init packages are
available.



I'm working with the HP Cloud folks and hope to have images up soon,
I'll ping back once they are live. Should be fairly soon,


Thank you.  I will work to get them up in nodepool as soon as they are
available, to bridge the gap until we are building our own images.


-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-08-31 Thread Ian Wienand

On 08/29/2014 10:42 PM, Sean Dague wrote:

I'm actually kind of convinced now that none of these approaches are
what we need, and that we should instead have a .bashateignore file in
the root dir for the project instead, which would be regex that would
match files or directories to throw out of the walk.


Dean's idea of reading .gitignore might be good.

I had a quick poke at git's dir.c:match_pathspec_item() [1] and came
up with something similar [2], which roughly follows that and then only
matches files that have a shell-script mimetype; that feels like a
sane default implementation.

IMO devstack should just generate its own file list to pass in for
checking, and bashate shouldn't have special guessing code for it.

It all feels a bit like a solution looking for a problem.  Making
bashate work only on a passed-in list of files, and leaving generating
that list up to the test infrastructure, would probably be the best
KISS choice...
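
For what it's worth, a sketch of what that could look like on the
devstack side (the mimetype test mirrors the idea in [2]; the exact
find invocation and excludes here are made up for illustration):

---
# Build the list of shell files in the test harness, then hand it to
# bashate rather than having bashate guess.
FILES=$(find . -type f -not -path '*/.git/*' -print0 |
        xargs -0 file --mime-type |
        awk -F': *' '$2 == "text/x-shellscript" {print $1}')
bashate $FILES
---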

-i

[1] https://github.com/git/git/blob/master/dir.c#L216
[2] https://review.openstack.org/#/c/117425/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Ian Wienand

On 09/02/2014 10:13 PM, Sean Dague wrote:

One of the things that could make it better is to add file extensions to
all shell files in devstack. This would also solve the issue of gerrit
not syntax highlighting most of the files. If people are up for that,
I'll propose a rename patch to get us there. Then dumping the special
bashate discover bits is simple.


I feel like adding .sh to bash to-be-sourced-only (library) files is
probably a less common idiom.  It's just a feeling; I don't think it's
any sort of rule.

So my first preference is for bashate to just punt the whole thing and
only work on a list of files.  We can then discuss how best to match
things in devstack so we don't add files but miss checking them.

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] .bashateignore

2014-09-02 Thread Ian Wienand

On 09/03/2014 11:32 AM, Robert Collins wrote:

if-has-bash-hashbang-and-is-versioned-then-bashate-it?


That misses library files that aren't exec'd and have no #! line.

This might be an appropriate rule for a project's test infrastructure
to use when generating its file list, but IMO we don't need to start
building that logic into bashate.


-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-08 Thread Ian Wienand

On 09/09/2014 08:24 AM, Joe Gordon wrote:

1) Should we explicitly set the number of workers that services use in
devstack? Why have so many workers in a small all-in-one environment? What
is the right balance here?


There is a review out for that [1].

Devstack has a switch for everything, and this one seems more
susceptible to bit-rot than others.  It could be achieved with
local.conf values appropriate to each environment.  My preference is
something like that, but I'm also happy to be out-voted.
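
For what it's worth, the local.conf route would look something like the
following (a sketch only; the worker options shown are the usual
nova/cinder settings and may need checking against a given release):

---
[[post-config|$NOVA_CONF]]
[DEFAULT]
osapi_compute_workers = 2
[conductor]
workers = 2

[[post-config|$CINDER_CONF]]
[DEFAULT]
osapi_volume_workers = 2
---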


2) Should we be worried that some OpenStack services such as nova-api,
nova-conductor and cinder-api take up so much memory? Does their memory
usage keep growing over time, and does anyone have any numbers to answer this?
Why do these processes take up so much memory?


I'm not aware of anything explicitly tracking this.  It might be
interesting to have certain jobs query RSS sizes for processes and
report them into graphite [2].  We could even have a devstack flag :)
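
A rough sketch of the sort of thing I mean (the graphite host and
metric names are made up; graphite's plaintext protocol on port 2003 is
assumed):

---
# Sum RSS per process name and push it to graphite.
NOW=$(date +%s)
ps -eo rss=,comm= |
  awk -v now="$NOW" '/nova-api|nova-conductor|cinder-api/ {sum[$2] += $1}
       END {for (p in sum)
              printf "devstack.memory.%s.rss_kb %d %d\n", p, sum[p], now}' |
  nc graphite.example.org 2003
---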

-i

[1] https://review.openstack.org/#/c/117517/
[2] http://graphite.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PostgreSQL jobs slow in the gate

2014-09-17 Thread Ian Wienand

On 09/18/2014 09:49 AM, Clark Boylan wrote:

Recent sampling of test run times shows that our tempest jobs run
against clouds using PostgreSQL are significantly slower than jobs run
against clouds using MySQL.


FYI, there is a possibly relevant review out for max_connections limits
[1], although it seems to have some issues with shmem usage.

-i

[1] https://review.openstack.org/#/c/121952/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-25 Thread Ian Wienand

Hi,

My change [1] to enable a consistent tracing mechanism for the many
scripts diskimage-builder runs during its build seems to have hit a
stalemate.

I hope we can agree that the current situation is not good.  When
trying to develop with diskimage-builder, I find myself constantly
going and fiddling with "set -x" in various scripts, requiring me to
re-run things needlessly as I try to trace what's happening.
Conversely, some scripts set -x all the time and give output when you
don't want it.

Now that nodepool is using d-i-b more, it would be even nicer to have
consistency in the tracing so relevant info is captured in the image
build logs.

The crux of the issue seems to be disagreement between reviewers over
whether to have a single "trace everything" flag or the more
fine-grained approach currently implemented (after being asked for in
reviews).
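
For context, the per-script header in question amounts to roughly the
following (a sketch of the idea, not a verbatim copy of the change; the
variable name is the DIB_DEBUG_TRACE discussed later in this thread):

---
#!/bin/bash
# Trace only when the requested debug level asks for it, rather than
# every script hard-coding "set -x".
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
    set -x
fi
set -eu
set -o pipefail
---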

I must be honest, I feel a bit silly calling out essentially a
four-line patch here.  But it's been sitting around for months and
I've rebased it countless times.  Please diskimage-builder +2ers, can
we please decide on *something* and I'll implement it.

-i

[1] https://review.openstack.org/#/c/119023/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-01 Thread Ian Wienand

On 12/02/2014 04:25 AM, Ben Nemec wrote:

1) A specific reason SHELLOPTS can't be used.


IMO leave this alone, as it changes global behaviour at a low level and
that is a vector for unintended side-effects.  Some thoughts:

- We don't want tracing output of various well-known scripts that
  might run from /bin.

- SHELLOPTS is read-only, so you have to "set -x; export SHELLOPTS",
  which means that to turn it on for children you have to start tracing
  yourself.  It's unintuitive and a bit weird (see the sketch after
  this list).

- Following from that, "DIB_DEBUG_TRACE=n disk-image-create" is the
  same as "disk-image-create -x" which is consistent.  This can be
  useful for CI wrappers

- I'm pretty sure SHELLOPTS doesn't survive sudo, which might add
  another layer of complication for users.

- A known env variable can be usefully overloaded to signal scripts
  not written in bash, rather than having them parse SHELLOPTS.
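
To illustrate the second point, the minimal SHELLOPTS dance looks like
this (the child script name is illustrative):

---
# You can only get xtrace into SHELLOPTS by tracing yourself first.
set -x                    # the parent now traces too, wanted or not
export SHELLOPTS          # children inherit xtrace via the environment
bash ./child-script.sh    # runs with tracing enabled
---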


I'm all for improving in this area, but before we make an intrusive
change with an ongoing cost that won't work with anything not
explicitly enabled for it, I want to make sure it's the right thing
to do.  As yet I'm not convinced.


For "ongoing cost" -- I've rebased this about 15 times and there just
isn't that much change in practice.  In reality everyone copy-pastes
another script to get started, so at least they'll copy-paste
something consistent.  That and dib-lint barfs if they don't.

This makes "disk-image-create -x" do something actually useful by
standardising the inconsistent existing defacto headers in all files.
How is this *worse* than the status quo?

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-02 Thread Ian Wienand

On 12/02/2014 03:46 PM, Clint Byrum wrote:

1) Conform all o-r-c scripts to the logging standards we have in
OpenStack, or write new standards for diskimage-builder and conform
them to those standards. Abolish non-conditional xtrace in any script
conforming to the standards.


Honestly, in the list of things that need doing in OpenStack, this must
be near the bottom.

The whole reason I wrote this is because "disk-image-create -x ..."
doesn't do what any reasonable person expects it to; i.e. trace all
the scripts it starts.

Having a way to trace execution of all d-i-b scripts is all that's
needed and gives sufficient detail to debug issues.

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-02 Thread Ian Wienand

On 12/03/2014 09:30 AM, Clint Byrum wrote:

I for one find the idea of printing every cp, cat, echo and ls command out
rather frustratingly verbose when scanning logs from a normal run.


I for one find this ongoing discussion over a flag whose own help says
"-x -- turn on tracing" not doing the blindingly obvious thing of
turning on tracing, and the seeming inability to reach a conclusion on
a posted review over 3 months, a troubling narrative for potential
consumers of diskimage-builder.

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-12-04 Thread Ian Wienand

On 12/04/2014 05:41 AM, Clint Byrum wrote:

What if the patch is reworked to leave the current trace-all-the-time
mode in place, and we iterate on each script to make tracing conditional
as we add proper logging?


I have run [1] over patchset 15 to keep whatever was originally using
-x tracing itself by default.  I did not do this originally because it
seems to me this list of files could be approximated with rand(), but
it should maintain the status quo.

-i

[1] https://gist.github.com/ianw/71bbda9e6acc74ccd0fd

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] localrc for mutli-node setup

2014-12-14 Thread Ian Wienand
On 12/13/2014 07:03 AM, Danny Choi (dannchoi) wrote:
> I would like to use devstack to deploy OpenStack on a multi-node setup,
> i.e. separate Controller, Network and Compute nodes

Did you see [1]?  Contributions to make that better are of course welcome.

-i

[1] http://docs.openstack.org/developer/devstack/guides/multinode-lab.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] UserWarning: Unknown distribution option: 'pbr'

2015-01-05 Thread Ian Wienand
On 11/27/2014 12:59 PM, Li Tianqing wrote:
> I wrote a module to extend OpenStack. When installing via python
> setup.py develop, it always complains:

> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown
> distribution option: 'pbr'
>warnings.warn(msg)
...
> Processing dependencies for UNKNOWN==0.0.0
> Finished processing dependencies for UNKNOWN==0.0.0

> I do not know why the egg is UNKNOWN, and why the pbr option is
> unknown?  I wrote the name in setup.cfg.

This is because pbr isn't installed.  You probably want to install the
python-pbr package.

I hit this problem today with tox and oslo.config.  tox creates an
sdist of the package on the local system, which, because pbr isn't
installed system-wide, creates this odd UNKNOWN-0.0.0.zip file.

---
$ /usr/bin/python setup.py sdist --formats=zip
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'pbr'
  warnings.warn(msg)
running sdist
running egg_info
writing UNKNOWN.egg-info/PKG-INFO
writing top-level names to UNKNOWN.egg-info/top_level.txt
...
creating UNKNOWN-0.0.0


This isn't a failure, and tox is logging all that to a file, so it's
not at all clear that this is what has happened.

Then tox tries to install this into the virtualenv *with* pbr, which
explodes in a fairly unhelpful manner:

---
$ tox -e pep8
GLOB sdist-make: /home/iwienand/programs/oslo.config/setup.py
pep8 inst-nodeps: 
/home/iwienand/programs/oslo.config/.tox/dist/UNKNOWN-0.0.0.zip
ERROR: invocation failed, logfile: 
/home/iwienand/programs/oslo.config/.tox/pep8/log/pep8-4.log
ERROR: actionid=pep8
msg=installpkg
Unpacking ./.tox/dist/UNKNOWN-0.0.0.zip
  Running setup.py (path:/tmp/pip-z9jGEr-build/setup.py) egg_info for package 
from file:///home/iwienand/programs/oslo.config/.tox/dist/UNKNOWN-0.0.0.zip
ERROR:root:Error parsing
Traceback (most recent call last):
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/core.py",
 line 104, in pbr
attrs = util.cfg_to_args(path)
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/util.py",
 line 238, in cfg_to_args
pbr.hooks.setup_hook(config)
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/__init__.py",
 line 27, in setup_hook
metadata_config.run()
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/base.py",
 line 29, in run
self.hook()
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/hooks/metadata.py",
 line 28, in hook
self.config['name'], self.config.get('version', None))
  File 
"/home/iwienand/programs/oslo.config/.tox/pep8/lib/python2.7/site-packages/pbr/packaging.py",
 line 554, in get_version
raise Exception("Versioning for this project requires either an sdist"
Exception: Versioning for this project requires either an sdist tarball, or 
access to an upstream git repository. Are you sure that git is installed?
---

I proposed [1] to oslo.config to basically avoid the sdist phase.
This seems to be what happens elsewhere.

I started writing a bug for the real issue, but it's not clear to me
where it belongs.  It seems like distutils should error for "unknown
distribution option".  But then setuptools seems be ignoring the
"setup_requires=['pbr']" line in the config.  But maybe tox should be
using pip to install rather than setup.py.
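
In the meantime, a crude guard before building the sdist at least makes
the failure mode obvious (a sketch; adjust for virtualenvs as needed):

---
# If pbr can't be imported, setuptools silently ignores the 'pbr'
# option and the sdist comes out as UNKNOWN-0.0.0.
python -c 'import pbr' || sudo pip install pbr
python setup.py sdist
---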

So if any setuptools/distribute/pip/pbr/tox people want to point me to
who should own the problem, happy to chase it up...

-i

[1] https://review.openstack.org/#/c/145119/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Devstack plugins and gate testing

2015-01-12 Thread Ian Wienand
Hi,

With [1] merged, we now have people working on creating external
plugins for devstack.

I worry about use of arbitrary external locations as plugins for gate
jobs.  If a plugin is hosted externally (github, bitbucket, etc) we
are introducing a whole host of problems when it is used as a gate
job.  Lack of CI testing for proposed changes, uncertain uptime of the
remote end, questions about the ability to accept contributions, and
lack of administrative access (and the consequent ability to recover
from bad merges) are a few.

I would propose we agree that plugins used for gate testing should be
hosted in stackforge unless there are very compelling reasons
otherwise.

To that end, I've proposed [2] as some concrete wording.  If we agree,
I could add some sort of lint for this to project-config testing.

Thanks,

-i

[1] https://review.openstack.org/#/c/142805/ (Implement devstack external 
plugins)
[2] https://review.openstack.org/#/c/146679/ (Document use of plugins for gate 
jobs)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Consolidating efforts around Fedora/Centos gate job

2014-04-10 Thread Ian Wienand
Hi,

To summarize recent discussions, nobody is opposed in general to
having Fedora / CentOS included in the gate.  However, it raises a
number of "big" questions: which job(s) to run on Fedora, where the
quota for extra jobs comes from, how we get the job on multiple
providers, how stable it will be, how we will handle new releases,
CentOS vs Fedora, etc.

I think we agreed in [1] that the best thing to do is to start small,
get some experience with multiple platforms and grow from there.  Thus
the decision to target a single job to test just incoming devstack
changes on Fedora 20.  This is a very moderate number of changes, so
adding a separate test will not have a huge impact on resources.

Evidence points to this being a good place to start.  People
submitting to devstack might have noticed comments from "redhatci"
like [2] which reports runs of their change on a variety of rpm-based
distros.  Fedora 20 has been very stable, so we should not have many
issues.  Making sure it stays stable is very useful to build on for
future gate jobs.

I believe we decided that to make a non-voting job we could just focus
on running on Rackspace and avoid the issues of older fedora images on
hp cloud.  Longer term, either a new hp cloud version comes, or DIB
builds the fedora images ... either way we have a path to upgrading it
to a voting job in time.  Another proposal was to use the ooo cloud,
but dprince feels that is probably better kept separate.

Then we have the question of the nodepool setup scripts working on
F20.  I just tested the setup scripts from [3] and it all seems to
work on a fresh f20 cloud image.  I think this is thanks to kchamart,
peila2 and others who've fixed parts of this before.

So, is there:

 1) anything blocking having f20 in the nodepool?
 2) anything blocking a simple, non-voting job to test devstack
changes on f20?

Thanks,

-i

[1] 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-08-19.01.log.html#l-89
[2] http://people.redhat.com/~iwienand/86310/
[3] 
https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] local.conf ini file setting issues

2014-10-06 Thread Ian Wienand

Hi,

Rather than adding more MAGIC_VARIABLE="foo" variables to devstack
that really only add lines to config files, I've been asking people to
add them to local.conf and provide additional corresponding
documentation if required.

However, increased use has exposed some issues, now covered by several
reviews.  There seem to be three of them (illustrated in the sketch
after this list):

 1. preservation of quotes
 2. splitting of arguments with an '=' in them
 3. multi-line support
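
For illustration, these are the kinds of settings that currently trip
up the parsing (the option names here are made up):

---
[[post-config|$NOVA_CONF]]
[DEFAULT]
# 1. quotes that must survive into the target file
my_description = "a quoted value"
# 2. a value that itself contains an '='
my_filter = key=value
# 3. a multi-line value
my_list = first,
          second
---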

We have several reviews that are not dependent on each other but which
will all conflict.  If we agree, I think we should:

 1. merge [1] to handle quotes
 2. merge [2] to handle '='s
 3. extract just multi-line support from [3]

All include test cases, which should increase confidence.

--

I did consider re-implementing this as a python tool; there were some
things that made me think twice.  Firstly, ini settings in local.conf
are expected to expand shell vars (the path to neutron plugin configs,
etc.), so you end up shelling out anyway.  Secondly, ConfigParser
doesn't like "[[foo]]" as a section name (it drops the trailing ],
maybe a bug), so you have to start playing games there.  Using a
non-standard library (oslo.config) would be a big change to devstack's
current usage of dependencies.

-i

[1] https://review.openstack.org/124227
[2] https://review.openstack.org/124467/
[3] https://review.openstack.org/124502

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [bashate] towards "inbox zero" on bashate changes, release?

2014-10-13 Thread Ian Wienand
Hi,

I took the liberty of rebasing and approving the fairly obvious and
already +1'd bashate changes today that had been sitting for quite a
while.  What's left is minimal and falls into three categories:

1) changes for auto-detection.  IMO, we should drop all these and just
   leave bashate as taking a list of files to check, and let
   test-harnesses fix it.  Everyone using it at the moment seems fine
   without them

 https://review.openstack.org/110966 (Introduce directories as possible 
arguements)
 https://review.openstack.org/126842 (Add possibility to load checks 
automatically)
 https://review.openstack.org/117772 (Implement .bashateignore handling)
 https://review.openstack.org/113892 (Remove hidden directories from discover)

2) status-quo changes requiring IMO greater justification

 https://review.openstack.org/126853 (Small clean-up)
 https://review.openstack.org/126842 (Add possibility to load checks 
automatically)
 https://review.openstack.org/127473 (Put all messages into separate package)

3) if/then checking; IMO the change is a minor regression

 https://review.openstack.org/127052 (Fixed "if-then" check when "then" is not 
in the end of line)

Maybe it is time for a release?  One thing; does the pre-release check
run over TOT devstack and ensure there are no errors?  We don't want
to release and then 10 minutes later gate jobs start failing.

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FreeBSD host support

2014-10-27 Thread Ian Wienand

I do not want to hijack this thread with Solaris specific questions,
but this point is a major sticking point for us too.  To my
knowledge, modifying devstack for anything not RHEL/Ubuntu is out of
the question (they're not interested in supporting other OSes).


I think if the question is "does devstack want a review that adds the
bash equivalent of #ifdef SOLARIS over everything and happened to
sort-of work for someone once, with no CI and a guarantee of
instantaneous bit-rot" the answer is predictable.

If the question is more "does devstack want cleaner abstractions
between platform and deployment, backed up by CI and active
involvement" then I cannot see how that would be a bad thing.

For mine, integrating with CI would be the *first* step.

Until infrastructure was ready and able to run the devstack-gate
scripts on Solaris/FreeBSD/... nodes and devstack had a non-voting job
I personally would be very negative about merging changes for support.
Frankly I'm not going to be building and maintaining my own
FreeBSD/Solaris systems and hand-testing patches for them, so seeing
something happening in CI is the only way I could be sure any proposed
changes actually work before I spend time reviewing them.

Even if devstack is not the right vehicle, integrating these platforms
to the point that "git review" can run some sort of test -- anything
really -- is going to be much more compelling for someone to +2

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bashate] towards "inbox zero" on bashate changes, release?

2014-10-28 Thread Ian Wienand
On 10/14/2014 04:03 PM, Ian Wienand wrote:
> Maybe it is time for a release?  One thing; does the pre-release check
> run over TOT devstack and ensure there are no errors?  We don't want
> to release and then 10 minutes later gate jobs start failing.

Just to loop back on this ...

Our main goal here should be to get [1] merged so we don't further
regress on any checks.

TOT bashate currently passes against devstack, so we can release as
is.  Two extra changes we might consider as useful in a release:

 - https://review.openstack.org/131611 (Remove automagic file finder)
   I think we've agreed to just let test-frameworks find their own
   files, so get rid of this

 - https://review.openstack.org/131616 (Add man page)
   Doesn't say much, but it can't hurt

As future work, we can do things like add warning-level checks,
automatically generate the documentation on errors being checked, etc.

-i

[1] https://review.openstack.org/128809 (Fix up file-matching in bashate tox 
test)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci pass through turing complete config options?

2014-11-06 Thread Ian Wienand

On 10/29/2014 12:42 AM, Doug Hellmann wrote:

Another way to do this, which has been used in some other projects,
is to define one option for a list of “names” of things, and use
those names to make groups with each field


I've proposed that in [1].  I look forward to some -1's :)


OTOH, oslo.config is not the only way we have to support
configuration. This looks like a good example of settings that are
more complex than what oslo.config is meant to handle, and that
might be better served in a separate file with the location of that
file specified in an oslo.config option.


My personal opinion is that yet-another-config-file in possibly
yet-another-format is just a few lines of code, but has a pretty high
cost for packagers, testers, admins, etc.  So I feel like that's
probably a last resort.

-i

[1] https://review.openstack.org/133138

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] "recheck no bug" and comment

2014-07-24 Thread Ian Wienand

On 07/16/2014 11:15 PM, Alexis Lee wrote:

What do you think about allowing some text after the words "recheck no
bug"?


I think this is a good idea; I am often away from a change for a bit,
something happens in-between and Jenkins fails it, but chasing it down
days later is fairly pointless given how fast things move.

It would be nice if I could indicate "I thought about this".  In fact,
there might be an argument for *requiring* a reason.

I proposed [1] to allow this

-i

[1] https://review.openstack.org/#/c/109492/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-05 Thread Ian Wienand
On Mon, Aug 05, 2013 at 12:03:07PM -0500, Dean Troyer wrote:
> * proposals to use a tool to automatically decide between package and
> PyPI (harlowja, sdague):  this works well on the surface, but anything
> that does not take in to account the dependencies in these packages
> going BOTH ways is going to fail.  For example: on RHEL6 setuptools is
> 0.6.10, we want 0.9.8 (the merged release w/distribute).  Removing
> python-setuptools will also remove python-nose, numpy and other
> packages depending on what is installed.  So fine, those can be
> re-installed with pip.  But a) we don't want to rebuild numpy (just
> bear with me here), and b) the packaged python-nose 0.10.4 meets the
> version requirement in requirements.txt so the package will be
> re-installed, bringing with it python-setuptools 0.6.10 overwriting
> the pip installation of 0.9.8.

I think Anvil is working with the package management system so that
scenario doesn't happen.  The "fine, those can be re-installed with
pip" bit is where the problem occurs.

The Anvil process is, as I understand it:

 1) parse requirements.txt
 2) see what can be satisfied via yum
 3) pip download the rest
 4) remove downloaded dependencies that are satisfied by yum
 5) make rpms of now remaining pip downloads
 6) install the *whole lot*

The "whole lot" bit is important, because you can't have conflicts
there.  Say requirements.txt brings in setuptools-0.9.8; Anvil will
create a python-setuptools 0.9.8 package.  If rpm-packaged nose relies
*exactly* python-setuptools@0.6.10, there will be a conflict -- I
think the installation would fail to complete.  But likely, that
package doesn't care and gets it dep satisfied by 0.9.8 [1]

Because you're not using pip to install directly, you don't have this
confusion around who owns files in /usr/lib/python2.6/site-packages
and have rpm or pip overwriting each other -- RPM owns all the files
and that's that.

> Removing python-setuptools will also remove python-nose, numpy and
> other packages depending on what is installed.

Nowhere is python-setuptools removed; just upgraded.

Recently trying out Anvil, it seems to have the correct solution to my
mind.

-i

[1] From a quick look at Anvil I don't think it would handle this
situation, which is probably unsolvable (if an rpm package wants one
version and devstack wants another, and it all has to be in
/usr/lib/python2.6/site-packages, then *someone* is going to lose).
But I don't think <= or == dependencies in rpms are too common, so you
can just drop in the new version and hope it remains backwards
compatible :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-06 Thread Ian Wienand
On Mon, Aug 05, 2013 at 03:37:24PM -0700, Jay Buffington wrote:
> I used Anvil for the first three months, but it required constant
> updating of dependency versions and it didn't support quantum.

What do you mean by "updating" here?  The rpm packages being updated
and causing a lot of churn, or some manual intervention being required
on your part?

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] Python dependencies: PyPI vs distro packages

2013-08-08 Thread Ian Wienand
On Thu, Aug 08, 2013 at 10:10:09AM -0300, Monty Taylor wrote:
> I don't think we will gain much by auto-generating packages.

What really is the difference between devstack auto-generating a
package and having a human basically doing the same thing and sticking
it in a repo?  It just seems unreliable.

Using the "anvil" approach, it seems pacakges that can be souced from
repos (either main, or add-on like RDO or the Ubuntu equivalents) will
be, and others built automatically.

Over time, the latest releases of RDO (or any other OpenStack
distribution) should be converging on devstack -- if it's going to
ship it will need those dependencies packaged eventually.  In fact,
devstack spitting out "I needed XYZ to build and you don't have them"
is probably very helpful to distributors?

If it doesn't, then you will be tending towards having every
dependency automatically built (your mini-distro scenario, I guess).
At some point, I think it's fair to say "hey, your distro doesn't
appear to be doing any useful distribution for OpenStack; you're not
supported until you catch up".

-i

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Ian Wienand
Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while it's being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] puppet & heat config file

2013-08-25 Thread Ian Wienand
Hi,

The current heat puppet modules don't correctly create the heat config
file [1].

My first attempt [2] created separate config files for each heat
component.  It was pointed out that configuration had been
consolidated into a single file [3].  My second attempt [4] did this,
but consensus seems to be lacking that this will work.

As Mathieu alludes to, it does seem that there is a critical problem
with the single config file in that it is not possible to specify
separate bind_port values for individual daemons [5].  The current TOT
config files [6] don't seem to provide a clear example to work from?

What output should the puppet modules be producing?  Would it make
sense for them to create the multiple-configuration-file scenario for
now, and migrate to the single-configuration-file at some future
point; since presumably heat will remain backwards compatible for some
time?

-i

[1] https://bugs.launchpad.net/puppet-heat/+bug/1214824
[2] https://review.openstack.org/#/c/43229/
[3] https://review.openstack.org/#/c/39980/
[4] https://review.openstack.org/#/c/43406/
[5] https://github.com/openstack/heat/blob/master/heat/common/wsgi.py#L53
[6] https://github.com/openstack/heat/tree/master/etc/heat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][heat] Which repo to use in docs -- git.openstack.org or github.com?

2015-10-20 Thread Ian Wienand

On 10/21/2015 04:20 AM, Christopher Aedo wrote:

On the other hand, the fact that github renders the README nicely
(including images) leads to a much more inviting first impression.


I think it's nice to just have a standard docs target, even if it just
includes the README; [1] is an example.  Then you've got the framework
to actually document something in the future, should you wish.

[1] https://review.openstack.org/#/c/228694/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DevStack errors...

2015-11-02 Thread Ian Wienand

On 11/03/2015 10:51 AM, Thales wrote:

I'm trying to get DevStack to work, but am getting errors.  Is this
a good list to ask questions for this?  I can't seem to get answers
anywhere I look.  I tried the openstack list, but it kind of moves
slow.


Best to file a bug in launchpad, and *attach the full log*.  Very
often with devstack the root cause of a problem happens well before
the issue manifests itself as an error.

I won't say every bug gets attention immediately, but it's something
useful you can ping people with in irc, etc.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Ian Wienand

On 11/05/2015 01:43 AM, Matthew Thode wrote:

python wheel repo could help maybe?


So I think we (i.e. greghaynes) have got that mostly in place; we just
got a bit side-tracked.

[1] adds mirror slaves that build the wheels using pypi-mirror [2],
and then [3] adds the jobs.

This should give us wheels of everything in requirements.

I think this could be enhanced by using bindep [4] to install
build requirements on the mirrors; in chat we tossed around some ideas
of making this a puppet provider, etc.
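
For example, something along these lines on the mirror slaves (a
sketch; assumes bindep's brief output mode and apt, and the wheel
directory is illustrative):

---
# Install the build requirements listed in the project's bindep file,
# then build wheels for everything in requirements.
sudo apt-get install -y $(bindep -b)
pip wheel -r requirements.txt -w /srv/wheels
---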

-i

[1] https://review.openstack.org/165240
[2] https://git.openstack.org/cgit/openstack-infra/pypi-mirror
[3] https://review.openstack.org/164927
[4] https://git.openstack.org/cgit/openstack-infra/bindep

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] package dependency installs after recent clean-ups

2015-11-08 Thread Ian Wienand

devstack maintainers, et al.

If your CI is failing with missing packages (xml bindings failing to
build, postgres bindings, etc), it may be due to some of the issues
covered below.

I believe some of the recent changes around letting pip build wheels
and cleaning up some of the package dependencies have revealed that
devstack is not quite installing build pre-reqs as we thought it was.

[1] fixes things so we actually install the packages listed in
"general"

[2] is a further cleanup of the "devlib" packages, which are no longer
installed since we removed tools/build_wheels.sh

I believe a combination of what was removed in [3] and [2] was hiding
the missing installs from [1].  Thus we can clean up some of the
dependencies via [4].

Stacked on that are some less important further clean-ups

Reviews appreciated, because it seems to have broken some CI, see [5]

-i

[1] https://review.openstack.org/242891
[2] https://review.openstack.org/242894
[3] https://review.openstack.org/242895
[4] https://review.openstack.org/242895
[5] https://review.openstack.org/242536

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra] Getting a bleeding edge libvirt gate job running

2015-11-17 Thread Ian Wienand

On 11/18/2015 06:10 AM, Markus Zoeller wrote:

This was a trigger to see if we can create a gate job which utilizes
the latest, bleeding edge, version of libvirt to test such features.



* Is already someone working on something like that and I missed it?


I believe the closest we have got is probably [1]; pulling apart some
of the comments there might be helpful.

In short, a devstack plugin that installs the latest libvirt is
probably the way to go.
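
In rough terms, such a plugin only needs to hook the install phase and
pull in newer libvirt packages; something like this sketch (the repo
URL and package names are placeholders):

---
# devstack/plugin.sh -- "latest libvirt" plugin sketch
if [[ "$1" == "stack" && "$2" == "install" ]]; then
    # Enable a repo carrying newer libvirt builds, then upgrade.
    sudo yum-config-manager --add-repo \
        https://example.org/virt-preview/virt-preview.repo
    sudo yum -y upgrade libvirt libvirt-daemon qemu-kvm
fi
---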

Ongoing, the only issue I see is that we may do things to base devstack
that conflict with or break this plugin, as we are more-or-less assuming
the distro version (see things like [2]; stuff like this comes up every
now and then).


* If 'no', is there already a repo which contains the very latest
   libvirt builds which we can utilize?


For Fedora, there is virt-preview [3] at least.

-i

[1] https://review.openstack.org/#/c/108714/
[2] https://review.openstack.org/246501
[3] https://fedoraproject.org/wiki/Virtualization_Preview_Repository

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] How to source openrc?

2015-11-22 Thread Ian Wienand
On 11/23/2015 03:17 PM, Wilence Yao wrote:
> source openrc admin admin
> openrc:90: unknown condition: -v

I'm pretty sure you're using zsh -- we only support bash (stack.sh
checks for bash; maybe we should add that check to openrc).

Anyway, you can probably follow [1] to source it; other things might
break though.

-i

[1] 
http://docs.openstack.org/developer/devstack/faq.html#can-i-at-least-source-openrc-with-zsh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][DIB] diskimage-builder and python 2/3 compatibility

2015-12-09 Thread Ian Wienand
On 12/09/2015 07:15 AM, Gregory Haynes wrote:
> We ran in to a couple issues adding Fedora 23 support to
> diskimage-builder caused by python2 not being installed by default.
> This can be solved pretty easily by installing python2, but given that
> this is eventually where all our supported distros will end up I would
> like to find a better long term solution (one that allows us to make
> images which have the same python installed that the distro ships by
> default).

So I wonder if we're maybe hitting premature optimisation with this

> We use +x and a #! to specify a python
> interpreter, but this needs to be python3 on distros which do not ship a
> python2, and python elsewhere.

> Create a symlink in the chroot from /usr/local/bin/dib-python to
> whatever the apropriate python executable is for that distro.

This is a problem for anyone wanting to ship a script that "just
works" across platforms.  I found a similar discussion about a python
launcher at [1] which covers most points and is more or less what
is described above.

I feel like contribution to some sort of global effort in this regard
might be the best way forward, and then ensure dib uses it.
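
For concreteness, the symlink idea quoted above amounts to something
like this inside the chroot (a sketch only; paths are illustrative):

---
# Point dib-python at whichever interpreter the distro actually ships.
if [ -x /usr/bin/python2 ]; then
    ln -sf /usr/bin/python2 /usr/local/bin/dib-python
else
    ln -sf /usr/bin/python3 /usr/local/bin/dib-python
fi
---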

-i

[1] https://mail.python.org/pipermail/linux-sig/2015-October/00.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [diskimage-builder] Howto refactor?

2016-05-31 Thread Ian Wienand

On 06/01/2016 02:10 PM, Andre Florath wrote:

My long term goal is, to add some functionality to the DIB's block
device layer, like to be able to use multiple partitions, logical
volumes and mount points.


Some thoughts...

There's great specific info in the READMEs of the changes you posted
... but I'm missing a single big-picture context of what you want to
build on top of all this and how the bits fit into it.  We don't have
a formalised spec or blueprint process, but something that someone who
knows *nothing* about this can read and follow through will help; I'd
suggest an etherpad, but anything really.  At this point you are
probably the world expert on dib's block device layer; you just need
to bring the rest of us along :)

There seems to be a few bits that are useful outside the refactor.
Formalising python elements, extra cleanup phases, dib-run-parts
fixes, etc.  Splitting these out, we can get them in quicker and it
reduces the cognitive load for the rest.  I'd start there.

#openstack-infra is probably fine to discuss this too, as other
dib-knowledgeable people hang out there.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] create periodic-ci-reports mailing-list

2016-04-13 Thread Ian Wienand

On 04/14/2016 03:22 AM, Jeremy Stanley wrote:

Mentioned in IRC as well, but would an RSS/ATOM feed be a good
compromise between active notification and focus on the dashboard as
an entry point to researching job failures?


For myself, simply ordering by date on the log page as per [1] would
make it one step easier to write a local cron job to pick up the
latest.

-i

[1] https://review.openstack.org/#/c/301989/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][releases] Remove diskimage-builder from releases

2016-04-18 Thread Ian Wienand

Hi,

diskimage-builder has fallen under the "centralised release tagging"
mechanism [1], presumably because it is under tripleo.  I'd like to
propose that we don't do that.

Firstly, dib doesn't have any branches to manage.

dib's other main function is as part of the daily CI image builds.
This means to get a fix into the CI images in a somewhat timely
fashion, we approve the changes and make sure our releases happen
before 14:00 UTC, and monitor the build results closely in nodepool.

I don't expect the stable release team to be involved with all this;
but if we miss windows then we're left either going to the effort of
getting one of the handful of people with permissions to do a manual
rebuild, or waiting yet another day to get something fixed.  Add some
timezones into this, and simple fixes are taking many days to get into
builds.  Thus adding points where we can extend this by another 24
hours really, well, sucks.

I have previously suggested running dib from git directly to avoid the
release shuffle, but it was felt this was not the way to go [2].  I've
proposed putting the release group back with [3] and cleaning up with
[4].

Thanks,

-i

[1] https://review.openstack.org/298866
[2] https://review.openstack.org/283877
[3] https://review.openstack.org/307531
[4] https://review.openstack.org/307534

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][releases] Remove diskimage-builder from releases

2016-04-19 Thread Ian Wienand

On 04/20/2016 03:25 AM, Doug Hellmann wrote:

It's not just about control, it's also about communication. One of
the most frequent refrains we hear is "what is OpenStack", and one
way we're trying to answer that is to publicize all of the things
we release through releases.openstack.org.


So for dib, this is mostly about documentation?

We don't have the issues around stable branches mentioned in the
readme, nor do we worry about the requirements/constraints (proposal
bot has always been sufficient?).


Centralizing tagging also helps us ensure consistent versioning
rules, good timing, good release announcements, etc.


We so far haven't had issues keeping the version number straight.

As mentioned, the timing has extra constraints, due to use in periodic
infra jobs, that I don't think the release team wants to be involved
with.  It's not like the release team will be going through the
changes in a new release and deciding if they seem OK or not (although
they're welcome to do dib reviews before things get committed :), so I
don't see what timing constraints will be monitored in this case.

When you look at this from my point of view: dib was left in (and is
still in) an unreleasable state that I've had to clean up [1], we've
now missed yet another day's build [2], and I'm not sure what's
different except that I now have to add probably 2 days of latency to
the process of getting fixes out there.

To try and be constructive: is what we want a proposal-bot job that
polls for the latest release and adds it to the diskimage-builder.yaml
file?  That would seem to cover the documentation component of this.

Or, if you want to give diskimage-builder-release group permissions on
the repo, so we can +2 changes on the diskimage-builder.yaml file, we
could do that. [3]

-i

[1] https://review.openstack.org/#/c/306925/
[2] https://review.openstack.org/#/c/307542/
[3] my actual expectation of this happening is about zero

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][releases] Remove diskimage-builder from releases

2016-04-19 Thread Ian Wienand

On 04/20/2016 06:09 AM, Fox, Kevin M wrote:

I've seen dib updated and broken things.



I've seen dib elements updated and things broke (centos6 removal in
particular hurt.)


By the time it gets to a release, however, anything we've broken is
already baked in.  Any changes in there have already passed review and
whatever CI we have.

(not to say we can't do better CI to break stuff less.  But that's
outside the release team's responsibility)

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Jobs failing : "No matching distribution found for "

2016-05-10 Thread Ian Wienand
So it seems the just-released pip 8.1.2 has brought in a new version
of setuptools with it, which creates canonical names per [1] by
replacing "." with "-".

The upshot is that pip is now looking for the wrong name on our local
mirrors.  e.g.

---
 $ pip --version
pip 8.1.2 from /tmp/foo/lib/python2.7/site-packages (python 2.7)
$ pip --verbose  install --trusted-host mirror.ord.rax.openstack.org -i 
http://mirror.ord.rax.openstack.org/pypi/simple 'oslo.config>=3.9.0'
Collecting oslo.config>=3.9.0
  1 location(s) to search for versions of oslo.config:
  * http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/
  Getting page http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/
  Starting new HTTP connection (1): mirror.ord.rax.openstack.org
  "GET /pypi/simple/oslo-config/ HTTP/1.1" 404 222
  Could not fetch URL 
http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/: 404 Client Error: 
Not Found for url: http://mirror.ord.rax.openstack.org/pypi/simple/oslo-config/ 
- skipping
  Could not find a version that satisfies the requirement oslo.config>=3.9.0 
(from versions: )
---

(note oslo-config, not oslo.config).  Compare to:

---
$ pip --verbose install --trusted-host mirror.ord.rax.openstack.org -i 
http://mirror.ord.rax.openstack.org/pypi/simple 'oslo.config>=3.9.0'
You are using pip version 6.0.8, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting oslo.config>=3.9.0
  Getting page http://mirror.ord.rax.openstack.org/pypi/simple/oslo.config/
  Starting new HTTP connection (1): mirror.ord.rax.openstack.org
  "GET /pypi/simple/oslo.config/ HTTP/1.1" 200 2491
---
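
For reference, the normalisation that new pip applies per [1] is
essentially lower-casing and collapsing runs of ".", "-" and "_" into a
single "-"; in shell terms roughly:

---
normalize() { echo "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[-_.]+/-/g'; }
normalize "oslo.config"   # -> oslo-config
---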

I think infra jobs that run on bare-precise are hitting this
currently, because that image was just built.  Other jobs *might* be
isolated from this for a bit, until the new pip gets out there on
images, but "winter is coming", as they say...

There is [2] available to make bandersnatch use the new names.
However, I wonder if this might have the effect of breaking the
mirrors for old versions of pip that ask for the "."?

pypi proper does not seem affected, just our mirrors.

I think working with bandersnatch to get a fixed version out ASAP is
probably the best way forward, rather than us trying to pin to old pip
versions.

-i

[1] https://www.python.org/dev/peps/pep-0503/
[2] 
https://bitbucket.org/pypa/bandersnatch/pull-requests/20/fully-implement-pep-503-normalization/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][QA] Changing logging format to match documented defaults

2015-12-22 Thread Ian Wienand
On 12/23/2015 05:55 AM, Ronald Bradford wrote:
> I have observed that devstack uses custom logging formatting that
> differs from the documented defaults.  An example is for nova, which
> is defined in [1]

The original point mentioned there of using "*_name" for extra
verbosity still seems relevant [1]

> logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s
> %(name)s [%(request_id)s *%(user_identity)s*] %(instance)s%(message)s

user_identity is still just "{user} {tenant} ..." (which are IDs, I
believe?) [2].

> This logging variable is also defined differently across 4
> projects (nova, cinder, keystone and glance) so ultimately the goal would
> be to ensure they may present documented configuration defaults.

Since it's hard enough to debug already when tempest gets going with
multiple users/tenants, I think keeping the verbose logging is worth
it.  However, if a bunch of copy-paste setting of this can be
refactored, I'm sure we'd consider it favourably.

-i

[1] 
http://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=6f13ba33d84b95808fc2a7672f332c1f0494e741
[2] 
https://git.openstack.org/cgit/openstack/oslo.context/tree/oslo_context/context.py#n43

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-19 Thread Ian Wienand

On 01/20/2016 12:53 PM, Robert Collins wrote:

I suspect we'll see fallout in unit tests too, once new images are
built.


If the images can build ...

This was marked as deprecated, I understand, but the removal is very
unfortunate [1] considering it's really just a
shoot-yourself-in-the-foot operation.

From the latest runs, on Ubuntu we are using pip to over-install
system packages of

 six
 requests
 netaddr
 PyYAML
 PyOpenSSL
 jsonpointer
 urllib3
 PyYAML
 pyOpenSSL

On CentOS it is

 requests
 PyYAML
 enum34
 ipaddress
 numpy

The problem is that we can't remove these system packages with the
package-manager from the base images, because other packages we need
rely on having them installed.  Just removing the directory as pip
used to do has been enough to keep things going.

So, what to do?  We can't stay at pip < 8 forever, because I'm sure
there will be some pip problem we need to patch soon enough.

Presume we can't remove the system python-* packages for these tools
because other bits of the system rely on them.  We've been down the path
of creating dummy packages before, I think ... that never got very
far.

I really don't know how, in the world of devstack plugins, we'd deploy
a strict global virtualenv.  Heaven knows what "creative" things
plugins are going to come up with if someone hits this (not that I've
proposed anything elegant)...

Would pip accept maybe an environment flag to restore the old ability
to remove based on the egg-info?  Is it really so bad given what
devstack is doing?

-i

[1] 
https://github.com/pypa/pip/commit/6afc718307fea36b9ffddd376c1395ee1061795c


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-19 Thread Ian Wienand

On 01/20/2016 04:14 PM, Ian Wienand wrote:

On 01/20/2016 12:53 PM, Robert Collins wrote:

I suspect we'll see fallout in unit tests too, once new images are
built.


If the images can build ...


yeah, dib is not happy about this either


Just removing the directory as pip
used to do has been enough to keep things going.


To be clear, what happens is that pip removes the egg-info file and
then overwrites the system-installed files.  This is, of course,
unsafe, but we generally get away with it.


Presume we can't remove the system python-* packages for these tools
because other bits of the system rely on it.  We've been down the path
of creating dummy packages before, I think ... that never got very
far.


Another option would be for us to just keep a list of egg-info files
to remove within devstack and more or less do what pip was doing
before.
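A rough sketch of what that could look like (package list and paths are
illustrative only; on RPM platforms the metadata lives under
site-packages rather than dist-packages):

---
# hedged sketch: mimic the old pip behaviour by dropping the distutils
# metadata so a subsequent "pip install -U" will overwrite the system files
for pkg in six requests PyYAML; do
    sudo rm -rf /usr/lib/python2.7/dist-packages/${pkg}-*.egg-info
done
sudo pip install -U six requests PyYAML
---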


Would pip accept maybe a environment flag to restore the old ability
to remove based on the egg-info?  Is it really so bad given what
devstack is doing?


I proposed a revert in [1] which I'm sure people will have opinions
on.

[1] https://github.com/pypa/pip/pull/3389

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-20 Thread Ian Wienand
On 01/20/2016 06:21 PM, Andreas Jaeger wrote:
> Now docs, pep8, and python27 are broken as well here:
> https://review.openstack.org/#/c/268687/

Ok, so this is a weird one.  On trusty, argparse is sort of in, and
sort of out of the virtualenv.  I think it has to do with [1]

---
(test)ubuntu@trusty:/tmp$ pip install argparse
Requirement already satisfied (use --upgrade to upgrade): argparse in 
/usr/lib/python2.7
Cleaning up...
(test)ubuntu@trusty:/tmp$ pip install --upgrade argparse
Downloading/unpacking argparse from 
https://pypi.python.org/packages/2.7/a/argparse/argparse-1.4.0-py2.py3-none-any.whl#md5=c37216a954c8669054e2b2c54853dd49
  Downloading argparse-1.4.0-py2.py3-none-any.whl
Installing collected packages: argparse
  Found existing installation: argparse 1.2.1
Not uninstalling argparse at /usr/lib/python2.7, outside environment 
/tmp/test
Successfully installed argparse
Cleaning up...
---

However, this has now turned into an error

---
(test)ubuntu@trusty:/tmp$ pip install --upgrade pip
Downloading/unpacking pip from 
https://pypi.python.org/packages/py2.py3/p/pip/pip-8.0.0-py2.py3-none-any.whl#md5=7b1da5eba510e1631791dcf300657916
  Downloading pip-8.0.0-py2.py3-none-any.whl (1.2MB): 1.2MB downloaded
Installing collected packages: pip
  Found existing installation: pip 1.5.4
Uninstalling pip:
  Successfully uninstalled pip
Successfully installed pip
Cleaning up...
(test)ubuntu@trusty:/tmp$ pip install -U argparse
Collecting argparse
  Downloading argparse-1.4.0-py2.py3-none-any.whl
Installing collected packages: argparse
  Found existing installation: argparse 1.2.1
Detected a distutils installed project ('argparse') which we cannot uninstall. 
The metadata provided by distutils does not contain a list of files which have 
been installed, so pip does not know which files to uninstall.
---

So, yeah :(

-i

[1] https://github.com/pypa/pip/issues/1570

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Make libguestfs available on pypi

2015-07-29 Thread Ian Wienand
On 07/30/2015 04:55 AM, Kris G. Lindgren wrote:
> The following bug has already been created over a year ago [1], and
> it looks like most of the work on the libguestfs side is already
> done [2].  It seems something about a complaint of licensing per
> the bug report.

I think it's best to follow up in that bug.

On the license front, to quote from an internal email I saw fly by
from Nick on 20-Jul-2015, about the PyPI sign-up terms in
question:

---
 Van started drafting some amendments back in February:
 https://bitbucket.org/vanl/pypi/commits/all

 Key changes are here:
 
https://bitbucket.org/vanl/pypi/commits/8df8e0295c0a719e963f7c3ce430284179f03b1f

 Further clarifications at
 
https://bitbucket.org/vanl/pypi/commits/734b1f49776d1f7f5d0671306f61a90aad713e5d
 and
 
https://bitbucket.org/vanl/pypi/commits/0e94b169e81306607936912ecc3c42312aac5eb7

 I'll ping the Board list about next steps in getting those amendments
 formally approved and submitted as a PR to the main PyPI repo.
---

So it is being looked at, but I'm not sure of the time-frame.

-i

> [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1075594

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] No image was created after running "nodepoold -d $DAEMON_ARGS"

2015-07-29 Thread Ian Wienand

On 07/30/2015 12:34 PM, Xie, Xianshan wrote:

DEBUG nodepool.NodePool: Finished node launch calculation
INFO nodepool.DiskImageBuilderThread: Running disk-image-create ...


So, nothing after this?

Nothing jumps out; my first thought would be to check if disk-image-create
is actually running and go from there.  If it's stuck, pointing strace at
it might give a clue.
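Something along these lines (the PID is obviously whatever you find):

---
# is disk-image-create still doing anything?
pgrep -af disk-image-create
# attach to a suspect process and watch what it's blocked on
sudo strace -f -p <PID> -e trace=read,write,execve
---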

Full logs and config would help, but my best suggestion is to
jump on IRC in #openstack-infra; during USA waking hours you'll get the
best response.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] No image was created after running "nodepoold -d $DAEMON_ARGS"

2015-07-29 Thread Ian Wienand

On 07/30/2015 01:51 PM, Ian Wienand wrote:

Nothing jumps out


Something I just thought of that has caused problems is to check your
users; I think running things by hand as root and then switching back
to an unprivileged user can cause problems, as the second run hits
things it can't modify.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] CI System is broken

2015-07-29 Thread Ian Wienand

On 07/29/2015 07:33 PM, Andreas Jaeger wrote:

Currently Zuul is stuck and not processing any events at all, thus no
jobs are checked or gated.


I think whatever happened has happened again; if jhesketh is out it
might be a few hours from this email before people with the right
access are back online to fix it.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-28 Thread Ian Wienand

On 09/24/2015 08:18 PM, Andrey Kurilin wrote:

I agree that the wrong order of arguments misleads while debugging errors, BUT
how can we prevent regression?


Spell it out and use keyword args?

  assertEqual(expected="foo", observed=...)

is pretty hard to mess up

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-14 Thread Ian Wienand

On 10/14/2015 11:08 AM, Zaro wrote:

We are soliciting feedback so please let us know what you think.


Since you asked :)

Mostly it's just different which is fine.  Two things I noticed when
playing around, shown in [1]

When reviewing, the order "-1 0 +1" is kind of counter-intuitive to
the usual dialog layout of the "most positive" thing on the left;
e.g. [OK] [Cancel] dialogs.  I just found it odd to interact with.

Maybe people default themselves to -1 though :)

The colours for +1/-1 seem to be missing.  You've got to think a lot
more to parse the +1/-1 rather than just glance at the colours.

-i

[1] http://imgur.com/QWXOMen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] What's required to accept a new distro release?

2015-05-14 Thread Ian Wienand
On 05/15/2015 01:05 PM, Tony Breeds wrote:
> I'm wondering what are the requirements for accepting something
> like:

> -if [[ ! ${DISTRO} =~ 
> (precise|trusty|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then
> +if [[ ! ${DISTRO} =~ 
> (precise|trusty|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|rhel7) ]]; then

Having a CI story makes it a no-brainer...  is anyone working on getting
vivid nodes up in nodepool?  In the past we've generally accepted it if
people have a convincing story that it works (i.e. they're using it
and tempest is working, etc).

Honestly I doubt 7.0|wheezy|sid|testing|jessie all work anyway -- they
don't have any CI (that I know of) -- so we're probably overselling it.

> I figure there must be more to it as utopic was never added.

I think that was approved, failed a test and was never re-merged [1].
So I guess it's more that nobody cared?

-i

[1] https://review.openstack.org/#/c/98844/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Ian Wienand

On 06/03/2015 07:24 AM, Boris Pavlovic wrote:

Really it's hard to find cores that understand whole project, but
it's quite simple to find people that can maintain subsystems of
project.


  We are made wise not by the recollection of our past, but by the
  responsibility for our future.
   - George Bernard Shaw

Less authorities, mini-kingdoms and
turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more 
empowerment of responsible developers and building trust.


-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] issues with beaker/centos7 job

2015-07-01 Thread Ian Wienand
On 07/02/2015 11:52 AM, Emilien Macchi wrote:
> details. I tried to install deltarpm before, but it says : "No Presto
> metadata available for rdo-release".

So I looked into this with Gilles, and that error is a red herring
(it's just saying the rdo repos don't create the presto/deltarpm
stuff); the real issue is that python-requests fails to install [1] a
bit later due to [2] (mentioned in the comments of [3]).

So this is really a upstream packaging issue and not an issue with the
nodepool images.

-i

[1] 
http://logs.openstack.org/83/197783/8/check/gate-puppet-neutron-puppet-beaker-rspec-upgrade-dsvm-centos7/9f628b4/console.html
[2] https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1212145
[3] https://review.openstack.org/#/c/197782/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-11 Thread Ian Wienand
On 03/11/2015 08:10 AM, Robert Collins wrote:
> The wheel has been removed from PyPI and anyone installing testtools
> 1.7.0 now will install from source which works fine.

I noticed the centos7 job failed with the source version.

The failing job was [1], where the back-trace looks like ~45 songs on
Python's greatest-hits album (pasted below).  On the next run [2] it got
the 1.7.1 wheel and "just worked".

Maybe this jumps out at someone as a known issue ...

-i

[1] 
http://logs.openstack.org/49/163249/1/check/check-tempest-dsvm-centos7/8dceac8/logs/devstacklog.txt.gz
[2] 
http://logs.openstack.org/49/163249/1/check/check-tempest-dsvm-centos7/f3b86d5/logs/devstacklog.txt.gz

---
Collecting testtools>=0.9.22 (from 
fixtures>=0.3.14->oslo.concurrency>=1.4.1->keystone==2015.1.dev395)
Downloading 
http://pypi.IAD.openstack.org/packages/source/t/testtools/testtools-1.7.0.tar.gz
 (202kB)

 Installed 
/tmp/easy_install-mV2rSm/unittest2-1.0.0/.eggs/traceback2-1.4.0-py2.7.egg

 Installed 
/tmp/easy_install-mV2rSm/unittest2-1.0.0/.eggs/linecache2-1.0.0-py2.7.egg
 /usr/lib/python2.7/site-packages/setuptools/dist.py:291: UserWarning: The 
version specified (<__main__.late_version instance at 0x34654d0>) is an invalid 
version, this may not work as expected with newer versions of setuptools, pip, 
and PyPI. Please see PEP 440 for more details.
   "details." % self.metadata.version
 Traceback (most recent call last):
   File "", line 20, in 
   File "/tmp/pip-build-aGC1zC/testtools/setup.py", line 92, in 
 setup_requires=deps,
   File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
 _setup_distribution = dist = klass(attrs)
   File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in 
__init__
 self.fetch_build_eggs(attrs['setup_requires'])
   File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 310, in 
fetch_build_eggs
 replace_conflicting=True,
   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 799, 
in resolve
 dist = best[req.key] = env.best_match(req, ws, installer)
   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 
1049, in best_match
 return self.obtain(req, installer)
   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 
1061, in obtain
 return installer(requirement)
   File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 377, in 
fetch_build_egg
 return cmd.easy_install(req)
   File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", 
line 620, in easy_install
 return self.install_item(spec, dist.location, tmpdir, deps)
   File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", 
line 650, in install_item
 dists = self.install_eggs(spec, download, tmpdir)
   File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", 
line 835, in install_eggs
 return self.build_and_install(setup_script, setup_base)
   File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", 
line 1063, in build_and_install
 self.run_setup(setup_script, setup_base, args)
   File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", 
line 1049, in run_setup
 run_setup(setup_script, args)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 240, in 
run_setup
 raise
   File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
 self.gen.throw(type, value, traceback)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 193, in 
setup_context
 yield
   File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
 self.gen.throw(type, value, traceback)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 164, in 
save_modules
 saved_exc.resume()
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 139, in 
resume
 compat.reraise(type, exc, self._tb)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 152, in 
save_modules
 yield saved
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 193, in 
setup_context
 yield
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 237, in 
run_setup
 DirectorySandbox(setup_dir).run(runner)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 267, in 
run
 return func()
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 236, in 
runner
 _execfile(setup_script, ns)
   File "/usr/lib/python2.7/site-packages/setuptools/sandbox.py", line 46, in 
_execfile
 exec(code, globals, locals)
   File "/tmp/easy_install-mV2rSm/unittest2-1.0.0/setup.py", line 87, in 

 'testtools.tests.matchers',
   File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
 dist.run_commands()
   File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
 self.run_command(cmd)
   File "/usr/lib64/python2.7/distutils/dist.py", line 971, 

Re: [openstack-dev] [all] Testtools 1.7.0 may error if you installed it before reading this email

2015-03-11 Thread Ian Wienand
On 03/12/2015 10:37 AM, Ian Wienand wrote:
> File "/tmp/easy_install-mV2rSm/unittest2-1.0.0/unittest2/case.py", line 
> 16, in <module>
>   ImportError: cannot import name range
>   Complete output from command python setup.py egg_info:

OK, so this suggests the version of "six" is wrong.

unittest2 does have an uncapped dependency [1]

Looking at six, "range" was added between 1.3.0 & 1.4.0:

---
changeset:   86:cffb4c1e3ab3
user:Benjamin Peterson 
date:Fri Apr 19 13:46:44 2013 -0400
summary: add six.moves.range alias (fixes #24)

1.4.0140:d5425164c2d9
1.3.0 83:2f26b0b44e7e
---

and yes, centos7 ships with six 1.3 packaged.

Now, keystone *was* intending to upgrade this

---
2015-03-11 01:17:15.176 | Collecting six>=1.9.0 (from keystone==2015.1.dev395)
---

but it hadn't got there yet.  Filed [2].

I think the wheel works because pip gets to it after six has been
upgraded.

Even then, I'm not sure that would fix it.  I've had similar issues
before; I still don't fully understand why, but "pip install a b" does
*not* install a then b, apparently by design [3].  This deep in, I'm
not sure how the dependencies will order.

-i

[1] https://hg.python.org/unittest2/file/459137d78c16/setup.py
[2] https://code.google.com/p/unittest-ext/issues/detail?id=94
[3] https://github.com/pypa/pip/issues/2473

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-23 Thread Ian Wienand

On 03/23/2015 09:20 PM, Deepak Shetty wrote:

Hi all,
   I was wondering if there was a neat way to override the settings file
present in the devstack plugin stackforge project.

For eg: stackforge/devstack-plugin-glusterfs

I plan to use `enable_plugin glusterfs ` in my local to setup
GlusterFS backend for openstack

But I am forced to use the settings that the above repo has.


Can you explain more what you mean?  The glusterfs plugin should have
access to anything defined by the local.conf?

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-24 Thread Ian Wienand
On 03/24/2015 03:17 PM, Deepak Shetty wrote:
> For eg: Look at [1]
> [1] 
> https://github.com/stackforge/devstack-plugin-glusterfs/blob/master/devstack/settings

> I would like ability to change these while I use the enable_plugin
> apporach to setup devstack w/ GlusterFS per my local glusterfs setup

So I think the plugin should do

CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-glusterfs:glusterfs,lvm:lvm1}

i.e. provide a default only if the variable is unset.
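For example, a hedged local.conf sketch (plugin URL per the repo above;
adjust to taste) where the user's value then wins over the plugin's
default:

---
[[local|localrc]]
enable_plugin glusterfs https://github.com/stackforge/devstack-plugin-glusterfs
CINDER_ENABLED_BACKENDS=glusterfs:glusterfs
---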

This seems like one of those "traps for new players" and is one
concern I have with devstack plugins -- that authors keep having to
learn the same lessons independently.  I have added a note on this
to the documentation in [1].

-i

[1] https://review.openstack.org/#/c/167375/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Overriding settings file for devstack plugin

2015-03-25 Thread Ian Wienand

On 03/25/2015 09:28 PM, Sean Dague wrote:

I would instead do the following:
1) CINDER_ENABLED_BACKENDS+=,glusterfs:glusterfs


This is what I was about to suggest.  I'd be willing to believe
ordering could still get tangled depending on exactly what you want --
I think at that point it's best to follow up in a bug and we can pull apart
the specifics.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-25 Thread Ian Wienand

Hi,

We've been having an issue with Centos 7 jobs and the host running
out of memory.  After some investigation, one likely angle seems to be
the memory usage by swift.

See [1] for some more details; but the short story is that the various
swift processes -- even just sitting around freshly installed from
devstack before anything happens -- take up twice as much space on
centos as ubuntu

--- swift (% total system memory) ---
ubuntu  6.6%
centos  12%

In general memory usage is higher on centos, but this one is an
outlier.  Unfortunately, the main difference between the two appears
to be heap allocations (see comments in [1]) which doesn't give a lot
of clues.

The end result is that the host ends up just plain running out of
memory; the OOM killer kicks in and then everything starts
collapsing. I had the host sending me telemetry while it was running;
the last entry before things fell over was [2] and we can see that
it's not just one thing that comes along and sucks up memory, but
death by a thousand cuts.  I think the Centos 7 job is struggling to
fit into the 8gb available so we're susceptible to finding memory
issues first.

Any swift people have some ideas on this?

Thanks

-i

[1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
[2] http://paste.openstack.org/show/196769/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-26 Thread Ian Wienand
On 03/26/2015 04:07 PM, Ian Wienand wrote:
> See [1] for some more details; but the short story is that the various
> swift processes -- even just sitting around freshly installed from
> devstack before anything happens -- take up twice as much space on
> centos as ubuntu
> 
> --- swift (% total system memory) ---
> ubuntu  6.6%
> centos  12%

So after more investigation, it turns out that pyOpenSSL has rewritten
itself in Python, necessitating dependencies on the "cryptography"
package and cffi & pycparser [1].  Examining the heap shows where the
memory has gone missing:

Partition of a set of 205366 objects. Total size = 30969040 bytes.
 Index  Count   % Size   % Cumulative  % Kind (class / dict of class)
 0  67041  33  5712560  18   5712560  18 str
 1  10260   5  2872800   9   8585360  28 dict of pycparser.plyparser.Coord
 2  27765  14  2367552   8  10952912  35 tuple
 3   1215   1  2246760   7  13199672  43 dict (no owner)
 4   1882   1  1972336   6  15172008  49 dict of pycparser.c_ast.Decl
 5  16085   8  1736232   6  16908240  55 list
 6360   0  1135296   4  18043536  58 dict of module
 7   4041   2  1131480   4  19175016  62 dict of pycparser.c_ast.TypeDecl
 8   4021   2  1125880   4  20300896  66 dict of 
pycparser.c_ast.IdentifierType
 9   6984   3   893952   3  21194848  68 types.CodeType
<413 more rows. Type e.g. '_.more' to view.>

If I reinstall the packaged version of pyOpenSSL, all that drops out
and we're back to a more reasonable usage

Partition of a set of 95591 objects. Total size = 12500080 bytes.
 Index  Count   % Size   % Cumulative  % Kind (class / dict of class)
 0  45837  48  3971040  32   3971040  32 str
 1  22843  24  1943416  16   5914456  47 tuple
 2298   0   978160   8   6892616  55 dict of module
 3   6065   6   776320   6   7668936  61 types.CodeType
 4551   1   742184   6   8411120  67 dict (no owner)
 5805   1   725520   6   9136640  73 type
 6   5876   6   705120   6   9841760  79 function
 7805   1   666232   5  10507992  84 dict of type
 8289   0   279832   2  10787824  86 dict of class
 9152   0   159296   1  10947120  88 dict of pkg_resources.Distribution
<310 more rows. Type e.g. '_.more' to view.>

The end result of this is that swift-* processes go from consuming
about 6% of a CI VM's 8GB to 12%.  This ~500MB is enough to push the
host into OOM when tempest gets busy.  For more see [2]; a workaround is [3].

I'll spend a bit more time on this -- I haven't determined if it's
centos or swift specific yet -- but in the meantime, beware of
recent pyOpenSSL.

-i

[1] 
https://github.com/pyca/pyopenssl/commit/fd193a2f9dd8be80d9f42d8dd8068de5f5ac5e67
 
[2] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
[3] https://review.openstack.org/#/c/168217/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-27 Thread Ian Wienand

On 03/27/2015 08:47 PM, Alan Pevec wrote:

But how come that same recent pyOpenSSL doesn't consume more memory
on Ubuntu?


Because we don't use it in CI; I believe the packaged version is
installed before devstack runs on our ubuntu CI VMs.  It's probably a
dependency of some base package there, or something we've
pre-installed.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-03-31 Thread Ian Wienand

On 03/27/2015 08:47 PM, Alan Pevec wrote:

But how come that same recent pyOpenSSL doesn't consume more memory on Ubuntu?


Just to loop back on the final status of this ...

pyOpenSSL 0.14 does seem to use about an order of magnitude more
memory than 0.13 (2mb -> 20mb).  For details see [1].

This is due to the way it now goes through "cryptography" (the
package, not the concept :) which binds to openssl using cffi.  This
ends up parsing a bunch of C to build up the ABI representation, and
it seems pycparser's model of this consumes most of the memory [2].
If that is a bug or not remains to be seen.

Ubuntu doesn't notice this in our CI environment because it comes with
python-openssl 0.13 pre-installed in the image.  Centos started
hitting this when I merged my change to start using as many libraries
from pip as possible.

I have a devstack workaround for centos out (preinstall the package)
[3] and I think a global solution of avoiding it in requirements [4]
(reviews appreciated).
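If you want to check what a host is actually running, a quick sanity
check is something like (Python 2 era syntax):

---
python -c "import OpenSSL; print OpenSSL.__version__, OpenSSL.__file__"
pip freeze | grep -i pyopenssl
# 0.13.x is the old C implementation; 0.14 is the cffi-based rewrite
# that pulls in cryptography/cffi/pycparser
---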

I'm also thinking about how we can better monitor memory usage for
jobs.  Being able to see exactly what change pushed up memory usage by
a large % would have made finding this easier.  We keep some overall
details for devstack runs in a log file, but there is room to do
better.

-i

[1] https://etherpad.openstack.org/p/oom-in-rax-centos7-CI-job
[2] https://github.com/eliben/pycparser/issues/72
[3] https://review.openstack.org/168217
[4] https://review.openstack.org/169596

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-04-01 Thread Ian Wienand

Note: I haven't finished debugging the glusterfs job yet.  This
relates to the OOM that started happening on Centos after we moved to
using as many pip packages as possible.  glusterfs was still failing
even before this.

On 04/01/2015 07:58 PM, Deepak Shetty wrote:

1) So why did this happen on rax VM only, the same (Centos job)on hpcloud
didn't seem to hit it even when we ran hpcloud VM with 8GB memory.


I am still not entirely certain that hp wasn't masking the issue when
we were accidentally giving hosts 32gb RAM.  We can get back to this
once these changes merge.


2) Should this also be sent to centos-devel folks so that they don't
upgrade/update the pyopenssl in their distro repos until the issue
is resolved ?


I think let's give the upstream issues a little while to play-out,
then we decide our next steps around use of the library based on that
information.

thanks

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-01 Thread Ian Wienand

On 04/02/2015 09:02 AM, Jeremy Stanley wrote:

but since parties who don't understand our mostly non-hierarchical
community can see those sets of access controls, they cling to them
as a sign of importance and hierarchy of the people listed within.


There is no hierarchy for submitting code -- that is good.  We all know
situations in a traditional company where people say "that's foo's
area, we don't work on that".

Once code is submitted, there *is* a hierarchy.  The only way
something gets merged in OpenStack is by Brownian motion of this
hierarchy.  These special "cores" float around and as a contributor
you just hope that two of them meet up and decide your change is
ready.  You have zero insight into when this might happen, if at all.
The efficiency is appalling but somehow we get there in the end.

IMO requiring two cores to approve *every* change is too much.  What
we should do is move the responsibility downwards.  Currently, as a
contributor I am only 1/3 responsible for my change making it through.
I write it, test it, clean it up and contribute it; then require the
extra 2/3 to come from the "hierarchy".  If you only need one core,
then that core and I share the responsibility for the change.  In my
mind, this better recognises the skill of the contributor -- we are
essentially saying "we trust you".

People involved in openstack are not idiots.  If a change is
controversial, or a reviewer isn't confident, they can and will ask
for assistance or second opinions.  This isn't a two-person-key system
in a nuclear missile silo; we can always revert.

If you want cores to be "less special" then talking about it or
calling them something else doesn't help -- the only way is to make
them actually less special.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit downtime and upgrade on Saturday 2015-05-09 at 1600 UTC

2015-05-10 Thread Ian Wienand
On 05/10/2015 05:20 AM, James E. Blair wrote:
> If you encounter any problems, please let us know here or in
> #openstack-infra on Freenode.

One minor thing is that after login you're redirected to
"https://review.openstack.org//" (note the double //, which then messes up
various relative links when trying to use gerrit).

I think this is within gerrit's openid provider; I don't see any
obvious problems outside that.  When I watch with firebug, Gerrit
sends me off on a 401 page to launchpad with a form it POSTs -- this
has a good-looking return_to

https://review.openstack.org/OpenID?gerrit.mode=SIGN_IN";>

I go through the login, and get a 302 from

 https://login.launchpad.net/NUxdqUgd7EbI5Njo/+decide

which correctly 302's me back to the return_to

 
https://review.openstack.org/OpenID?gerrit.mode=SIGN_IN&openid.assoc_handle=[...
 openid stuff follows ...]

Which then further 302's back to

  Location: https://review.openstack.org//

Which is followed to the incorrect URL.  So yeah, something within the
OpenID endpoint, I think.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Ian Wienand

On 03/02/2017 01:13 AM, Doug Hellmann wrote:

Tony identified caps in 5 OpenStack community projects (see [1]) as well
as powervm and python-jsonpath-rw-ext. Pull requests to those other
projects are linked from the bug [2].



[1] https://review.openstack.org/#/q/topic:bug/1668848


Am I nuts or was pbr itself the only one forgotten?

I filed

 https://review.openstack.org/#/c/440010

under the same topic

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-15 Thread Ian Wienand

On 03/16/2017 08:22 AM, Ben Nemec wrote:

Anyway, I don't know that anything is broken at the moment since I
believe dib-run-parts was brought over unchanged, but the retirement of
dib-utils was proposed in https://review.openstack.org/#/c/445617 and I
would like to resolve this question before we do anything like that.


The underlying motivation behind this was to isolate dib so we could
do things like re-implement dib-run-parts in POSIX shell (for busybox
environments) or Python, etc.

So my idea was we'd just leave dib-utils alone.  But it raises a good
point that both dib-utils and diskimage-builder are providing
dib-run-parts.  I think this is probably the main oversight here.

I've proposed [1], which makes dib use dib-run-parts from its private
library dir (rather than any globally installed version) and stops it
exporting the script, to avoid conflict with dib-utils.  I think this
should allow everything to live in harmony?

-i

[1] https://review.openstack.org/#/c/446285/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Ian Wienand
On 03/17/2017 03:46 AM, Steven Hardy wrote:
> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch
> (undercloud) [stack@undercloud ~]$ rpm -qf /bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch

/bin is a link to /usr/bin?  So I think this is the same, and this is
the dib-run-parts as packaged by dib-utils.

> (undercloud) [stack@undercloud ~]$ rpm -qf 
> /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts
> diskimage-builder-2.0.1-0.20170314023517.756923c.el7.centos.noarch

This is dib's "private" copy.  As I mentioned in the other mail, the
intention was to vendor this so we could re-write for dib-specific
needs if need be (given future requirements such as running in
restricted container environments).  I believe having dib exporting
this was an (my) oversight.  I have proposed [1] to remove this.

> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/local/bin/dib-run-parts
> file /usr/local/bin/dib-run-parts is not owned by any package

This would come from the image build process.  We copy dib-run-parts
into the chroot to run in-target scripts [2] but we never actually
remove it.  This seems to me to also be a bug and I've proposed [3] to
run this out of /tmp and clean it up.

> But the exciting thing from a rolling-out-bugfixes perspective is that the
> one actually running via o-r-c isn't either of the packaged versions (doh!)
> so we probably need to track down which element is installing it.
>
> This is a little OT for this thread (sorry), but hopefully provides more
> context around my concerns about creating another fork etc.

I don't want us to get a little too "left-pad" [4] with these ~95
lines of shell :) I think this stack clears things up.

tl;dr

 - dib version should be vendored; not in path & not exported [1]
 - unnecessary /usr/local/bin version removed [3]
 - dib-utils provides /usr/bin/ version

Cross-ports between the vendored dib version and dib-utils should be
trivial given what it is.  If dib wants to rewrite its vendored
version, or remove it completely, this will not affect anyone
depending on dib-utils.

Thanks,

-i

[1] https://review.openstack.org/446285 (dib: do not provide dib-run-parts)
[2] 
https://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/dib-run-parts/root.d/90-base-dib-run-parts
[3] https://review.openstack.org/446769 (dib: run chroot dib-run-parts out of 
/tmp)
[4] http://left-pad.io/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib] diskimage-builder v2 RC1 release; request for test

2017-03-20 Thread Ian Wienand

On 03/21/2017 03:10 AM, Mikhail Medvedev wrote:

On Fri, Mar 17, 2017 at 3:23 PM, Andre Florath  wrote:
Submitted the bug https://bugs.launchpad.net/diskimage-builder/+bug/1674402


Thanks; some updates there.


Would adding a third-party CI job help? I can put together a
functional job on ppc64. I assume we want a job based on
gate-dib-dsvm-functests-*?


As discussed in #openstack-dib we have this reporting on a group of
the functional tests.  My only concern is biting off more than we can
chew initially and essentially training people that the results are
unreliable.  Once we get over this initial hurdle we can look at
expanding it and voting.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Your next semi weekly gate status report

2017-03-29 Thread Ian Wienand

On 03/28/2017 08:57 AM, Clark Boylan wrote:

1. Libvirt crashes: http://status.openstack.org/elastic-recheck/#1643911
and http://status.openstack.org/elastic-recheck/#1646779



Libvirt is randomly crashing during the job which causes things to fail
(for obvious reasons). To address this will likely require someone with
experience debugging libvirt since it's most likely a bug isolated to
libvirt. We're looking for someone familiar with libvirt internals to
drive the effort to fix this issue,


Ok, from the bug [1] we're seeing malloc() corruption.

While I agree that a coredump is not that likely to help, I would also
like to come to that conclusion after inspecting a coredump :) I've
found things in the heap before that give clues as to what the real
problem is.

To this end, I've proposed [2] to keep coredumps.  It's a little
hackish but I think gets the job done. [3] enables this and saves any
dumps to the logs in d-g.

As suggested, running under valgrind would be great but probably
impractical until we narrow it down a little.  Another thing I've had
some success with is electric fence [4], which puts boundaries around
allocations so an out-of-bounds access faults at the time of access.  I've
proposed [5] to try this out, but it's not looking particularly
promising unfortunately.  I'm open to suggestions; for example, maybe
something like tcmalloc might give us a different failure and could be
another clue.  If we get something vaguely reliable here, our best bet
might be to run a parallel non-voting job on all changes to see what
we can pick up.

-i

[1] https://bugs.launchpad.net/nova/+bug/1643911
[2] https://review.openstack.org/451128
[3] https://review.openstack.org/451219
[4] http://elinux.org/Electric_Fence
[5] https://review.openstack.org/451136

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Switching gate testing to use Ubuntu Cloud Archive

2017-04-03 Thread Ian Wienand
On 04/04/2017 09:06 AM, Clark Boylan wrote:
> I have pushed a change to devstack [3] to enable using UCA which pulls
> in new Libvirt and mostly seems to work. I think we should consider
> switching to UCA as this may fix our Libvirt problems and if it doesn't,
> we will be closer to a version of Libvirt that upstream should be
> willing to fix.

I'm not 100% sure where this leaves the
devstack-plugin-additional-pkg-repos work [1]; although to be honest
I've always thought that was a little unspecific.  We also have
"devstack-plugin-tar-installer" [2] which was working on installing
libvirt from upstream tarballs IIRC but looks dead?  There was quite
some discussion about this in [3] at the time.

> Finally it is worth noting that we will get newer packages of other
> software as well, most notably openvswitch will be version 2.6.1 instead
> of 2.5.0.

Just to write down the centos side of things -- we have the RDO repo
installed on all centos CI nodes.  That brings in the centos virt-sig
rebuild of qemu (qemu-kvm-ev) which is definitely required to run
trunk nova [4].  We are just using libvirt from standard updates
(2.0.0 based).  RDO provides openvswitch, which is 2.6.1 too.

> Then have specific jobs (like devstack) explicitly opt into the UCA
> repo appropriate for them (if any).

The idea being, presumably, that when devstack branches, it is
essentially pinned to whatever version of UCA it was using at the time
for its lifespan.

I do wonder if installing the latest version on the base images
simplifies things (so that by default, "apt-get install libvirt" in
any context gets the UCA version).  To handle the above, a devstack
branch would have to essentially do the opposite -- override the
default to whatever version it is pinned to.  That is more work at branch
point, but it has the advantage that we don't have to constantly
communicate to people that they need to "opt in".
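For anyone wanting to try UCA on a held node or locally, opting in is
roughly (the release name here is just an example):

---
sudo add-apt-repository -y cloud-archive:ocata
sudo apt-get update
# sanity-check what would now be installed
apt-cache policy libvirt-bin openvswitch-switch
---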

-i

[1] 
https://git.openstack.org/cgit/openstack/devstack-plugin-additional-pkg-repos
[2] https://git.openstack.org/cgit/openstack/devstack-plugin-tar-installer
[3] https://review.openstack.org/#/c/108714/
[4] http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Stop enabling EPEL mirror by default

2017-04-04 Thread Ian Wienand
On 04/05/2017 03:02 AM, Paul Belanger wrote:
> Recently we've been running into some issues keeping our EPEL mirror
> properly sync'd. We are working to fix this, however we'd also like
> to do the following:

>   Stop enabling EPEL mirror by default
>   https://review.openstack.org/#/c/453222/

> For the most part, we enable EPEL for our image build process, this
> to install haveged.  However, it is also likely the majority of
> centos-7 projects don't actually need EPEL.  I know specifically
> both RDO and TripleO avoid using the EPEL repository because of how
> unstable it is.

I agree this is the step to turn it off in our gate, but I've been
trying to excise this so we move to a white-list method during builds,
which is more complicated.  This needs to be done, however, so that
3rd-party CIs that don't use our mirror scripts don't get EPEL hanging
around from the build either.

I'd appreciate reviews

Firstly, we need to ensure the image build EPEL dependencies we have
are flexible to changes in default status.

 * https://review.openstack.org/439294 : don't install ccache
 * https://review.openstack.org/439911 : allow "--enablerepo" options
 for haveged install
 * https://review.openstack.org/439917 : install haveged from EPEL

Then we need a way to install EPEL, but disabled, during image builds

 * https://review.openstack.org/439926 : Add flag to disable EPEL

Then stop installing EPEL as part of the puppet install, and switch to
installing it from dib in disabled state

 * https://review.openstack.org/453322 : Add epel element (with disabled flag)
 * https://review.openstack.org/439248 : Don't install EPEL during puppet

At this point, our base images should be coming up with only
whitelisted EPEL packages (haveged, unless I find anything else I've
missed) and the repo disabled.
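For reference, the install-disabled pattern I have in mind looks
roughly like this on a centos-7 node (sketch only; the reviews above
are authoritative):

---
sudo yum install -y epel-release yum-utils
sudo yum-config-manager --disable epel
# anything we explicitly want from EPEL is then an opt-in
sudo yum install -y --enablerepo=epel haveged
---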

-i

p.s. tangential, but related:

 *  https://review.openstack.org/453325 : use Centos openstack repos, not RDO

This could probably also be moved into DIB as an element, if
we feel strongly about it (or into infra-package-needs ... but I wasn't
100% sure that's early enough yet)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [diskimage-builder] Restarting bi-weekly meeting

2017-05-25 Thread Ian Wienand

Hi,

We've let this meeting [1] lapse, to the detriment of our
communication.  I will restart it, starting next week [2].  Of course
agenda items are welcome; otherwise we will use it as a session to make
sure patches are moving in the right direction.

If the matter is urgent, and not getting attention, an agenda item in
the weekly infra meeting would be appropriate.

Ping me off-list if you're interested but this time doesn't work.  If
there are a few of you, we can move it.

Thanks,

-i

[1] http://eavesdrop.openstack.org/#Diskimage_Builder_Team_Meeting
[2] https://review.openstack.org/468270

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] openstack-ubuntu-testing-bot -- please turn off

2017-06-08 Thread Ian Wienand

Hi,

If you know of someone in control of whatever is trying to use this
account, running on 91.189.91.27 (a canonical IP), can you please turn
it off.  It's in a tight loop failing to connect to gerrit, which
probably isn't good for either end :)

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] shutting down pholio.openstack.org

2017-06-12 Thread Ian Wienand
Hello,

We will be shutting down pholio.openstack.org in the next few days.

As discussed at the last #infra meeting [1], in short, "the times they
are a changin'" and the Pholio services have not been required.

Of course the original deployment puppet, etc, remains (see [2]), so
you may reach out to the infra team if this is required in the future.

Thanks,

-i

[1] 
http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-06-19.03.html
[2] https://specs.openstack.org/openstack-infra/infra-specs/specs/pholio.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] diskimage builder works for trusty but not for xenial

2017-06-21 Thread Ian Wienand

On 06/21/2017 04:44 PM, Ignazio Cassano wrote:

* Connection #0 to host cloud-images.ubuntu.com left intact
Downloaded and cached
http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz,
having forced upstream caches to revalidate
xenial-server-cloudimg-amd64-root.tar.gz: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match



Are there any problems on http://cloud-images.ubuntu.com ?


There was [1] which is apparently fixed.

As Paul mentioned, the -minimal builds take a different approach and
build the image from debootstrap, rather than modifying the upstream
image.  They are generally well tested, just as a side-effect of infra
relying on them daily.  You can use DIB_DISTRIBUTION_MIRROR to point
that at a local mirror and eliminate another source of instability
(however, that leaves the mirror configured in the final image ... a
known issue.  Contributions welcome :)
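As a rough example of the -minimal path (the mirror URL is obviously
just a placeholder):

---
export DIB_RELEASE=xenial
export DIB_DISTRIBUTION_MIRROR=http://mirror.example.com/ubuntu
disk-image-create -o ubuntu-minimal-xenial ubuntu-minimal vm
---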

-i

[1] https://bugs.launchpad.net/cloud-images/+bug/1699396


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][nova] Corrupt nova-specs repo

2017-06-29 Thread Ian Wienand
Hi,

Unfortunately it seems the nova-specs repo has undergone some
corruption, currently manifesting itself in an inability to be pushed
to github for replication.

Upon examination, it seems there's a problem with a symlink and
probably jgit messing things up making duplicate files.  I have filed
a gerrit bug at [1] (although it's probably jgit, but it's just a
start).

Anyway, that leaves us the problem of cleaning up the repo into a
pushable state.  Here's my suggestion after some investigation:

The following are corrupt

---
$ git fsck
Checking object directories: 100% (256/256), done.
error in tree a494151b3c661dd9b6edc7b31764a2e2995bd60c: contains duplicate file 
entries
error in tree 26057d370ac90bc01c1cfa56be8bd381618e2b3e: contains duplicate file 
entries
error in tree 57423f5165f0f1f939e2ce141659234cbb5dbd4e: contains duplicate file 
entries
error in tree 05fd99ef56cd24c403424ac8d8183fea33399970: contains duplicate file 
entries
---

After some detective work [2], I related all these bad objects to the
refs that hold them.  It looks as follows:

---
fsck-bad: a494151b3c661dd9b6edc7b31764a2e2995bd60c
 -> 5fa34732b45f4afff3950253c74d7df11b0a4a36 refs/changes/26/463526/9

fsck-bad: 26057d370ac90bc01c1cfa56be8bd381618e2b3e
 -> 47128a23c2aad12761aa0df5742206806c1dfbb8 refs/changes/26/463526/8
 -> 7cf8302eb30b722a00b4d7e08b49e9b1cd5aacf4 refs/changes/26/463526/7
 -> 818dc055b971cd2b78260fd17d0b90652fb276fb refs/changes/26/463526/6

fsck-bad: 57423f5165f0f1f939e2ce141659234cbb5dbd4e

 -> 25bd72248682b584fb88cc01061e60a5a620463f refs/changes/26/463526/3
 -> c7e385eaa4f45b92e9e51dd2c49e799ab182ac2c refs/changes/26/463526/4
 -> 4b8870bbeda2320564d1a66580ba6e44fbd9a4a2 refs/changes/26/463526/5

fsck-bad: 05fd99ef56cd24c403424ac8d8183fea33399970
 -> e8161966418dc820a4499460b664d87864c4ce24 refs/changes/26/463526/2
---

So you may notice this is refs/changes/26/463526/[2-9]

Just deleting these refs and expiring the objects might be the easiest
way to go here, and seems to get things purged and fix up fsck

---
$ for i in `seq 2 9`; do
>  git update-ref -d refs/changes/26/463526/$i
> done

$ git reflog expire --expire=now --all && git gc --prune=now --aggressive
Counting objects: 44756, done.
Delta compression using up to 16 threads.
Compressing objects: 100% (43850/43850), done.
Writing objects: 100% (44756/44756), done.
Total 44756 (delta 31885), reused 12846 (delta 0)

$ git fsck
Checking object directories: 100% (256/256), done.
Checking objects: 100% (44756/44756), done.
---

I'm thinking if we then force push that to github, we're pretty much
OK ... a few intermediate reviews will be gone but I don't think
they're important in this context.

I had a quick play with "git ls-tree", editing the file, "git mktree",
"git replace" and then trying to use filter-branch, but couldn't get
it to work.  Suggestions welcome; I would say you can play with the
repo from [1].

Thanks,

-i

[1] https://bugs.chromium.org/p/gerrit/issues/detail?id=6622
[2] "git log --all --format=raw --raw -t --no-abbrev" and search for
the change sha, then find it in "git show-refs"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Ian Wienand
Hi,

The removal of the mysql.qcow2 image [1] had a flow-on effect, noticed
first by Paul in [2]: the tools/image_list.sh "sanity" check was
not updated, leading to DIB builds failing in a most unhelpful way as
they try to cache the images for CI builds.

So while [2] fixes the problem, one complication here is that the
caching script [3] loops through the open devstack branches and tries
to collect the images to cache.

Now it seems we hadn't closed the liberty or mitaka branches.  This
causes a problem, because the old branches refer to the old image, but
we can't actually commit a fix to change them because those branches
are broken (see, for example, [4]).

I have taken the liberty of EOL-ing stable/liberty and stable/mitaka
for devstack.  I get the feeling it was just forgotten at the time.
Comments in [4] support this theory.  I have also taken the liberty of
approving backports of the fix to newton and ocata branches [5],[6].

A few 3rd-party CI people using dib have noticed this failure.  As the
trio of [4],[5],[6] move through, your builds should start working
again.

Thanks,

-i

[1] https://review.openstack.org/482600
[2] https://review.openstack.org/484001
[3] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-devstack/extra-data.d/55-cache-devstack-repos
[4] https://review.openstack.org/482604
[5] https://review.openstack.org/484299
[6] https://review.openstack.org/484298

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] DIB builds after mysql.qcow2 removal

2017-07-17 Thread Ian Wienand

On 07/18/2017 10:01 AM, Tony Breeds wrote:

It wasn't forgotten as such, there are jobs still using it/them.  If
keeping the branches around causes bigger problems then EOLing them is
fine.  I'll try to generate a list of the affected projects/jobs and
turn them off.


Thanks; yeah this was pointed out to me later.

I think any jobs can use the -eol tag rather than the
branch, if required (yes, maybe easier said than done depending on how
many layers of magic there are :).  There doesn't seem to be much
point in branches we can't commit to due to broken CI, and I doubt
anyone is keen to maintain them.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][infra] Help needed! high gate failure rate

2017-08-10 Thread Ian Wienand
On 08/10/2017 06:18 PM, Rico Lin wrote:
> We're facing a high failure rate in Heat's gates [1], four of our gate
> suffering with fail rate from 6 to near 20% in 14 days. which makes most of
> our patch stuck with the gate.

There has been a confluence of things causing some problems recently.
The loss of OSIC has distributed more load over everything else, and
we have seen an increase in job timeouts and intermittent networking
issues (especially if you're downloading large things from remote
sites).  There have also been some issues with the mirror in rax-ord
[1].

> gate-heat-dsvm-functional-convg-mysql-lbaasv2-ubuntu-xenial(19.67%)
> gate-heat-dsvm-functional-convg-mysql-lbaasv2-non-apache-ubuntu-xenia(9.09%)
> gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial(8.47%)
> gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial(6.00%)

> We still try to find out what's the cause but (IMO,) seems it might be some
> thing wrong with our infra. We need some help from infra team, to know if
> any clue on this failure rate?

The reality is you're just going to have to triage this and be a *lot*
more specific with issues.  I find opening an etherpad and going
through the failures one-by-one helpful (e.g. I keep [2] for centos
jobs I'm interested in).

Looking at the top of the console.html log you'll find the host and
provider/region stamped in there.  If it's timeouts or network issues,
reporting the time, provider and region of the failing jobs to infra
will help.  Finding patterns is the first step to understanding what
needs fixing.
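
For example, something like this over a pile of downloaded console
logs can surface patterns quickly (a rough sketch; adjust the pattern
to whatever the job header actually prints):

---
# hypothetical: failing-job console logs saved under ./failures/
grep -ihE 'provider|region|hostname' failures/*.html | sort | uniq -c | sort -rn | head
---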

If it's due to issues with remote transfers, we can look at either
adding specific things to mirrors (containers, images, packages are
all things we've added recently) or adding a caching reverse-proxy for
them (see [3] and [4] for recent examples).

Questions in #openstack-infra will usually get a helpful response too

Good luck :)

-i

[1] https://bugs.launchpad.net/openstack-gate/+bug/1708707/
[2] https://etherpad.openstack.org/p/centos7-dsvm-triage
[3] https://review.openstack.org/491800
[4] https://review.openstack.org/491466

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra]Plan to support Python 3.6 ?

2017-08-10 Thread Ian Wienand

On 08/09/2017 11:25 PM, ChangBo Guo wrote:

We received some Python 3.6 related bugs recently [1][2], which made me
wonder about the plan to support Python 3.6 in OpenStack.  Python 3.6 was
released on December 23, 2016, and has some behavioral differences from
Python 3.5 [3].  Having talked with cdent on IRC, I'd like to discuss this
on the mailing list and suggest a discussion at the PTG [3].

1. When do we plan to support Python 3.6?

2. What's the plan or process?


If you really want to live on the edge, Fedora 26 CI nodes are
available and include Python 3.6.  I'll be happy to help if you're not
familiar with using different nodes for jobs, or with any issues.

I've had devstack going in experimental successfully.  I probably
wouldn't recommend throwing it in as a voting gate job straight away,
but everything should be there :)

-i


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Python 3.6 testing is available on Fedora 26 nodes

2017-08-24 Thread Ian Wienand
Hello,

In a recent discussion [1] I mentioned we could, in theory, use Fedora
26 for Python 3.6 testing (3.6.2, to be exact).  After a few offline
queries we have put theory into practice, sorted out remaining issues
and things are working.

For unit testing (tox), you can use the
'gate-{name}-python36-{node}-nv' job template with fedora-26 nodes.
For an example, see [2] (which, I'm happy to report, found a real
issue [3] :).  You may need to modify your bindep.txt files to install
correct build packages for RPM platforms; in terms of general
portability this is probably not a bad thing anyway.
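
For instance (an illustrative sketch, not a drop-in list for any
particular project), build dependencies usually want paired entries so
both the Debian- and RPM-based nodes resolve something sensible:

---
# bindep.txt
python3-dev [platform:dpkg]
python3-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
---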

I have an up/down devstack test working with a minor change [4].  I
will work on getting this more stable and more complete, but if this
is of interest, reach out.  In general, I track the centos & fedora
jobs fairly closely at [5] to try and keep up with any systemic
issues.

Although it is not exactly trivial, there are fairly complete
instructions in [6] to help build a Fedora image that looks like
the infra ones for testing purposes.  You can also reach out and we
can do things like place failed nodes on hold if there are hard to
debug issues.

Thanks,

-i

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120888.html
[2] 
https://git.openstack.org/cgit/openstack-infra/project-config/commit/?id=5fe3ba95616136709a319ae1cd3beda38a299a13
[3] https://review.openstack.org/496054
[4] https://review.openstack.org/496098
[5] http://people.redhat.com/~iwienand/devstack-status/
[6] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [diskimage-builder] Does anyone use "fedora" target?

2017-08-29 Thread Ian Wienand

Hi,

The "fedora" element -- the one that downloads the upstream .qcow2 and
re-packages it -- is currently broken as the links we use have
disappeared [1].  Even allowing for this, it's still broken due to
changes in the kernel install scripts [2].  AFAICT, the only thing
noticing this is our CI.

fedora-minimal takes a different approach of building the system
within a blank chroot.  It's what we use to create the upstream
images.

I believe the octavia jobs switched to fedora-minimal?

Is there anyone still using these image-based jobs?  Is there any
reason why you can't use fedora-minimal?  I don't really see this as
being that useful, and our best path forward might be just to retire
it.
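
For anyone wondering what the switch involves, it is usually just the
element name on the command line (illustrative only; your element list
and environment variables will differ):

---
# image-based element (the one that is currently broken)
disk-image-create -o test.qcow2 vm fedora
# chroot-based build
disk-image-create -o test.qcow2 vm fedora-minimal
---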

Thanks,

-i

[1] https://review.openstack.org/497734
[2] https://bugs.launchpad.net/diskimage-builder/+bug/1713381

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/19/2017 11:03 PM, Jeremy Stanley wrote:

On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]

The jobs do a 120-220 second apt-get install, and the packages defined in
files/debs/general are missing from the images before the job starts.



Is the time spent at this stage mostly while downloading package
files (which is what that used to alleviate) or is it more while
retrieving indices or installing the downloaded packages (things
having them pre-retrieved on the images never solved anyway)?


As you're both aware, but others may not be, at the end of the logs
devstack does keep a timing overview that looks something like

=========================
DevStack Component Timing
=========================
Total runtime        1352

run_process            15
test_with_retry         4
apt-get-update          2
pip_install           270
osc                   365
wait_for_service       29
dbsync                 23
apt-get               137
=========================

That doesn't break things down into download vs. install, but apt does
have a download summary that can be grepped for:

---
$ cat devstacklog.txt.gz | grep Fetched
2017-09-19 17:52:45.808 | Fetched 39.3 MB in 1s (26.3 MB/s)
2017-09-19 17:53:41.115 | Fetched 185 kB in 0s (3,222 kB/s)
2017-09-19 17:54:16.365 | Fetched 23.5 MB in 1s (21.1 MB/s)
2017-09-19 17:54:25.779 | Fetched 18.3 MB in 0s (35.6 MB/s)
2017-09-19 17:54:39.439 | Fetched 59.1 kB in 0s (0 B/s)
2017-09-19 17:54:40.986 | Fetched 2,128 kB in 0s (40.0 MB/s)
2017-09-19 17:57:37.190 | Fetched 333 kB in 0s (1,679 kB/s)
2017-09-19 17:58:17.592 | Fetched 50.5 MB in 2s (18.1 MB/s)
2017-09-19 17:58:26.947 | Fetched 5,829 kB in 0s (15.5 MB/s)
2017-09-19 17:58:49.571 | Fetched 5,065 kB in 1s (3,719 kB/s)
2017-09-19 17:59:25.438 | Fetched 9,758 kB in 0s (44.5 MB/s)
2017-09-19 18:00:14.373 | Fetched 77.5 kB in 0s (286 kB/s)
---

As mentioned, we set up the package manager to point to a region-local
mirror during node bringup.  Depending on the i/o situation, it is
probably just as fast as coming off disk :) Note (also as mentioned)
these were never pre-installed, just pre-downloaded to an on-disk
cache area (as an aside, I don't think dnf was ever really happy with
that situation and kept being too smart and clearing its caches).

If you're feeling regexy you could maybe do something similar with the
pip "Collecting" bits in the logs ... one idea for investigation down
that path is whether we could save time by somehow collecting larger
batches of requirements and doing fewer pip invocations.
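
Something along these lines (an untested sketch; adjust for however
the log prefix looks) gives a rough picture of what pip is pulling in
and how often:

---
zcat devstacklog.txt.gz | grep -o 'Collecting [^ (]*' | sort | uniq -c | sort -rn | head
---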

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/20/2017 09:30 AM, David Moreau Simard wrote:

At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose?


... and we can put -dsvm- in the jobs names to indicate it should run
on these nodes :)

Older hands than me will remember even more issues, but a "thicker"
base image has traditionally just led to a lot more corners for
corner-cases to hide in.  We saw this all the time with "snapshot"
images, where the upstream images we were based on would change ever
so slightly and break things, leading to diskimage-builder and the
-minimal build approach.

That said, it's not impossible to imagine in a zuulv3 world: we are no
longer caching all of git and have considerably smaller images,
nodepool has a scheduler that accounts for flavor sizes and could
conceivably understand something similar for images, and we're
building with discrete elements that could sanely "bolt on" things
like a list-of-packages install to daily builds.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

Hi,

There's a few issues with devstack and the new zuulv3 environment

LIBS_FROM_GIT is broken due to the new repos not having a remote set
up, meaning "pip freeze" doesn't give us useful output.  [1] just
disables the test as a quick fix for this; [2] is a possible real fix
but should be tried a bit more carefully in case there are corners
I've missed.  This will affect other projects too.
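
As a rough illustration of the failure mode (the path and project are
hypothetical, and this is not necessarily how [2] fixes it): with no
remote configured, pip freeze has no URL to report for an editable
install, so devstack can't match the installed library back to its git
checkout.  Adding a remote by hand is one way to make the output
useful again:

---
cd /opt/stack/oslo.config       # hypothetical checkout location
git remote -v                   # zuulv3-prepared repos may have no remote
git remote add origin https://git.openstack.org/openstack/oslo.config
pip freeze | grep oslo.config   # editable installs now report the git URL
---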

However, before we can get this in, we need to fix the gate.  The
"updown" tests have missed a couple of requirement projects due to
them setting flags that were not detected during migration.  [3] is a
fix for that and seems to work.

For some reason, the legacy-tempest-dsvm-nnet job is running against
master, and failing as nova-net is deprecated there.  I'm clutching at
straws to understand this one, as it seems like the branch filters are
set up correctly; [4] is one guess?

I'm not aware of issues other than these at this time

-i

[1] https://review.openstack.org/508344
[2] https://review.openstack.org/508366
[3] https://review.openstack.org/508396
[4] https://review.openstack.org/508405

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DIB] DIB Meetings

2017-10-05 Thread Ian Wienand

On 10/06/2017 02:19 AM, Andreas Scheuring wrote:

seems like there is some confusing information about the DIB
meetings in the wiki [1]. The meeting is alternating between 15:00
and 20:00 UTC.  But whenever the text says 15:00 UTC, the link
points to a 20:00 UTC worldclock site and vice versa.



What is the correct meeting time? At least today at 15:00 UTC no one
was there...


Sorry about that; the idea was to alternate every 2 weeks between an
EU time and an APAC/USA time.  But as you noted, I pasted everything in
backwards, causing great confusion :) Thanks to tonyb we're fixed up
now.


I put an item on the agenda for today's meeting but can't make 20:00
UTC today. It would be great if you could briefly discuss it and
provide feedback on the patch (it's about adding s390x support to
DIB). I'm also open for any offline discussions.


Sorry, with everything going on this fell by the wayside a bit.  I'll comment there.

Thanks,

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Rollout Update - devstack-gate issues edition

2017-10-11 Thread Ian Wienand
There are still significant issues:

- logs issues

Should be behind us.  The logs partition ran out of inodes, causing
log upload failures.  Pruning jobs should have rectified this.

- Ubuntu package issues

You may notice a range of issues with Ubuntu packages.  The root cause
is that our mirror is behind due to a broken reprepro.  Unfortunately, we
build our daily images against an external upstream mirror, so they
have been built using later packages than our un-updated region
mirrors provide, which confuses apt greatly.  There are some debugging
notes on reprepro at [1], but I have to conclude the .db files are corrupt
and I have no idea how to recreate these other than to start again.

I think the most expedient solution here will be to turn /ubuntu on
mirrors into a caching reverse proxy for upstream.  However:

- system-config breakage

The system-config gate is broken due to an old pip pin, fixed with [2].
However, despite that fix merging several hours ago, zuulv2 doesn't seem
to want to reload and pick it up.  I have a suspicion that because it
was merged by zuulv3, zuulv2 may have missed it.  I'm not sure, and I don't
think even turning the jobs -nv will help.

- devstack-gate cache copying

This means the original fix for the devstack-gate cache issues [3]
remains unmerged at this point.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-10-12.log.html#t2017-10-12T04:04:16
[2] https://review.openstack.org/511360
[3] https://review.openstack.org/511260

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Rollout Update - devstack-gate issues edition

2017-10-12 Thread Ian Wienand

On 10/12/2017 04:28 PM, Ian Wienand wrote:

- logs issues

Should be behind us.  The logs partition ran out of inodes, causing
log upload failures.  Pruning jobs should have rectified this.


This time it's true :)  But please think about this with your jobs, and
don't upload hundreds of little files unnecessarily.


- Ubuntu package issues

You may notice a range of issues with Ubuntu packages.  The root cause
is that our mirror is behind due to a broken reprepro.


Thanks to the efforts of jeblair and pabelanger, the ubuntu mirror
has been restored.  There should be no more issues relating to out
of date mirrors.


- system-config breakage


resolved


- devstack-gate cache copying


resolved

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Short gerrit / zuul outage 2017-10-20 20:00UTC

2017-10-19 Thread Ian Wienand

Hello,

We plan a short outage (<30 minutes) of gerrit and zuul on 2017-10-20
20:00UTC to facilitate project rename requests.

In-flight jobs should be restarted, but if something does go missing, a
"recheck" comment will work.

Thanks,

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Short gerrit / zuul outage 2017-10-20 20:00UTC

2017-10-20 Thread Ian Wienand

On 10/20/2017 03:46 PM, Ian Wienand wrote:

We plan a short outage (<30 minutes) of gerrit and zuul on 2017-10-20
20:00UTC to facilitate project rename requests.


Note this has been postponed to a future (TBD) date

Thanks,

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Distutils][pbr] Announcement: Pip 10 is coming, and will move all internal APIs

2017-10-22 Thread Ian Wienand

On 10/21/2017 07:14 AM, Clark Boylan wrote:

The current issue this change is facing can be seen at
http://logs.openstack.org/25/513825/4/check/legacy-tempest-dsvm-py35/c31deb2/logs/devstacklog.txt.gz#_2017-10-20_20_07_54_838.
The tl;dr is that for distutils installed packages (basically all the
distro installed python packages) pip refuses to uninstall them in order
to perform upgrades because it can't reliably determine where all the
files are. I think this is a new pip 10 behavior.

In the general case I think this means we can not rely on global pip
installs anymore. This may be a good thing to bring up with upstream
PyPA as I expect it will break a lot of people in a lot of places (it
will break infra for example too).


deja-vu!  pip 8 tried this and quickly reverted.  I wrote a long email
with all the details, but then figured that wasn't going to help much,
so I translated it into [1].

-i

[1] https://github.com/pypa/pip/issues/4805

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Distutils][pbr][devstack][qa] Announcement: Pip 10 is coming, and will move all internal APIs

2017-10-22 Thread Ian Wienand

On 10/22/2017 12:18 AM, Jeremy Stanley wrote:

Right, on Debian/Ubuntu it's not too terrible (cloud-init's
dependencies are usually the biggest issue there and we manage to
avoid them by building our own images with no cloud-init), but on
Red Hat derivatives there are a lot of deep operating system
internals built on top of packaged Python libraries which simply
can't be uninstalled cleanly nor safely.


Also note that even if a package can be uninstalled, we have often had
problems with it coming back and overwriting the pip-installed version,
which often leads to very obscure problems.  For this reason, in
various bits of devstack/devstack-gate/dib's pip install handling we
often install and pin packages to let pip overwrite them.
-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Building Openstack Trove Images

2017-11-07 Thread Ian Wienand
On 11/07/2017 05:40 PM, Ritesh Vishwakarma wrote:
> as the *dib-lint* file is there instead of the mentioned
> *disk-image-create*, and when executed it just verifies the other
> elements.

Those instructions unfortunately look out of date for master
diskimage-builder.  I will try to find a minute to go through them and
update them later.

You will probably have a lot more success installing diskimage-builder
into a virtualenv; see [1] ... then activate the virtualenv and use
disk-image-create from there.  Likely the rest will work.
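
Something like the following is usually enough to get going (a minimal
sketch; the element list is illustrative rather than Trove-specific):

---
virtualenv dib
. dib/bin/activate
pip install diskimage-builder
disk-image-create -o test-image.qcow2 vm ubuntu-minimal
---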

If diskimage-builder is the problem, feel free to jump into
#openstack-dib (best during .au hours to catch me) and we can help.

[1] 
https://docs.openstack.org/diskimage-builder/latest/user_guide/installation.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][all] Removal of packages from bindep-fallback

2017-11-15 Thread Ian Wienand
Hello,

Some time ago we started the process of moving towards projects being
more explicit about their binary dependencies using bindep [1].

To facilitate the transition, we created a "fallback" set of
dependencies [2] which are installed when a project does not specify
its own bindep dependencies.  This essentially replicated the rather
ad-hoc environment provided by CI images before we started the
transition.

This list has acquired a few packages that cause problems in various
situations today, particularly packages that aren't available in the
increasing number of distributions we provide, or packages that come
from alternative repositories.

To this end, [3,4] proposes the removal of

 liberasurecode-*
 mongodb-*
 python-zmq
 redis
 zookeeper
 ruby-*

from the fallback packages.  This has a small potential to affect some
jobs that tacitly rely on these packages.

NOTE: this does *not* affect devstack jobs (devstack manages its own
dependencies outside bindep).  If you want the packages back, it's just a
matter of putting them into the bindep file in your project (and as a
bonus, you have better dependency descriptions for your code).
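
For example, a project whose tests still expect one of the removed
services might add something like this (package names per platform may
need adjusting):

---
# bindep.txt
redis [platform:rpm]
redis-server [platform:dpkg]
zookeeperd [platform:dpkg]
---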

We should be able to then remove centos-release-openstack-* from our
centos base images too [5], which will make life easier for projects
such as triple-o that have to work around it.

If you have concerns, please reach out either via mail or in
#openstack-infra

Thank you,

-i

[1] https://docs.openstack.org/infra/bindep/
[2] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt
[3] https://review.openstack.org/519533
[4] https://review.openstack.org/519534
[5] https://review.openstack.org/519535

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][vitrage] Networkx version 2.0

2018-01-07 Thread Ian Wienand

On 12/21/2017 02:51 AM, Afek, Ifat (Nokia - IL/Kfar Sava) wrote:

There is an open bug in launchpad about the new release of Networkx
2.0, which is backward incompatible with the 1.x versions [1].


From diskimage-builder's POV, we can pretty much switch whenever we're
ready; it's just a matter of merging [2] after constraints is bumped.

It's kind of annoying supporting both versions at once in the code.
If we've got changes ready to go for all the related projects in [1],
bumping *should* be minimal disruption.

-i


[1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576

[2] https://review.openstack.org/#/c/506524/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][requirements] CentOS libvirt versus newton/ocata libvirt-python

2018-01-11 Thread Ian Wienand
Hi,

So I guess since CentOS included libvirt 3.2 (7-1708, or around RHEL
7.4), it's been incompatible with libvirt-python requirements of 2.1.0
in newton [1] and 2.5.0 in ocata [2] (pike, at 3.5.0, works).

Do we want to do anything about this?  I can think of several options

* bump the libvirt-python versions on older branches

* Create an older centos image (can't imagine we have the person
  bandwidth to maintain this)

* Hack something in devstack (seems rather pointless to test
  something so far outside deployments).

* Turn off CentOS testing for old devstack branches

None are particularly appealing...

(I'm sorry if this has been discussed, I have great déjà vu about it,
maybe we were talking about it at summit or something).

-i

[1] 
http://logs.openstack.org/48/531248/2/check/legacy-tempest-dsvm-neutron-full-centos-7/80fa903/logs/devstacklog.txt.gz#_2018-01-09_05_14_40_960
[2] 
http://logs.openstack.org/50/531250/2/check/legacy-tempest-dsvm-neutron-full-centos-7/1c711f5/logs/devstacklog.txt.gz#_2018-01-09_20_43_08_833

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][requirements] CentOS libvirt versus newton/ocata libvirt-python

2018-01-11 Thread Ian Wienand

On 01/12/2018 02:53 PM, Matthew Thode wrote:

First, about newton, it's dead (2017-10-11).


Yeah, there were a few opt-outs, which is why I think devstack still
runs it.  Not worth a lot of effort.


Next, about ocata, it looks like it can support newer libvirt, but
just because a distro updated a library doesn't mean we have to
update.  IIRC, for ubuntu they use cloud-archives to get the right
version of libvirt; does something like that exist for
centos/redhat?


Well, cloud-archives is backports of more recent things, whereas I
think we're in a situation of having too-recent libraries in the base
platform.  The CentOS 7.3 vs 7.4 situation is a little more subtle
than Trusty vs Xenial, say, but fundamentally the same I guess.  The
answer may be "Ocata is not supported on 7.4".

p.s. I hope I'm understanding the python-libvirt compat story
correctly.  AIUI any newer python-binding release will build against
older versions of libvirt.  But an old version of python-libvirt may
not build against a newer release of the C libraries?
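
A quick sanity check on a node looks something like this (package and
file names are the CentOS ones and are only illustrative):

---
# the C library and headers the distro ships
rpm -q libvirt-libs libvirt-devel
# what the branch's constraints want (from a requirements repo checkout)
grep '^libvirt-python' upper-constraints.txt
---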

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >