Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-20 Thread Jeremy Stanley
On 2016-06-08 23:08:16 +1000 (+1000), Joshua Hesketh wrote:
> On Mon, Jun 6, 2016 at 9:21 AM, Jeremy Stanley  wrote:
> [...]
> > Store Build Logs in Swift
> [...]
> > We should remove the original spec from our priority list (since
> > that's basically already ceased to be an actual priority), and
> > probably supersede it with the AFS proposal.
[...]
> Additionally the urgency of this spec seems to have been reduced (due to
> limiting the retention on logs). We should perhaps reconsider if it's a
> priority spec or not after we've decided on a path forward.

That makes sense, though I think we can agree the original spec (and
accompanying implementation) has ceased to be treated as a priority
so it's a bit disingenuous to leave it on our priority list.

Anyway, I've pushed a cleanup/update change at
https://review.openstack.org/331903 which:

  * removes logs-in-swift
  * replaces maniphest with task-tracker
  * adds nodepool-zookeeper-workers due to its coupling with zuulv3
  * updates the Gerrit query string/URL accordingly

I'll put it on the meeting agenda now for formal council vote this
week. If approved, this leaves us at 6 priority specs. Considering
that we were pretty well saturated at 8 last cycle, we can also
discuss adding a couple more that we expect to spend significant
time on over the remainder of this cycle, or whether sticking with
those 6 is better for focusing our efforts.
-- 
Jeremy Stanley


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-21 Thread Jeremy Stanley
On 2016-06-21 17:22:15 +1000 (+1000), Joshua Hesketh wrote:
> Good update, thanks fungi.
> 
> Just a thought, given the pain we felt yesterday when static.o.o was down,
> we should consider if a log solution needs to be a priority. Using afs (or
> swift) could allow us to scale static.o.o horizontally.

Yes, and it's not the only lengthy static.o.o outage we've had over
the past month either. I agree that solving it is a good candidate
for prioritization, but we need to go back and choose between a
couple of options on the table there.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-21 Thread Jeremy Stanley
On 2016-06-21 18:16:49 +0200 (+0200), Thierry Carrez wrote:
> It hurts a lot when it's down because of so many services being served from
> it. We could also separate the published websites (status.o.o,
> governance.o.o, security.o.o, releases.o.o...) which require limited
> resources and grow slowly, from the more resource-hungry storage sites
> (logs.o.o, tarballs.o.o...).

Agreed, that's actually a pretty trivial change, comparatively
speaking.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Infra priorities and spec cleanup

2016-06-21 Thread Jeremy Stanley
On 2016-06-21 17:34:07 + (+), Jeremy Stanley wrote:
> On 2016-06-21 18:16:49 +0200 (+0200), Thierry Carrez wrote:
> > It hurts a lot when it's down because of so many services being served from
> > it. We could also separate the published websites (status.o.o,
> > governance.o.o, security.o.o, releases.o.o...) which require limited
> > resources and grow slowly, from the more resource-hungry storage sites
> > (logs.o.o, tarballs.o.o...).
> 
> Agreed, that's actually a pretty trivial change, comparatively
> speaking.

Oh, though it bears mention that the most recent extended outage
(and by far longest we've experienced in a while) would have been
just as bad either way. It had nothing to do with recovering
attached volumes/filesystems, but rather was a host outage at the
provider entirely outside our sphere of control. That sort of issue
can potentially happen with any of our servers/services no matter
how much we split them up.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Please change owner of fuel-plugin-xenserver-ci

2016-06-30 Thread Jeremy Stanley
On 2016-06-29 20:29:08 -0700 (-0700), Elizabeth K. Joseph wrote:
> On Wed, Jun 29, 2016 at 8:32 AM, Bob Ball  wrote:
> > The fuel-plugin-xenserver-ci gerrit group
> > (https://review.openstack.org/#/admin/groups/1450,info) was created as a
> > result of https://review.openstack.org/#/c/334558/ - please could the owner
> > of the group be updated to be fuel-plugin-xenserver-core so we can modify
> > the members of the fuel-plugin-xenserver-ci group.
> 
> I don't think we tend to change the *owner* of groups, but since this
> is your group from your change, I've gone ahead and added you as a
> member of this group so you can modify the members of it.

The ...CI groups are for granting voting privs to third-party CI
accounts on a per-project basis. As such, we generally make them
owned by the corresponding core review group rather than self-owned
(because you don't want CI operators adding voting rights for other
CI operators).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ubuntu cloud archive (UCA) AFS mirror now live

2016-06-30 Thread Jeremy Stanley
On 2016-06-30 07:29:38 -0400 (-0400), David Moreau Simard wrote:
> At what frequency does the upstream UCA repository change and how
> quickly does the AFS repository pick them up ?

I don't know what Ubuntu's update frequency is, but the cron jobs
defined in our openstack_project::mirror_update class fire every 2
hours currently.

http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/mirror_update.pp
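
As a rough illustration only (the class, script names and paths below
are hypothetical, not the actual openstack_project::mirror_update
code), a 2-hourly Puppet cron resource looks something like:

```puppet
# Hypothetical sketch of a 2-hourly mirror-update cron resource.
# The real names and paths live in openstack_project::mirror_update.
cron { 'uca-mirror-update':
  user        => 'root',
  minute      => '0',
  hour        => '*/2',
  # flock guards against overlapping runs if one update takes >2 hours
  command     => 'flock -n /var/run/uca-mirror.lock /usr/local/bin/uca-mirror-update.sh',
  environment => 'PATH=/usr/local/bin:/usr/bin:/bin',
}
```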

-- 
Jeremy Stanley



Re: [OpenStack-Infra] Add 3rd party CI to gerrit group

2016-07-21 Thread Jeremy Stanley
On 2016-07-21 15:58:55 +0200 (+0200), Artur Zarzycki wrote:
> we've in  fuel-ccp-* repos[1] release and core group. We also created
> fuel-ccp-ci[2] group for 3rd party CI  and in docs[4] about permissions for
> 3rd party CI is wrote "the release group for that program or project can add
> you to the -ci group specific to that program/project.", but we
> don't see permissions to add any account to fuel-ccp-ci group, should we ask
> you about add one of us to fuel-ccp-ci group(to add 3rd party CI[3]) or we
> can ask just about add  3rd party CI[3] to this group?

The -ci groups are handled a little differently since the goal is to
give your core reviewers control over which CI systems can leave
Verify votes on your project's changes. I have made fuel-ccp-core
the owner of the fuel-ccp-ci group, so anyone in fuel-ccp-core can
add and remove accounts from it without being a member of that
group themselves.

https://review.openstack.org/#/admin/groups/1487,info

-- 
Jeremy Stanley



Re: [OpenStack-Infra] Verify job failure for group-based-policy

2016-07-22 Thread Jeremy Stanley
On 2016-07-22 19:58:51 + (+), Thomas Bachman wrote:
> We are having trouble with our verify jobs for stable/liberty. The
> jobs are failing due to an exception in python-novaclient. We
> debugged this, and it appears to be using a different version of
> python-novaclient than the stable/liberty version. According to
> upper-constraints, it should be this:
[...]
> Is there something wrong with the environment? Is this something
> we control/affect in our GBP jenkins setup?

See the tox logs for details on what pip command was run and who
requested which versions of things...

http://logs.openstack.org/98/345598/1/check/gate-group-based-policy-python27/7511066/tox/py27-1.log.txt

In short, I don't see any "-c upper-constraints.txt" in the pip
command line, and indeed your stable/liberty tox.ini isn't set up to
apply a constraints file at all...

http://git.openstack.org/cgit/openstack/group-based-policy/tree/tox.ini?h=stable%2Fliberty

You probably want to try cherry-picking a backport of
https://review.openstack.org/298959 to that branch.
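
For illustration, the constraints wiring such a backport adds looks
roughly like the following tox.ini fragment (a sketch of the
upper-constraints pattern of that era; the exact content of 298959 may
differ):

```ini
# Sketch only: make every pip install that tox performs honor the
# stable/liberty upper-constraints file.
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty} {opts} {packages}
```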
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Work toward a translations checksite and call for help

2016-07-25 Thread Jeremy Stanley
On 2016-07-25 11:08:35 +0530 (+0530), Vipul Nayyar wrote:
> Honestly, I was also thinking that using containers for implementing
> blue/green deployment would be best for implementing minimal downtime. I
> suggest having a basic run-through of this idea with the community over
> tomorrow's irc meeting should be a good start.

Waving containers at the problem doesn't really solve the
fundamental issue at hand (we could just as easily use DNS or an
Apache redirect to switch between virtual machines, possibly more
easily since we already have existing mechanisms for deploying and
replacing virtual machines). The issue that needs addressing first,
I think, is how to get new DevStack deployments from master branch
tip of all projects to work consistently at each rebuild interval
or, more likely, to design a pattern that avoids replacing a working
deployment with a broken one along with some means to find out that
redeployment is failing so that it can effectively be troubleshot
post-mortem.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Work toward a translations checksite and call for help

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 16:08:49 +0200 (+0200), Ricardo Carrillo Cruz wrote:
[...]
> The set DNS task would check a file on the puppetmaster which contains the
> state of blue/green DNS records (translate-latest.openstack.org pointing to
> translate_a and translate-soon-to-be-deleted.openstack.org pointing to
> translate_b or viceversa) and would only run in case any of the preceding
> create_server tasks did anything.
[...]

Problem is we can't (okay, shouldn't) automate DNS changes while
we're relying on Rackspace's DNS service, since it's not using a
standard OpenStack API and we really don't want to write additional
tooling to it.

As mentioned in my earlier E-mail, a simple alternative is to just
update an HTTP 302 (temporary) redirect or a rewrite/proxy to the
"live" deployment in an Apache vhost on static.openstack.org or
perhaps update a persistent haproxy pool. Proxying rather than
redirecting probably makes the most sense as we can avoid presenting
IP-address-based URLs to the consumer (and if we're forced to deploy
with TLS then we might be able to stabilize a solution for that at
the proxy too).
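
A minimal sketch of the proxy approach in an Apache vhost (the
ServerName and backend address here are placeholders, not real infra
configuration):

```apache
<VirtualHost *:80>
    ServerName translate.openstack.org
    # Proxy to whichever deployment is currently "live"; flipping the
    # backend address performs the blue/green switch without ever
    # presenting an IP-address-based URL to the consumer.
    ProxyPass        "/" "http://203.0.113.10/"
    ProxyPassReverse "/" "http://203.0.113.10/"
    # The redirect alternative would instead be:
    #   Redirect temp "/" "http://203.0.113.10/"
</VirtualHost>
```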
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Work toward a translations checksite and call for help

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 16:46:07 +0200 (+0200), Ricardo Carrillo Cruz wrote:
> In my mind, I thought set_dns would be really an ansible wrapper to
> system-config launch/dns.py script.
[...]

There's a reason why that script only tells you what commands to
run, and doesn't run them for you. At least that way we can still
assert that we're not writing automation to communicate with
Rackspace's (proprietary, non-free, nonstandard, non-OpenStack) DNS
API if a sysadmin has to manually run commands to update records
through it. Then it's no worse on a philosophical level than using a
Web browser to make DNS changes through their similarly proprietary
dashboard site.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Work toward a translations checksite and call for help

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 19:26:07 +0200 (+0200), Frank Kloeker wrote:
> broken DevStack installation - that's the point. With LXD
> container you can take snapshot, run unstack or clean script,
> fetch new code and stack again. If it failed you can restore the
> snapshot and try new installation on another day. Without snapshot
> you can start new container with new code and shutdown the old
> one. So I like the idea with haproxy in front but wouldn't change
> any DNS entries because it takes time for end-users.

Sure, but those are also things we can do (and already do) with
virtual machines in many places, while we're not currently
maintaining any container-based services at all. I'm just saying
that "use containers" isn't a solution per se, and we should first
focus on the patterns we'll use to bootstrap DevStack, check that it
works and switch over, rather than jumping ahead to whether this
needs to be containerized to make that possible.

> If you have enough resources then we can work with 3 VMs: 2
> DevStack installations with the translation check-site and one with
> haproxy hosting the public FQDN and a kind of trigger to refresh
> the installation on the DevStack VM _if_ the other VM is up. If
> the other DevStack service is down, the trigger should try an
> unstack/clean/stack after one day and switch over if the service
> is up. This could be done with lb-update
> (https://www.haproxy.com/doc/aloha/7.0/haproxy/lbupdate.html) or
> haproxy API. The process should have a small monitoring about the
> status.

I'm hesitant to rely on unstack/clean/stack working consistently
over time, though maybe others have seen them behave more reliably
than I think they do. I had assumed we'd replace with fresh servers
each time and bootstrap DevStack from scratch, though perhaps that's
overkill?
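
For what it's worth, the haproxy arrangement sketched above could look
roughly like this (all backend names and addresses are purely
illustrative):

```haproxy
frontend checksite
    bind *:80
    default_backend devstack

backend devstack
    # Only one deployment serves traffic at a time; the refreshed one
    # sits as "backup" until a switch-over promotes it (by editing this
    # file or via the runtime API).
    option httpchk GET /
    server blue  203.0.113.11:80 check
    server green 203.0.113.12:80 check backup
```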
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Need membership in gerrit group

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 22:31:37 +0530 (+0530), Amit Saha wrote:
> We finally got our code in. However, we are seeing that even though the
> code is present in https://git.openstack.org/openstack/python-don, it has
> not yet been mirrored to https://github.com/openstack/python-don. Do we
> need to do anything to trigger the mirroring?

I've fixed it in the past few minutes. GitHub's API is unreliable,
and from time to time (too frequently if you ask me) our automation
fails to grant the Gerrit server permission to push into the newly
created repo in GitHub so requires manual intervention to correct.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Pholio Spec 340641

2016-08-02 Thread Jeremy Stanley
On 2016-08-01 11:51:23 +1000 (+1000), Craige McWhirter wrote:
> I now consider this change to be no longer a Work in Progress.
[...]

Excellent. To that end, in today's weekly Infra team meeting I
proposed your https://review.openstack.org/340641 for council vote
until 19:00 UTC on Thursday (August 4). Unless there are serious
objections by then, I'll go ahead and approve it.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Missing gerrit->launchpad link for networking-calico?

2016-08-08 Thread Jeremy Stanley
On 2016-08-08 16:32:48 + (+), Neil Jerram wrote:
> For networking-calico changes with a Closes-Bug in the commit message,
> we're not getting 'Fix proposed' comments in Launchpad when someone posts a
> patch to review.openstack.org.  But we do get 'Fix released' comments when
> a patch is eventually merged.
> 
> For example: https://bugs.launchpad.net/networking-calico/+bug/1602313
> 
> Does that indicate a missing link somewhere?

Yes, LP lists your "bug supervisor" at
https://bugs.launchpad.net/networking-calico as "Neil Jerram
(neil-jerram)" but you need that to be an LP group instead with
the "OpenStack Infra (hudson-openstack)" as a group member in
addition to whoever else you want to be working on bug triage.

    http://docs.openstack.org/infra/manual/creators.html#create-bug-tracker

-- 
Jeremy Stanley



Re: [OpenStack-Infra] Manually moving irc archives from #kolla to #openstack-kolla

2016-08-10 Thread Jeremy Stanley
On 2016-08-10 13:42:45 -0400 (-0400), Paul Belanger wrote:
> On Tue, Aug 09, 2016 at 08:25:22PM +, Steven Dake (stdake) wrote:
> > Several months ago Kolla changed its irc channel from #kolla to
> > #openstack-kolla.  We log our irc via eavesdrop.  Is it possible
> > for anyone with infra root access to manually move the irc
> > channel logs from #kolla to #openstack-kolla?  If there is a
> > little data lost from the 1 day overlap change, that is ok.
> > 
> > If its not possible, I understand.
> > 
> I'm not sure we've done this before.  We _can_ move the log files
> to the new directory, however I think we should leave them for
> historical purposes.

In particular, people may have linked to the URL of some old log
entries, and we probably don't want to indefinitely maintain a
growing list of redirects if teams start asking to shuffle their
logfiles around for convenience.

As counterpoint, we switched our meeting name from "ci" to "infra"
at the beginning of 2013 and never moved our own meeting logs to the
new directory on eavesdrop.o.o. Instead we just have a note at the
bottom of our meeting agenda pointing people to the old URL of our
earlier meetings from before we renamed.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] MeetBot taking unauthorised vacation

2016-08-17 Thread Jeremy Stanley
On 2016-08-17 11:57:01 -0400 (-0400), Anita Kuno wrote:
> The last change to the file that contains the list of channels for
> meetbot merged yesterday at 15:23 GMT
[...]

According to its logs (times in UTC):

WARNING 2016-08-17T10:00:22 supybot Ping sent at
2016-08-17T09:58:22 not replied to.
WARNING 2016-08-17T10:00:22 supybot Error message from FreeNode:
Ping sent at 2016-08-17T09:58:22 not replied to.
INFO 2016-08-17T10:00:22 supybot Reconnecting to FreeNode.

...and this continued until:

INFO 2016-08-17T10:15:19 supybot Reconnecting to FreeNode.
WARNING 2016-08-17T10:15:19 supybot Disconnect from
chat.freenode.net:7000: Connection to the other side was
lost in a non-clean fashion: Connection lost.
INFO 2016-08-17T10:15:22 supybot Connecting to
chat.freenode.net:7000.
INFO 2016-08-17T10:15:32 supybot Server orwell.freenode.net has
version ircd-seven-1.1.3
INFO 2016-08-17T10:15:32 supybot Got end of MOTD from
orwell.freenode.net
INFO 2016-08-17T10:15:32 supybot Sending identify (current nick:
openstack)
INFO 2016-08-17T10:15:38 supybot Received "Password accepted"
from NickServ on FreeNode.

...after which it began joining channels successfully again. So this
was a random problem in Freenode, or in our service provider, or
with the Internet at large. Definitely not driven by anyone
approving a configuration change while meetings were underway.
-- 
Jeremy Stanley



[OpenStack-Infra] Design summit session planning

2016-08-24 Thread Jeremy Stanley
As discussed in yesterday's team meeting[1], it's time to start
thinking ahead to our Ocata design summit sessions. I've created a
planning pad[2] with some information on the format and constraints.

Consensus seems to be that we tend to get more out of workrooms than
fishbowls, so if we're going to request fewer of one than we had at
the Newton summit I think it should be fishbowls. To that end, I've
put us down for one fishbowl and five workrooms this time, but could
be convinced to shift that balance to two fishbowls and four
workrooms if the early ideas list favor fishbowl format more than
they have in previous cycles (we still have a week to adjust our
official allocation request).

[1] 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-08-23-19.01.html
[2] https://etherpad.openstack.org/p/infra-ocata-summit-planning
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Pholio Spec 340641

2016-08-26 Thread Jeremy Stanley
On 2016-08-26 00:16:12 -0300 (-0300), Sebastian Marcet wrote:
> ok Craige, once i got approval for this
> https://review.openstack.org/360862

Which merged a couple hours later at 05:12 UTC, so should presumably
be working for the past ~10 hours.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] docs.openstack.com excessive INODE consumption

2016-08-28 Thread Jeremy Stanley
On 2016-08-27 14:59:32 + (+), Tyler Coil wrote:
> I wanted to bring to your attention a ticket recently created for
> docs.openstack.com and in regards to the large amount of files its
> consuming in the Cloud Sites environment. In the ticket
> 160812-dfw-0001549 there are more details in regards to the issue.
[...]

Thanks! I've been meaning to follow up on that ticket... in summary
we're currently finalizing a plan to relocate our content out of
Cloud Sites entirely: https://review.openstack.org/276482

It will likely be on the order of a couple months before we have the
content moved. The ticket didn't indicate what sort of time
constraints you might be under for a resolution.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [app-catalog] Glare support in apps.openstack.org

2016-09-08 Thread Jeremy Stanley
On 2016-09-02 15:22:43 +0200 (+0200), Bartosz Kupidura wrote:
[...]
> our current plan is:
> 
> 1) create 'glare-support' branch in openstack/app-catalog
> 2) create 'glare-support' branch in openstack-infra/system-config
> 3) create 'glare-support' branch in openstack-infra/apps_site
> 4) move changes introduced by SSkrypnik in 
> https://github.com/redixin/app-catalog/tree/dev to openstack/app-catalog 
> gerrit
> 5) create 'openstack/puppet-glare' repo
> 6) create puppet automation for glare in openstack/puppet-glare
> 7) deploy stagging.apps.openstack.org from 'glare-support' branches
> 8) switch apps.openstack.org to stagging.apps.openstack.org
> 9) merge 'glare-support' branch to master for openstack/app-catalog
> 10) merge 'glare-support' branch to master for
> openstack-infra/puppet-apps_site
> 11) merge 'glare-support' branch to master for openstack-infra/system-config
> (in this step jenkins should put +1)
> 12) remove old apps.openstack.org
[...]

This looks basically like what we discussed in IRC last week. It
seems like a fine plan to me, and since nobody else has objected I
don't see any reason for you to further delay implementation.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [app-catalog] Glare support in apps.openstack.org

2016-09-10 Thread Jeremy Stanley
On 2016-09-02 15:22:43 +0200 (+0200), Bartosz Kupidura wrote:
[...]
> 1) create 'glare-support' branch in openstack/app-catalog

I've branched feature/glare-support from the current master state
in openstack/app-catalog.

> 2) create 'glare-support' branch in openstack-infra/system-config
> 3) create 'glare-support' branch in openstack-infra/apps_site
[...]

I missed this was in the revised plan. We can't branch the
system-config and puppet-apps_site repos for this, nor should we.
Rather, you need to introduce a minimal amount of additional
configuration management in these to be able to handle the
production and staging sites with the ability to specify which Git
ref you want used from app-catalog on each.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Design summit session planning

2016-09-26 Thread Jeremy Stanley
Hopefully everyone who went to the sprint is back (you'll have to
tell the rest of us how it went!) and mostly recovered. More to the
point, here's hoping you came back with exciting ideas for what we
should be talking about in Barcelona **NEXT MONTH**.

I've updated our planning pad[0] to reflect Thierry's awesome
scheduling work[1]. I've also put the planning topic back on the
meeting agenda[2] for tomorrow (which is also known as today for
those of you in some parts of the World where I don't live). Let's
have some brief brainstorming if there's time, and of course keep
padding the pad.

[0] https://etherpad.openstack.org/p/infra-ocata-summit-planning
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103851.html
[2] 
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [openstack-dev] [App-Catalog][Glare][Infra] Vm for app-catalog

2016-10-04 Thread Jeremy Stanley
On 2016-10-04 19:08:25 -0700 (-0700), Christopher Aedo wrote:
[...]
> From the last conversation we had around this[1], we would be at step
> 7 "deploy stagging.apps.openstack.org from 'glare-support' branches".
> There was one point fungi brought up[2], which was that we would not
> be creating special branches for this work.  Rather, the puppet
> manifest[3] would be adjusted to accept git commit IDs where un-merged
> code was called for.
[...]

By "un-merged" here I assume you mean commits merged to the
feature/glare-support branch of the openstack/app-catalog repo even
though that branch has not yet been merged back into the master
branch.

The openstack-infra/puppet-apps_site repo looks like it already
supports the logic you need: it has a $commit parameter which takes
an arbitrary Git reference, and defaults to 'master' so that your
apps.openstack.org server is continuously deployed with the master
branch tip of openstack/app-catalog. That was the relatively
complicated logic and it looks like it was implemented originally,
so you've actually already done the hard part there.

In the openstack-infra/system-config repo you need to update the
global site manifest (in manifests/site.pp) adding a separate dev
server (probably named more like apps-dev.openstack.org for
consistency with most of our other dev servers) and pass in
commit=>'feature/glare-support' telling it to deploy from the tip of
the openstack/app-catalog repo's feature/glare-support branch.
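
Sketched out (the class and parameter names here are illustrative
assumptions; the real entry should mirror the existing
apps.openstack.org definition in manifests/site.pp), that addition
might look like:

```puppet
# Hypothetical: dev node pinned to the feature branch tip, while
# production keeps the default commit => 'master' and so remains
# continuously deployed from the master branch.
node 'apps-dev.openstack.org' {
  class { 'openstack_project::apps_site':
    commit => 'feature/glare-support',
  }
}
```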

After that's done, an infra-root sysadmin needs to manually launch a
server named apps-dev.openstack.org and help troubleshoot any
configuration management errors which might arise from incomplete
automated testing.

Once you're satisfied with the state of feature/glare-support as
evidenced from using the dev server, merge that branch back into
master and the continuous automation you already have will make it
live on the production apps.openstack.org server.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Design summit session planning

2016-10-08 Thread Jeremy Stanley
It's approaching time to finalize our session schedule. Just to
recap, we have five workrooms, one fishbowl and a half-day sprint
alloted. There are six good ideas on the planning pad[*] now, so
unless anyone thinks those are things we _shouldn't_ cover in one of
our six session slots or comes up with better alternative topics
_very_ quickly, it's just a matter of picking which times we want
for each topic and which one gets the fishbowl:

  1. status update and plans for task tracking
  2. document/reset test environment expectations
  3. discuss next steps for infra-cloud
  4. interactive infra-cloud debugging
  5. plan or work on further expansion of firehose
  6. finish the Xenial jobs transition for stable/newton

My suggestion is to put task tracking (1) in the Thursday afternoon
fishbowl as it has broad cross-project implications and presumably a
wider audience. Of the five remaining topics, two are about
infra-cloud (3,4) and two are about test environments/jobs (2,6) so
it might make the most sense to pair those up back-to-back in our
Friday morning workrooms. That leaves firehose (5) as the straggler
for the Wednesday afternoon workroom. Any other ideas?

[*] https://etherpad.openstack.org/p/infra-ocata-summit-planning
-- 
Jeremy Stanley



[OpenStack-Infra] draft logo for Infra

2016-10-21 Thread Jeremy Stanley
Courtesy of some artist(s?) contracted by the OpenStack Foundation,
attached find an iconic/logo draft rendition of the mascot ranked
highest by our aggregate contributor base: an ant.

If you have feedback, inject it at http://tinyurl.com/OSmascot
before November 11 (after which point they'll be working on
finalizing these). Sounds like they may put them on stickers or
other stuff in time for the PTG in Atlanta.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Advice needed on Kolla's repository split

2016-11-08 Thread Jeremy Stanley
On 2016-11-08 10:27:25 -0800 (-0800), Clark Boylan wrote:
[...]
> If you set the upstream value all branches and tags from that upstream
> will be duplicated in Gerrit as part of the initial project setup. This
> happens before we update zuul's config so shouldn't trigger any release
> jobs or anything like that (though I suppose if it somehow managed to
> run out of order that could be a problem).
[...]

We've had cases where manage-projects took longer to complete than
the zuul config updates, so when the content was pushed into Gerrit
it fired tag events which zuul saw as triggers for release jobs. In
most cases those jobs _will_ fail, but I'm worried about the corner
case where they don't and then we have a mess to clean up.
Not adding release jobs in the same change which instructs import of
another repo already within our infrastructure (such that release
artifacts of the donor repo might get overwritten) seems like a
sane compromise.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2016-11-09 Thread Jeremy Stanley
On 2016-11-08 19:12:32 +0800 (+0800), Tom Fifield wrote:
[...]
> Upstream apparently revamped the spam system in the version marked
> to upgrade to in:
> https://review.openstack.org/#/c/274032/

I've gone ahead and approved this just now... I was unaware it was
out there waiting for approval. Sorry about that! I'll try to
double-check that ask.o.o is still working correctly once it gets
applied.

> However, in order to make sure we're not losing up to 60% (akismet stat for
> October) of our potential legitimate posts while we wait for that, it would
> be great if there were some logs to try and find out what's going on.
> 
> Anyone able to dig and send me something?

I'm happy to. The Askbot application logs seem to contain nothing of
relevance, so I'm assuming you want the Apache logs in this case.
What timeframe are you interested in? Our retention right now is
on the order of several gigabytes compressed, so I don't want to
inundate you with the entirety if a subset will suffice.

> (If you got curious and want something to grep for, try
> 2001:638:70e:11:2ad2:44ff:*:* || 136.172.17.* )

I see a bunch of current hits for that v6 prefix, though I'm not
finding any for the v4 one.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2016-11-09 Thread Jeremy Stanley
On 2016-11-09 18:11:39 +0000 (+0000), Jeremy Stanley wrote:
> On 2016-11-08 19:12:32 +0800 (+0800), Tom Fifield wrote:
> [...]
> > Upstream apparently revamped the spam system in the version marked
> > to upgrade to in:
> > https://review.openstack.org/#/c/274032/
> 
> I've gone ahead and approved this just now... I was unaware it was
> out there waiting for approval. Sorry about that! I'll try to
> double-check that ask.o.o is still working correctly once it gets
> applied.
[...]

Just to follow up, it looks like the git_resource for
/srv/dist/askbot did not get updated to the newly specified commit
in 274032 so we're digging into why that is. The site still seems to
be up and working for now.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2016-11-09 Thread Jeremy Stanley
On 2016-11-09 19:53:27 +0000 (+0000), Jeremy Stanley wrote:
> On 2016-11-09 18:11:39 +0000 (+0000), Jeremy Stanley wrote:
> > On 2016-11-08 19:12:32 +0800 (+0800), Tom Fifield wrote:
> > [...]
> > > Upstream apparently revamped the spam system in the version marked
> > > to upgrade to in:
> > > https://review.openstack.org/#/c/274032/
> > 
> > I've gone ahead and approved this just now... I was unaware it was
> > out there waiting for approval. Sorry about that! I'll try to
> > double-check that ask.o.o is still working correctly once it gets
> > applied.
> [...]
> 
> Just to follow up, it looks like the git_resource for
> /srv/dist/askbot did not get updated to the newly specified commit
> in 274032 so we're digging into why that is. The site still seems to
> be up and working for now.

After a while, the site began to throw internal server errors, so
I'm reverting with https://review.openstack.org/395797 for now until
we can more thoroughly troubleshoot.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2016-11-10 Thread Jeremy Stanley
On 2016-11-10 09:19:32 +0000 (+0000), Marton Kiss wrote:
> Jeremy, I can apply and test this patch in a current test environment, it
> was sitting there for a while. Usually the config changes of askbot broke
> the site.
[...]

If you get a chance, that would be a big help. I have logs from the
failed upgrade, but the gist is that the git resource provider
didn't update /srv/dist/askbot (Puppet's log never even mentions it
trying to do so) and then the migrate command threw:

AttributeError: 'Settings' object has no attribute 'ASKBOT_MULTILINGUAL'

Which their upgrade FAQ says is an indication that the urls.py
template needs to be updated (and that makes sense given that the
git repo never moved to the newer commit we specified). I mulled
over possibilities with others in #openstack-infra, and Spencer
suggested that latest=>true may be causing calls into the provider
to short-circuit since it always returns true if a commit or tag is
passed. The next round, I was going to try dropping that from the
commit and tag cases in puppet-askbot and seeing if it helps.
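
For illustration, a pinned repository resource that avoids latest
entirely would look something like the following (a hedged sketch
assuming the puppetlabs-vcsrepo module's git provider rather than
whatever resource type puppet-askbot actually uses; the source URL
and revision are placeholders):

```puppet
vcsrepo { '/srv/dist/askbot':
  ensure   => present,
  provider => git,
  source   => 'https://github.com/ASKBOT/askbot-devel.git',
  revision => 'abc123',  # placeholder: the commit SHA to pin
}
```

With ensure => present and an explicit revision, the provider is
expected to converge the working tree to the pinned commit on each
run instead of short-circuiting on a "latest" check.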
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [openstack-dev][infra][neutron] Intel NFV CI voting permission in Neutron

2016-11-14 Thread Jeremy Stanley
On 2016-11-14 10:44:42 +0000 (+0000), Znoinski, Waldemar wrote:
> I would like to acquire voting (+/-1 Verified) permission for our
> Intel NFV CI.
[...]

The requested permission is configured by addition to
https://review.openstack.org/#/admin/groups/neutron-ci which is
controlled by the members of the
https://review.openstack.org/#/admin/groups/neutron-release group.
The Infra team tries not to be involved in these decisions and
instead prefers to leave them up to the project team(s) involved.

> This e-mail and any attachments may contain confidential material
> for the sole use of the intended recipient(s). Any review or
> distribution by others is strictly prohibited.
[...]

This seems wholly inappropriate for a public mailing list. I
strongly recommend not sending messages to our mailing lists in
which you strictly prohibit review or distribution by others, as it
is guaranteed to happen and we cannot prevent that (nor would we
want to).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] About setting up CI

2016-11-18 Thread Jeremy Stanley
On 2016-11-18 05:23:04 -0600 (-0600), Mikhail Medvedev wrote:
> There is also puppet-openstackci module that aims to provide most of
> configuration necessary to setup an OpenStack CI. See the
> documentation at
> https://github.com/openstack-infra/puppet-openstackci/blob/master/doc/source/third_party_ci.rst

The rendered version is continuously published at:
http://docs.openstack.org/infra/openstackci/
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] ask.openstack.org full disk

2016-11-22 Thread Jeremy Stanley
On 2016-11-21 17:22:29 +1100 (+1100), Ian Wienand wrote:
[...]
> After a little poking the safest way to clear some space seemed to
> be the apt cache which gave some breathing room.
[...]
> Disk is still tight on this host.  Someone who knows a little more
> about the service might like to go clear out anything else that is
> unnecessary.

Thanks! I removed a few old manual backups from some of our homedirs
(mostly mine!) freeing up a few more GB on the rootfs. The biggest
offender though seems to be /var/log/jetty which has about a week of
retention. Whatever's rotating these daily at midnight UTC (doesn't
seem to be logrotate doing it) isn't compressing them, so they're up
to nearly 13GB now (which is a lot on a 40GB rootfs).
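
For what it's worth, if we end up pointing logrotate at that
directory, a minimal stanza along these lines would both compress and
cap retention (a sketch only; the path glob and retention count are
assumptions, not existing configuration):

```
/var/log/jetty/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```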
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Gerrit GUI Follow-up button

2016-11-28 Thread Jeremy Stanley
On 2016-11-26 18:34:55 +0000 (+0000), Henry Fourie wrote:
> There is a description on usage of the Gerrit Web UI Follow-up
> button.
[...]
> Is there a link to this from current openstack review docs, or
> is there other documentation?
[...]

https://review.openstack.org/Documentation/user-inline-edit.html

While logged in I can see the Follow-Up button (right after Cherry
Pick, Rebase and Abandon buttons on an open change) and the Create
Change button (on the General page for a project), but these may be
controlled by an ACL and so only visible for me as a member of the
Administrators group. I take it they're not visible for you?
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Gerrit GUI Follow-up button

2016-11-28 Thread Jeremy Stanley
On 2016-11-28 17:12:34 +0000 (+0000), Henry Fourie wrote:
> I can see the Follow-up button. My question is about its usage.
> Is there openstack documentation to explain its usage?
[...]

I'm going to assume that the documentation I linked to in my
previous message is not what you're looking for. What kind of
documentation do you have in mind? We don't make Gerrit, so we rely
on the documentation written by the people who do. What is it
missing that you want it to cover?
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] FW: [infra] some issue with the osic environment? (as opposed to rax)

2016-11-29 Thread Jeremy Stanley
On 2016-11-29 14:28:32 +0000 (+0000), Amrith Kumar wrote:
> Re-sending to openstack-infra mailing list
[...]
> We've been trying to get all of the trove CI tests to work with neutron and
> have made a lot of progress but are now hampered by something that appears
> to be out of our control.
[...]

Resending my reply to the ML as well, though we already hashed
through this in IRC:

Just to follow up, this is still https://launchpad.net/bugs/1629133
where Trove needs subnet pool support in DevStack to be able to use
Neutron but the subnet pool routing conflicts with local routes in
some service providers (namely OSIC). Kevin Benton addressed this
with change https://review.openstack.org/398012 to DevStack adding
an optional setting which will be consumed by Monty's change
https://review.openstack.org/398611 to devstack-gate (hopefully to
merge within the next few hours).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [infra][StoryBoard] Meeting Time Rearrangment

2016-12-02 Thread Jeremy Stanley
On 2016-11-15 21:20:20 +0000 (+0000), Adam Coldrick wrote:
> Last week at the StoryBoard meeting we discussed the fact that the
> meeting time has become inconvenient for most of the people who attend.
> As a result, we took the decision to move the meeting slot to
> 
> 1900 UTC on Wednesdays in #openstack-meeting
[...]

Just following up, I approved this out of the moderation backlog but
suspect it's a duplicate (at any rate that's why it's dated from a
few weeks ago).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] pypi volume downtime

2016-12-05 Thread Jeremy Stanley
On 2016-12-05 10:10:22 -0600 (-0600), Kevin L. Mitchell wrote:
> On Mon, 2016-12-05 at 15:30 +1100, Ian Wienand wrote:
> > As for the root cause, I don't see anything else particularly
> > insightful in the logs.  The salvage server logs, implicated above,
> > end in February which isn't very helpful
> > 
> > --- SalsrvLog.old ---
> >  12/02/2016 04:19:59 SALVAGING VOLUME 536870931.
> >  12/02/2016 04:19:59 mirror.pypi (536870931) updated 12/02/2016 04:15
> >  12/02/2016 04:20:02 totalInodes 1931509
> >  12/02/2016 04:53:31 Salvaged mirror.pypi (536870931): 1931502 files,
> > 442808916 blocks
> 
> For the record, those log entries are from December 2nd, rather than
> February: US date conventions.

Indeed, I wonder if OpenAFS has options to change that to something
closer to ISO-8601 date/time field ordering. I've briefly
searched/skimmed and am not spotting any if so. Unfortunate.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] [Infra] Ocata Summit Infra Sessions Recap

2016-12-06 Thread Jeremy Stanley
andle on a daily basis but our developers may
not have directly experienced.

As well-intentioned as it was, the session suffered from several
issues. First and foremost we didn't realize the Friday morning
workroom we got was going to lack a projector (only so many people
can gather around one laptop, and if it's mine then fewer still!).
Trying to get people from lots of different projects to show up for
the same slot on a day that isn't for cross-project sessions is
pretty intractable. And then there's the fact that we were all
approaching burnout as it was the last day of the week and coffee
was all the way at the opposite end of the design summit space. :/

Instead the time was spent partly continuing the "future of
infra-cloud" discussion, and partly just talking about random things
like troubleshooting CI jobs (some people misunderstood the session
description and thought that's what we had planned) or general Infra
team wishlist items. Not a complete waste, but some lessons learned
if we ever want to try this idea again at a future summit.


Test environment expectations
-----------------------------

https://etherpad.openstack.org/p/ocata-infra-test-env-expectations

After the morning break we managed to perk back up again and discuss
test platform expectations. This was a remarkably productive
brainstorming session where we assembled a rough list of
expectations developers can and, more importantly, can't make about
the systems on which our CI jobs run. The culmination of these
musings can since be found in a shiny new page of the Infra Manual:

http://docs.openstack.org/infra/manual/testing.html


Xenial jobs transition for stable/newton
----------------------------------------

https://etherpad.openstack.org/p/ocata-infra-xenial-stable-newton

Another constructive session right on the heels of the last...
planning the last mile of the cut-over from Ubuntu 14.04 to 16.04
testing. We confirmed that we would switch all jobs for
stable/newton as well as master (since the implementation started
early in the Newton cycle and we need to be consistent across
projects in a stable branch). We decided to set a date (which
incidentally is TODAY) to finalize the transition. The plan was
announced to the dev ML a month ago:

http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html

The (numerous) changes in flight today to switch the lingering jobs
are covered under a common review topic:

https://review.openstack.org/#/q/topic:st-nicholas-xenial


Unconference afternoon
----------------------

https://etherpad.openstack.org/p/ocata-infra-contributors-meetup

At this stage things were starting to wind up and a lot of people
with early departures had already bowed out. Those of us who
remained were treated to our own room for the first time in many
summits (no offense to the Release and QA teams, but it was nice to
not need to share for a change). Since we were a little more at
liberty to set our own pace this time we treated it as a sort of
home base from which many of us set forth to pitch in on
Infra-related planning discussions in other teams' spaces, then
regroup and disseminate what we'd done (from translation platform
upgrades to release automation designs).

We also got in some good one-on-one time to work through topics
which weren't covered in scheduled sessions, such as Zuul v3 spec
additions or changes to the pep8 jobs to guard against missing sdist
build dependencies. As the afternoon progressed and the crowd
dwindled further we said our goodbyes and split up into smaller
groups to go out for one last meal, commiserate with those who found
themselves newly in search of employment and generally celebrate a
successful week in Barcelona.


That concludes my recollection of these sessions over the course of
the week--thanks for reading this far--feel free to follow up (on
the openstack-dev ML please) with any corrections/additions. Many
thanks to all who attended, and to those who could not: we missed
you. I hope to see lots of you again at the PTG in Atlanta, only a
couple months away now. Don't forget to register and book your
flights/lodging!
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Nodepool config file structure

2016-12-20 Thread Jeremy Stanley
On 2016-12-20 10:19:26 -0800 (-0800), James E. Blair wrote:
[...]
> I think we should change the provider images section to separate out the
> parts pertaining to diskimages and those pertaining to flavors.
> Something like:
[...]
>   providers:
> - name: cloud
>   diskimages:
> - name: ubuntu-trusty
>   metadata:
> foo: bar
>   labels:
> - name: small-ubuntu-trusty
>   ram: 2g
> - name: large-ubuntu-trusty
>   ram: 8g
[...]
> Does this sound like a reasonable path forward?

I have a strong preference for the proposed model, as it still
allows for differing label-to-flavor mapping per provider while
simplifying the relationship between providers, images and flavors
compared to the old nodepool~=0.3 implementation.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [infra][docs] Steps to migrate docs.o.o to new AFS based server

2017-01-02 Thread Jeremy Stanley
On 2016-12-28 19:09:04 +0100 (+0100), Andreas Jaeger wrote:
[...]
> 3) Create docs-archived.openstack.org pointing to the old CloudDocs
>docs site so that we can still access any content that wasn't
>published recently. Update 404 handler to point to docs-archived.
[...]

I don't think this is technically possible with CloudSites, as each
domain name seems (last time I tried when api.o.o got renamed to
developer.o.o) to require creating a separate site with its own
content. I tried to check now to see whether they have a rename
option at least, but the old credentials I have on record for the
CloudSites admin portal seem to no longer work and instead claim we
aren't signed up for the service (maybe I'm trying the wrong account
and some other Infra sysadmin has the correct credentials on hand?).

What we _can_ do is just not delete it immediately, and then you can
temporarily override /etc/hosts locally to associate docs.o.o with
98.129.229.216 if you need to access it. Also keep in mind though
that Rackspace is eager to see us delete the content for those sites
because they're extremely large (at least by their standards), so
the sooner we can remove them entirely the better.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2017-01-02 Thread Jeremy Stanley
On 2016-12-30 12:38:02 +0800 (+0800), Tom Fifield wrote:
> This problem (legitimate users having all posts flatly rejected as
> spam) is still happening. Any progress?

We're basically ready to retry the previous upgrade now that some
issues have been identified/corrected by Marton. Change
https://review.openstack.org/408657 to trigger it is waiting on
https://review.openstack.org/416072 to grant him a shell account on
the production server for improved reactivity if it fails again, so
that he can collaborate with us more directly troubleshooting before
we have to make the call to fix or revert.

> I've been doing what I can without access to the server, but my
> latest attempt - completely deleting an account so it could be
> re-created by the affected user - was thwarted by a 500 error. Did
> that appear in the logs?

Given that I don't know the exact time you tried nor your IP
address, and am unsure what that failure would look like in the logs
aside from its 500 error code, this Apache access log entry 18
minutes before your E-mail's timestamp stands out:

[30/Dec/2016:04:20:40 +0000] "GET /admin/askbot/post/25842/
HTTP/1.1" 500 917
"https://ask.openstack.org/admin/auth/user/2253/delete/";
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101
Firefox/50.0"

I don't see anything related around that timeframe in the Apache
error log files, the Askbot application log, the Solr Jetty log...
though there's this in dmesg which is suspiciously close (timestamps
in dmesg aren't entirely reliable, so this could have happened 5
minutes earlier):

[Fri Dec 30 04:25:25 2016] apache2 invoked oom-killer:
gfp_mask=0x200da, order=0, oom_score_adj=0

And indeed, there's a spike in swap utilization around that time
which, given the five-minute granularity could certainly support the
notion that a runaway process ate all available virtual memory on
the system:

http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=2546&rra_id=2&view_type=&graph_start=1483029900&graph_end=1483156284&graph_height=120&graph_width=500&title_font_size=10

The RAM usage graph suggests we were doing okay with a target
utilization of ~50% so something eating an additional 6GiB of memory
in a matter of a few minutes would definitely count as anomalous:

http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=2544&rra_id=2&view_type=&graph_start=1483029900&graph_end=1483156284&graph_height=120&graph_width=500&title_font_size=10

Was the memory spike caused by that deletion? Or was the deletion
error coincidental and caused by a memory event which just happened
to be going on at the same time? I have insufficient knowledge of
the system to be able to say either way. It's possible there are
additional logs I don't know to look at which could tell us.
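
As an aside, the kind of scan done above by hand is easy to script; a
small sketch for pulling 5xx responses out of a combined-format
Apache access log within a given time window (the sample lines and
client address here are made up for illustration, not taken from the
real log):

```python
import re
from datetime import datetime, timezone

# Combined log format: client, identd, user, [timestamp], "request",
# status, size, "referer", "user-agent"
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3}) '
)

def server_errors(lines, start, end):
    """Yield (timestamp, client, request) for 5xx responses in [start, end]."""
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
        if start <= ts <= end and m.group("status").startswith("5"):
            yield ts, m.group("ip"), m.group("req")

# Hypothetical sample entries standing in for the real access log:
sample = [
    '1.2.3.4 - - [03/Jan/2017:02:51:47 +0000] "GET /admin/auth/user/2253/delete/ HTTP/1.1" 200 13733',
    '1.2.3.4 - - [03/Jan/2017:02:52:02 +0000] "POST /admin/auth/user/2253/delete/ HTTP/1.1" 500 20828',
]
window = (datetime(2017, 1, 3, 2, 45, tzinfo=timezone.utc),
          datetime(2017, 1, 3, 3, 0, tzinfo=timezone.utc))
hits = list(server_errors(sample, *window))
```

Running that over the relevant rotated log files would surface every
server error in the window along with the client and request line.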
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [OpenStack-docs] [infra][docs] Steps to migrate docs.o.o to new AFS based server

2017-01-03 Thread Jeremy Stanley
On 2017-01-03 08:01:42 +0100 (+0100), Andreas Jaeger wrote:
[...]
> 3) Create docs-archived content so that we can still access any
>content that wasn't published recently:
>a) Resync the mirror at http://files.openstack.org/docs-old/
>b) Serve the content via docs-archived.openstack.org (see
>   https://review.openstack.org/416148 )
>c) Add external IP address
>d) Update 404 handler to point to docs-archived.
[...]

I'm a little unclear on what step "c" is supposed to be there... all
our servers already have "external" (globally routable) IP
addresses.

As for step "d" that could presumably get confusing if someone
requests a page that doesn't exist at either site, since they'll get
an error about a missing page at docs-archive.o.o instead of
docs.o.o (maybe that's fine, I don't have much insight into reader
expectations). Is the longer-term plan to review analytics for page
hits against docs-archive to figure out what else should be copied
over to docs.o.o manually, or incorporated/reintroduced into
documentation builds?
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2017-01-03 Thread Jeremy Stanley
On 2017-01-03 10:54:48 +0800 (+0800), Tom Fifield wrote:
[...]
> If you check for a POST after a GET of:
> 
> /admin/auth/user/2253/delete/
> 
> Around
> 
> 2017-02-03 02:52
> 
> from
> 
> 1.169.254.207
> 
> that should be it.

Assuming you mean 2017-01-03 then yes, I see one from that IP
address around the aforementioned time:

1.169.254.207 - - [03/Jan/2017:02:51:47 +0000] "GET
/admin/auth/user/2253/delete/ HTTP/1.1" 200 13733
"https://ask.openstack.org/admin/auth/user/2253/"; "Mozilla/5.0
(X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101
Firefox/50.0"

1.169.254.207 - - [03/Jan/2017:02:52:02 +0000] "POST
/admin/auth/user/2253/delete/ HTTP/1.1" 500 20828
"https://ask.openstack.org/admin/auth/user/2253/delete/";
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101
Firefox/50.0"

> I believe the OOM error comes from trying to view any post in the django
> admin interface (/admin/askbot/post/%d/ ) -- those things essentially never
> stop loading for some reason.

Makes sense. The 500 above does not coincide with any OOM event (nor
can I find anything to correlate it to in syslog, Apache error logs,
Askbot application logs, Solr/Jetty request or stderrout logs, et
cetera).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Logs for Ask.O.o - chasing false positive spam labeling

2017-01-05 Thread Jeremy Stanley
On 2017-01-02 23:03:07 +0000 (+0000), Jeremy Stanley wrote:
[...]
> We're basically ready to retry the previous upgrade now that some
> issues have been identified/corrected by Marton. Change
> https://review.openstack.org/408657 to trigger it is waiting on
> https://review.openstack.org/416072 to grant him a shell account on
> the production server for improved reactivity if it fails again, so
> that he can collaborate with us more directly troubleshooting before
> we have to make the call to fix or revert.
[...]

With much credit to Marton's efforts, we upgraded Askbot on
ask.openstack.org yesterday to a much more recent version (and it's
still up now a day later with no complaints AFAIK). It's worth
rechecking for further issues similar to what you experienced
previously, to see whether they're resolved now.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Ask.o.o down

2017-01-13 Thread Jeremy Stanley
On 2017-01-13 10:33:24 +0000 (+0000), Marton Kiss wrote:
> You can find more details about the host here:
> http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=156
> It had a network outage somewhere, if you check the eth0, the
> traffic was zero.

Unfortunately I find no corresponding outage details listed at
https://status.rackspace.com/ nor any support tickets for the tenant
providing the instance for that service. The timeframe is
suspiciously right around when daily cron jobs would be running
(they start at 06:25 UTC) but I don't see anything in the system
logs that would indicate we ran anything that would paralyze the
system like that.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-13 Thread Jeremy Stanley
On 2017-01-13 13:31:54 -0800 (-0800), Clark Boylan wrote:
> On Thu, Jan 12, 2017, at 02:36 PM, Ian Y. Choi wrote:
[...]
> > - Can I have root access to translate-dev and translate server?
> 
> This is something that can be discussed with the infra team, typically
> we would give access to someone assisting with implementation to the
> -dev server while keeping the production server as infra-root only. I
> will make sure fungi sees this.

Echoing Clark, as I really have nothing more to add... Ideally the
dev server should be identical enough to production under normal
circumstances that root access to dev is sufficient to test theories
and confirm issues. If you need logs or other similar artifacts from
the production instance, we have a dozen root admins scattered
around the globe who should be available to get those for you on
demand. If this arrangement is still inconvenient, then we can work
to improve dev/production symmetry or safely increase debugging data
availability for production as needed.
-- 
Jeremy Stanley


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Jeremy Stanley
On 2017-01-23 21:10:56 +0100 (+0100), Bruno Cornec wrote:
> I'm unable to add myself to the python-redfish-core group I created.
[...]
> So I think I need help from an admin to be able to modify that group.
[...]
> I have the same issue with the other group python-redfish-release
[...]

I have added you as the initial member of both requested groups (as
the creator of the https://review.openstack.org/391593 change which
originally added them). You should now be able to add/remove other
members as needed.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Unable to add myself to the python-redfish-core group I created

2017-01-23 Thread Jeremy Stanley
On 2017-01-23 15:53:59 -0500 (-0500), Paul Belanger wrote:
> On Mon, Jan 23, 2017 at 09:10:56PM +0100, Bruno Cornec wrote:
> > Hello,
> > 
> > I'm unable to add myself to the python-redfish-core group I created.
> > 
> > When using the Web interface at
> > https://review.openstack.org/#/admin/groups/99,members the
> > fields are greyed and I cannot follow the doc at
> > https://review.openstack.org/Documentation/access-control.html
> > to add myself to the group.
> > 
> You cannot self approve yourself to a gerrit group, so in the
> example of trove-core, you need to ask the trove PTL for the
> rights.
[...]

I believe he linked the trove-core group in error. I assumed he
meant https://review.openstack.org/#/admin/groups/1648,members
instead.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] [infra] Pike PTG Etherpad

2017-01-25 Thread Jeremy Stanley
Just a heads-up I've been meaning to send for a while... as
discussed in the last month of Infra meetings we've got a pad here
for people to pitch ideas of things they want to collaborate on in
the Infra team space at the PTG on Monday and Tuesday:

https://etherpad.openstack.org/p/infra-ptg-pike

It's pretty much a free-for-all; as long as there are at least two
people who want to work together on a topic and it's Infra-related
we'll do our best to accommodate. It's also listed with all the
others so you don't need to remember the pad name:

https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

I'm looking forward to seeing lots of you in a few weeks! I and a
number of other Infra team members will be around for the full week
so if there's any related discussions you want to have with your own
teams just give us a heads up and we can try to have someone with
some Infra-sense pop in and help.
-- 
Jeremy Stanley


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] debugging post jobs

2017-01-26 Thread Jeremy Stanley
On 2017-01-26 08:45:23 +0100 (+0100), Andreas Jaeger wrote:
[...]
> Yesterday some CI systems needed a restart due to problems caused by one
> provider AFAIK and that lost the post queue. So, this might not have run
> at all.
> 
> If that change is critical, just push a new one up,

Yep, as most jobs that run in post are noncritical (since the next
change which merges will just run them again anyway) we've not made
much effort to find a way to preserve them when we have to perform
emergency restarts of Zuul. That said, if you have one which
actually _was_ critical we can reenqueue the ref for it into the
post pipeline on request.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] PTG team dinner?

2017-02-09 Thread Jeremy Stanley
On 2017-02-08 12:52:33 -0800 (-0800), Clark Boylan wrote:
[...]
> It was suggested that we could use an etherpad to get a headcount and
> available times for people. I went ahead and quickly put
> https://etherpad.openstack.org/p/2017-atl-ptg-infra-dinner together.
> Please update that if interested and then we can figure out what will
> work using that info.

Thanks for getting this idea rolling! Now I just need to find a
Hawaiian shirt... ;)
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ask.o.o down

2017-02-10 Thread Jeremy Stanley
On 2017-02-10 16:08:51 +0800 (+0800), Tom Fifield wrote:
[...]
> Down again, this time with "Network is unreachable".
[...]

I'm not finding any obvious errors on the server nor relevant
maintenance notices/trouble tickets from the service provider to
explain this. I do see conspicuous gaps in network traffic volume
and system load from ~06:45 to ~08:10 UTC according to cacti:

http://cacti.openstack.org/?tree_id=1&leaf_id=156

Skipping back through previous days I find some similar gaps
starting anywhere from 06:30 to 07:00 and ending between 07:00 and
08:00 but they don't seem to occur every day and I'm not having much
luck finding a pattern. It _is_ conspicuously close to when
/etc/cron.daily scripts get fired from the crontab so might coincide
with log rotation/service restarts? The graphs don't show these gaps
correlating with any spikes in CPU, memory or disk activity so it
doesn't seem to be resource starvation (at least not for any common
resources we're tracking).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] The new openstack.cl...@gmail.com user

2017-02-14 Thread Jeremy Stanley
On 2017-02-14 15:42:53 +0530 (+0530), Amrith Kumar wrote:
[...]
> I believe that to post reviews, one must register and sign a CLA.
[...]

Our OpenStack Individual Contributor License Agreement is about
provenance of source code contributions. Anyone can comment and even
vote on reviews without agreeing to a CLA, as reviewing is not
considered a copyrightable contribution to the source itself (even
though reviewers do include sample source code in their comments
from time to time).
-- 
Jeremy Stanley



[OpenStack-Infra] Foundation mascot and logo treatments for Infra team (final version)

2017-02-14 Thread Jeremy Stanley
As some of you may recall, Heidi Joy Tretheway has been wrangling
the OpenStack Foundation's effort to solicit mascots from each
project team and commission logo artists to produce a consistent set
of treatments which will be used on the foundation's Web properties
and which can also be reused by the community as desired (the
specific licensing is still being worked out, but that is the intent
anyway).

That effort has concluded, and until there's a more official place
to obtain their logo work for all the teams the following URL has
been provided for preliminary access to the various versions and
formats:

https://www.dropbox.com/sh/hjfhn8rev8dxqya/AACFbVuOjSYiSyPHVtsoO1Kda?dl=0

The foundation has printed some stickers with this design which
they'll have for us at the PTG, and will be incorporating it into
the door sign for our room there as well.

Also, Heidi Joy asked me to convey her personal thanks to everyone
who participated in this selection and review process.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] PTG team dinner?

2017-02-16 Thread Jeremy Stanley
On 2017-02-15 14:58:29 -0800 (-0800), Clark Boylan wrote:
[...]
> Alright, with Monty's help we now have a reservation at Poor Calvin's
> for Monday the 20th at 7pm. The reservation is for 14 and under my name,
> Clark Boylan. See you there! (and at the PTG).

Clark and Monty, huge thanks for arranging this for us!
-- 
Jeremy Stanley



Re: [OpenStack-Infra] PTG team dinner?

2017-02-16 Thread Jeremy Stanley
On 2017-02-16 17:02:48 +0100 (+0100), Ricardo Carrillo Cruz wrote:
> Gah, too bad I won't be able to make the PTG, didn't get budget for it.
> 
> Enjoy folks, I'll miss you all!

You'll be missed! I'll make sure there's some good summaries
afterward though, and we'll attempt to coordinate some work in IRC
as well for people who want to chip in remotely on tasks that don't
need much face-to-face bandwidth. I don't know how well it'll work
out in practice, but it's worth a try anyway.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ballot for Openstack elections

2017-02-22 Thread Jeremy Stanley
On 2017-02-22 21:17:40 +0000 (+0000), Henry Fourie wrote:
> I have not received a ballot email for the OpenStack Pike PTL elections.

You would have only received ballots for projects to which you
contributed _if_ there were a runoff election between multiple
candidates. Of the 5 teams who had PTL elections for Pike:

https://governance.openstack.org/election/results/pike/ptl.html#results

I don't see any indication that you contributed to any of their
deliverable repositories:

https://review.openstack.org/#/q/owner:louis.fou...@huawei.com

Can you clarify which of those 5 teams you believe should have
included you in their electorate? The election officials should be
able to double-check the rolls and see if there's any discrepancy.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ballot for Openstack elections

2017-02-22 Thread Jeremy Stanley
On 2017-02-22 22:03:53 +0000 (+0000), Henry Fourie wrote:
> My contributions are to networking-sfc which is part of neutron.

Thanks! It does seem to have been officially part of Neutron at the
time of their PTL election:

http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?h=jan-2017-elections#n2152

And this contribution should have resulted in you being on the roll
for the Neutron PTL election:

https://review.openstack.org/401349

I've Cc'd the election officials so they can double-check their copy
of the rolls.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ballot for Openstack elections

2017-02-22 Thread Jeremy Stanley
On 2017-02-22 22:25:23 +0000 (+0000), Cathy Zhang wrote:
> I am the project lead and core member of networking-sfc project
> which is part of Neutron. I have not received the ballot email for
> the OpenStack Neutron PTL election. Could you add us for future
> election?

Assuming your addresses did appear in the rolls for that election,
is it possible that the huawei.com mailservers rejected E-mail from
c...@cs.cornell.edu (the election polling system we use)?
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ballot for Openstack elections

2017-02-23 Thread Jeremy Stanley
On 2017-02-23 18:37:49 +0000 (+0000), Cathy Zhang wrote:
> I strongly support the proposal to mandate this. To be fair, I
> think TC should mandate this across all projects. In many
> complicated and technically hard commits, co-author does not make
> any less amount of technical contribution to the commit. If just
> the owner is counted, people will start to fight for the ownership
> of a commit which is not healthy for the open source community.
> 
> For my own case, it is well known that I am the initiator and
> project lead of this networking-sfc project and have contributed a
> lot to this project on the technical side and project management
> side. I have done many reviews and approvals in this cycle and
> co-authored quite some commits. It is a surprise to me that
> co-author is not counted as technical contributor in Neutron.

The technical limitations for this in the past have been twofold:

1. Gerrit did not provide a usable API for querying arbitrary
substrings from commit messages.

2. Voters must be foundation individual members and we had no way to
query the foundation member database by contributor E-mail address.

The first is less of an issue in the version of Gerrit we're running
now and the second is a situation I'm collaborating with the
foundation's development team to attempt to resolve. In the
meantime, the solution has been that PTLs should entertain requests
from co-authors to be added to the "extra ATCs" list for their
project. I don't personally have any objection to letting change
co-authors vote in elections, we just don't (yet) have a solution to
be able to automatically verify whether they're authorized to vote
under our bylaws and charter.

Separately, there was a problem back when we used to provide free
conference passes to code contributors, where someone at a company
would submit a punctuation fix to a comment in some project, add
half a dozen of their co-workers as co-authors, and then ask for
free admission for all of them (this really happened). Relying on
PTLs to vet extra ATCs before adding them was how we mitigated this.
Now that we no longer rely directly on code contributions to decide
who should get free/discounted conference admission this issue
should hopefully be purely historical. People seem to be far less
interested in gaming elections than going to conferences (or in some
cases scalping free tickets as a money-making scheme).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Moving DIB to infra

2017-03-16 Thread Jeremy Stanley
On 2017-03-15 15:25:57 -0500 (-0500), Gregory Haynes wrote:
> I wanted to make sure everyone is aware of the intention to move the DIB
> project under the infra project team[1]. Based on the ML responses and
> some discussions with DIB contributors there seems to be a slight
> preference for moving the project under the infra project team and there
> weren't any objections to us doing so.
[...]

I'm not personally opposed to the idea, but do want to make sure
there's ample notice to the rest of the Infra team who might have
missed the earlier thread(s) on the -dev ML in case they have
previously unvoiced concerns. I'd also like to be certain the
current DIB contributors are entirely disinterested in forming a
separate official team in OpenStack as I doubt the TC would reject
such a proposal (I'd happily support it).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [Cyborg][meetbot] Could anyone help to merge the long-approved meetbot patch?

2017-03-19 Thread Jeremy Stanley
On 2017-03-19 18:05:26 +0800 (+0800), Zhipeng Huang wrote:
> It has been another week and the patch is still not merged, should I
> register it somewhere as a reminder, like the repo rename ?
[...]

Sorry about that; I've approved it since there's nothing coming up
soon on the meeting schedule.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Moving DIB to infra

2017-03-27 Thread Jeremy Stanley
On 2017-03-27 12:36:05 -0500 (-0500), Gregory Haynes wrote:
[...]
> OK, now that we've let this topic sit on on both ML's for over a week
> (in addition to all the previous discussions) I think we can safely say
> that anyone who might have had an objection has had enough time to voice
> it.
> 
> It looks like the governance change is moving forward, so the next steps
> seem to be wait for that to merge and then work through the few items on
> the etherpad.

I've added it to the Infra meeting agenda (happens in the hour
before the TC meeting), mainly to provide for last-minute objections
and so that my code review vote on the governance change can
accurately represent the will of the Infra team.
-- 
Jeremy Stanley



[OpenStack-Infra] [infra] lists.openstack.org maintenance Friday, March 31 20:00-23:00 UTC

2017-03-28 Thread Jeremy Stanley
The Mailman listserv on lists.openstack.org will be offline for an
upgrade-related maintenance for up to 3 hours (but hopefully much
less) starting at 20:00 UTC March 31, this coming Friday. This
activity is scheduled for a relatively low-volume period across our
lists; during this time, most messages bound for the server will
queue at the senders' MTAs until the server is back in service and
so should not result in any obvious disruption.

Apologies for cross-posting so widely, but we wanted to make sure
copies of this announcement went to most of our higher-traffic
lists.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [gerrit] Fails to apply the change in project-config to gerrit

2017-04-03 Thread Jeremy Stanley
On 2017-04-03 16:35:58 +0900 (+0900), Masahito MUROI wrote:
> I've pushed the change[1] to the project-config repo and it's already been
> merged. However, the change fails to be applied to the gerrit board. I heard
> the reason for the failure is some bugs that happened in infra.

It looks applied to me at this point. We corrected some recent
regressions introduced by a new caching implementation in the
manage-projects script which applies those ACLs, and it looks like
your change merged on March 10 when this was definitely still a
problem:

https://review.openstack.org/442940

The change seems to have finally been pushed into Gerrit last
Thursday:


https://review.openstack.org/gitweb?p=openstack/blazar.git;a=commitdiff;h=c72744a

...which is when Monty reran our manage-projects script with a
cleared cache:


http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-03-30.log.html#t2017-03-30T15:25:35

> Where should I track or report the failure? I couldn't find the Launchpad.
[...]

Task and defect tracking for Infra deliverables are managed on
storyboard.openstack.org, for example openstack-infra/project-config
is:

https://storyboard.openstack.org/#!/project/731

But in this case the issue is (we think) already solved, so I
wouldn't bother filing a defect report about it at this point.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] [nodepool] unable to get a floating IP

2017-04-03 Thread Jeremy Stanley
On 2017-04-03 17:21:24 +0200 (+0200), Alex Burmashev wrote:
> Yeah, I thought about it, but I think automatic IP assignment is only
> supported with nova-network, and I use neutron.
> 
> Moreover, at least for some time nodepool definitely was assigning IPs to
> the VMs it is starting; it is mentioned in docs, mailing lists and IRC
> discussions. Maybe at some moment automatic floating IP assignment on a
> cloud became a must-have, but it is not mentioned in the nodepool/OpenStack
> CI docs anywhere...
[...]

It's nuanced and probably confusing to track down the appropriate
documentation. Since nodepool uses os-client-config to infer network
needs from your provider, your clouds.yaml file may need to
explicitly set nat_destination on one of your networks (for example
if none or perhaps more than one have gateway_ip set in their
metadata):

https://docs.openstack.org/developer/os-client-config/network-config.html

The os-client-config library has a bunch of default profiles for
popular public OpenStack providers, but if you've built your own or
are using one it doesn't know about yet it's possible there's a
difference in configured behavior from what it expects to be able to
guess when and where to add floating IPs.
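A minimal sketch of what such an explicit hint might look like in clouds.yaml (the cloud and network names here are illustrative, not from the original thread; the linked network-config document is the authoritative reference for these keys):

```yaml
clouds:
  mycloud:
    networks:
      # Tell os-client-config which network floating IPs should be
      # attached through when it can't infer this from network metadata.
      - name: private
        nat_destination: true
      - name: public
        routes_externally: true
```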
-- 
Jeremy Stanley


Re: [OpenStack-Infra] [nodepool] unable to get a floating IP

2017-04-03 Thread Jeremy Stanley
On 2017-04-03 19:32:42 +0200 (+0200), Alex Burmashev wrote:
[...]
> Now I have to figure out why a private IP is considered public...
[...]

There are a number of potential reasons, and the logic behind that
decision is encapsulated in the shade.meta.get_server_external_v4()
function:

http://git.openstack.org/cgit/openstack-infra/shade/tree/shade/meta.py?h=1.19.0#n113

-- 
Jeremy Stanley


[OpenStack-Infra] [infra] Gerrit maintenance Friday, April 21, 20:00-21:00 UTC

2017-04-19 Thread Jeremy Stanley
The Infra team will be taking the Gerrit service on
review.openstack.org offline briefly between 20:00 and 21:00 UTC
this Friday, April 21 to perform some pending renames of Git
repositories. We typically also take down the Zuul scheduler for our
CI system at the same time to avoid unfortunate mishaps (and
reenqueue testing for any active changes once we're done).

The actual downtime shouldn't span more than a few minutes since
most of the work can now happen with our systems up and running, but
replication to git.openstack.org and github.com will lag while
Gerrit is reindexing so any activities sensitive to that (such as
approving new release tags) should be performed either prior to the
start of the maintenance window or not until after midnight UTC just
to err on the side of caution.

As always, feel free to reply to this announcement, reach out to
us on the openstack-infra@lists.openstack.org mailing list or in the
#openstack-infra IRC channel on Freenode if you have any questions.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Removal of infra-root shell accounts from nodepool DIBs

2017-04-19 Thread Jeremy Stanley
On 2017-04-19 17:49:13 -0400 (-0400), Paul Belanger wrote:
[...]
> We'll now be using ansible-role-cloud-launcher[2] to populate the
> infra-root-keys keypair for all our clouds. This means that glean will then
> inject our keypairs into the authorized_keys file for the root user.
> 
> One step closer to dropping puppet from our image build process.

Also, this brings the images we're using much closer to potential
reusability outside our CI system since people no longer need to
doctor them to remove our default admin access.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] New IRC channel for Scientific WG

2017-04-19 Thread Jeremy Stanley
On 2017-04-19 17:42:38 +0100 (+0100), Stig Telfer wrote:
> A lot of our WG’s benefit is as a forum for information sharing,
> and some members would like to chat outside of the AOB slots in
> our meetings.
> 
> Would it be possible to create an IRC channel for our WG, and if
> so what should I do to see it through?

The basic commands you need for channel registration are documented
here: https://docs.openstack.org/infra/system-config/irc.html#access
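From memory, registration boils down to a couple of ChanServ commands along these lines (the channel name is illustrative, and the exact access flags to grant the openstackinfra account are spelled out in the linked document, so treat this as a sketch rather than the canonical recipe):

```
/msg ChanServ register #scientific-wg
/msg ChanServ flags #scientific-wg openstackinfra +AFRfiorstv
```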

After that, you'll want to propose a change to add the channel to
our "accessbot" configuration we use for general access
normalization across official OpenStack IRC channels:
https://docs.openstack.org/infra/system-config/irc.html#accessbot

And another change adding it to our "meetbot" config to enable
channel logging:
https://docs.openstack.org/infra/system-config/irc.html#logging

If you need any help proposing those changes into Gerrit, we're easy
to find in the #openstack-infra channel on Freenode too and always
happy to help out.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Problem with OpenStack bot? Vitrage meeting was not recorded

2017-04-20 Thread Jeremy Stanley
On 2017-04-20 10:37:27 +0000 (+0000), Jens Rosenboom wrote:
[...]
> It seems that gerritbot for some unknown reason saw and recorded the
> "#endmeeting", but did not act on it.

For one of the most common reasons, in fact: the #startmeeting was
issued by ifat_afek_ but the #endmeeting by ifat_afek (no trailing
underscore), who was not a meeting chair. The meetbot is not
designed to automatically track nick changes over the course of a
meeting.

> The meeting was ended a couple of hours later when someone tried
> to start a new meeting and the bot advised to do "#endmeeting"
> first.
[...]

After 60 minutes from the time of the #startmeeting, the meetbot
will accept an #endmeeting from anyone, even if they're not chairing.
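
One way to guard against the nick-change pitfall is to add the likely nick variants as co-chairs up front with the #chair command (the meeting name and nicks below are illustrative):

```
#startmeeting vitrage
#chair ifat_afek ifat_afek_
#topic agenda review
#endmeeting
```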
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-04-27 Thread Jeremy Stanley
On 2017-04-27 20:47:58 -0400 (-0400), Paul Belanger wrote:
[...]
> Please take a moment to reply, and which day may be better for you.

Sunday: Yes
Monday: Yes
Tuesday: No
Wednesday: Yes
Thursday: Yes

> And, if you have a restaurant in mind, please share.

There are so many great places to eat, even if we just limit it to
the Back Bay and Fenway vicinity... but I'm guessing we have locals
who can make better suggestions than I could ever hope to provide.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-04-29 Thread Jeremy Stanley
On 2017-04-29 08:29:58 +0200 (+0200), Yolanda Robla Mota wrote:
> Unfortunately I won't be coming to the summit this time... Enjoy!
[...]

We'll miss you!
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-05 Thread Jeremy Stanley
On 2017-05-04 11:30:57 -0400 (-0400), Paul Belanger wrote:
[...]
> date and time is 8:00pm on Monday for http://thesaltypig.com/.
> 
> I suggest maybe we meet at the summit mixer and walk over to the
> restaurant together.
[...]

Sounds great--thanks for organizing!
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Boston 2017 Summit dinner

2017-05-09 Thread Jeremy Stanley
On 2017-05-09 10:16:28 -0400 (-0400), Paul Belanger wrote:
> Thanks to everybody who turned out last night. Apologies we had to
> split off the some people from the main table. Hopefully everybody
> still had an awesome time!

It was awesome, thanks for setting it up (and thanks to RH and OSF
for picking up the tab)!
-- 
Jeremy Stanley


Re: [OpenStack-Infra] There is a problem while following Project creator's guide

2017-05-10 Thread Jeremy Stanley
On 2017-05-09 17:18:48 -0400 (-0400), Jea-Min Lim wrote:
[...]
> This doc says 'Once you have PyPI credentials visit
> https://pypi.python.org/pypi?%3Aaction=submit_form and fill in only the
> required fields.'
> 
> But I can't see 'the required fields.'
[...]

I have a pending update for that section of the document which
should correct it once approved:

https://review.openstack.org/461498

See if the updated text in that patch works for you.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] There is a problem while following Project creator's guide

2017-05-11 Thread Jeremy Stanley
On 2017-05-11 15:23:07 -0400 (-0400), Doug Hellmann wrote:
[...]
> Oh, joy. So someone has to actually upload a release before they can
> give openstackci permission to do the same?

The PKG-INFO upload Web form works without a complete package, and
that way you can still grant permission on it.
-- 
Jeremy Stanley


Re: [OpenStack-Infra] [openstack-infra][jenkins-job-builder] Jenkins-job-builder stable release

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 09:37:35 +0200 (+0200), Ben Fox-Moore wrote:
> Is the 2.0.0.0b2 release of JJB considered a stable release? PyPI has
> been updated to reference it, but there's still a branch labelled
> 'stable' which suggests that master isn't.

PyPI is a bit misleading here. It can either be configured to
display _all_ releases by default (allowing you to manually go
through and "hide" things like beta releases) or it can be
configured to display only the most recent upload and hide all
earlier uploads (what we have set currently). This is suboptimal in
the case of non-chronological release ordering, e.g. when releasing
different series from different branches.

In this case the JJB dev team are working toward a 2.0.0 release
from the master branch which brings in a lot of new features and
also some backward incompatibilities (hence the major version bump a
la SemVer), of which the 2.0.0.0b2 release is the most recent
beta-test version you can try out. Because of the lengthy beta
period, it was necessary to backport some critical fixes to the last
production release (1.6.0) without introducing new backward
incompatibilities so the stable/1.6 branch was created and
subsequent 1.6.x releases tagged there (1.6.2 reflecting the current
state of that branch).

> If it's not, could a new release be made? There have been a lot of
> changes since 1.6.2.

If there are critical bug fixes you need on top of 1.6.2 we can try
to get those backported and a new 1.6.3 release tagged to
incorporate them. If you need new features only present in master
slated for the coming 2.0.0 release and don't need to worry about
upgrading existing configuration to accommodate the changes it's
bringing, then it's probably okay to try the beta versions (it looks
like there's been quite a few new commits in master over the past
couple months, so maybe it's time for either 2.0.0.0b3 or even
2.0.0.0rc1?).

All that said, I'm not really an active developer on JJB so the dev
team members on it may have other/better advice for you once they
see this ML thread.
-- 
Jeremy Stanley



[OpenStack-Infra] [infra] lists.openstack.org maintenance Friday, May 26 20:00-21:00 UTC

2017-05-24 Thread Jeremy Stanley
The Mailman listserv on lists.openstack.org will be offline for an
archive-related maintenance for up to an hour starting at 20:00 UTC
May 26, this coming Friday. This activity is scheduled for a
relatively low-volume period across our lists; during this time,
most messages bound for the server will queue at the senders' MTAs
until the server is back in service and so should not result in any
obvious disruption.

Apologies for cross-posting so widely, but we wanted to make sure
copies of this announcement went to most of our higher-traffic
lists.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-05-24 Thread Jeremy Stanley
On 2017-05-24 16:04:20 -0700 (-0700), James E. Blair wrote:
[...]
> How does that sound?

I ain't afraid of no ghosts. Let's do it!
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-05-24 Thread Jeremy Stanley
On 2017-05-25 00:33:10 +0000 (+0000), Tristan Cacqueray wrote:
> On May 24, 2017 11:04 pm, James E. Blair wrote:
> [...]
> >How does that sound?
> 
> Thinking about further connections support, could this also work
> for a (theoretical) mail-based patch cross-dependency?

I can imagine wanting something like:

Depends-On: https://lkml.org/lkml/diff/2017/5/24/668/1

...where zuul will git am what it finds there.
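
The same URL-style footer would cover today's connections as well; a commit message under the proposed syntax might look something like this (the change and PR numbers are made up for illustration):

```
Add frobnicator support

Depends-On: https://review.openstack.org/#/c/123456/
Depends-On: https://github.com/example/project/pull/42
```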

> What's the logic to match the Depends-On syntax to a connection
> driver?

This brings up an additional question: how far can we stretch the
concept of a connection driver, and does it always need to be
something we can report back on in some way? Maybe we want to test a
Gerrit change or GitHub PR which requires a patch from the LKML (as
above) when rebuilding the kernel for a guest, but we don't ever
expect to report back to the LKML about anything. Would having a
generic HTTP(S) driver for things like this make sense?
-- 
Jeremy Stanley


Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-05-25 Thread Jeremy Stanley
On 2017-05-25 08:50:14 -0500 (-0500), Kevin L. Mitchell wrote:
> Can I suggest that, for OpenStack purposes, we also deploy some sort of
> bot that comments on reviews using the old syntax, to at least alert
> developers to the pending deprecation?  If it had the smarts to guess
> URLs to place in the Depends-On footer, that'd be even better.

That's pretty doable as a Gerrit hook or standalone event stream
consuming daemon. Pretty low-hanging fruit if anyone wants to
volunteer to code that up.
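As a rough sketch of the detection half of such a bot (the footer pattern, helper name, and sample message are my own, not an existing implementation; a real daemon would feed it commit messages parsed from the JSON emitted by ssh -p 29418 review.openstack.org gerrit stream-events and post a review comment per match):

```python
import re

# Old-style Depends-On footers reference a Gerrit Change-Id (an "I"
# followed by 40 hex digits) rather than a review URL.
LEGACY_DEPENDS_ON = re.compile(
    r'^Depends-On:\s*(I[0-9a-f]{40})\s*$',
    re.IGNORECASE | re.MULTILINE)


def find_legacy_depends_on(commit_message):
    """Return the Change-Ids referenced by old-style Depends-On footers."""
    return LEGACY_DEPENDS_ON.findall(commit_message)


if __name__ == '__main__':
    message = (
        'Add frobnicator support\n'
        '\n'
        'Depends-On: I6c29d1f3b4a2d8e9f0a1b2c3d4e5f60718293a4b\n'
        'Change-Id: I0123456789abcdef0123456789abcdef01234567\n'
    )
    # A bot would leave a deprecation-warning comment for each match;
    # here we just print what it found.
    print(find_legacy_depends_on(message))
```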

Alternative/complementary idea, Gerrit hooks can also be used to
reject uploads, so when the time comes to stop supporting the old
syntax we can also see about rejecting new patchsets which are using
the then-unsupported format (as long as the error can be clearly
passed through the likes of git-review so users aren't too
confused).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-05-25 Thread Jeremy Stanley
On 2017-05-25 08:10:33 -0700 (-0700), James E. Blair wrote:
> Jeremy Stanley  writes:
[...]
> > Alternative/complementary idea, Gerrit hooks can also be used to
> > reject uploads, so when the time comes to stop supporting the old
> > syntax we can also see about rejecting new patchsets which are using
> > the then-unsupported format (as long as the error can be clearly
> > passed through the likes of git-review so users aren't too
> > confused).
> 
> Yes, though it's also possible we may want to have Zuul itself leave
> such messages.

True, if there's no concern about extending Zuul to have a new
failure mode which reports back to Gerrit.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Space for Sahara artifacts (disk images)

2017-05-26 Thread Jeremy Stanley
On 2017-04-28 14:39:51 +0200 (+0200), Luigi Toscano wrote:
> The Sahara project has been providing pre-built images containing
> the Hadoop/ Spark/$bigdata frameworks since the beginning of the
> project, so that users can be immediately productive.
> 
> The generated qcow2 images have been living so far here:
> http://sahara-files.mirantis.com/images/upstream/
> 
> As a team we were wondering whether we could store those images on
> some shared and publicly accessible space on openstack.org (like
> tarballs.openstack.org).
> 
> I guess that the main concern could be the disk usage. Currently
> the space used for the older releases (from kilo to newton) is
> around ~110GB. The estimate for Ocata is ~35GB and the number is
> going to grow. Of course we can drop old images when a certain
> release reaches its end-of- life (unless there is a place to store
> some archived artifacts).
> 
> About the update frequency: the images are currently rebuilt with
> every commit in sahara-image-elements (and soon in sahara
> with a different build method) by the tests. I don't think that we
> would need to update the images in this stored space with every
> commit, but at most once every month or, even better, when a new
> release of sahara-image-elements is tagged.
> 
> Please note that we already store some artifacts on
> tarballs.openstack.org, even if their size is definitely not
> the same as that of those disk images.
> https://review.openstack.org/#/c/175395/
> https://review.openstack.org/#/c/367271/
> 
> To summarize: would it be possible for us to use some shared
> space, and if yes, which are the conditions?

Apologies for the delay in responding. We don't currently have
sufficient free space to store the quantity of data you're talking
about (some projects like Ironic, Trove and Kolla do or have in the
past uploaded guest images there, but those are far fewer and much
smaller than what you're requesting). We can see about extending the
available free space for tarballs.openstack.org after we relocate it
off the same server where we store our job logs, but we don't have a
timeline for when that will be. I'm sorry I don't have better news
on that front.

On a separate note, what degree of security support is being
provided for those images (as far as known vulnerabilities in
non-OpenStack software aggregated within them)? There is still some
concern expressed within the community around producing images of
this nature for any purpose other than use within our CI system, in
which case long-term archiving of release versions of those images
is unnecessary. If you need a periodic job to upload and replace
pregenerated branch-tip snapshot images for consumption by other CI
jobs, we should be able to work something out pretty easily (that's
what the other projects I mentioned above have been doing).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Space for Sahara artifacts (disk images)

2017-05-26 Thread Jeremy Stanley
On 2017-05-26 12:34:44 +0400 (+0400), Evgeny Sikachev wrote:
> I found the project which is using tarballs for storing images.
> https://tarballs.openstack.org/trove/images/
> 
> We would like to use the same space for storing sahara-images if
> it is possible.
[...]

It sounded from the previous E-mail like the interest was in
long-term publication of release images, while what you're linking
there are periodically refreshed branch-tip snapshots for
consumption within CI jobs (technically the Trove team has ceased
using those, but Ironic and Kolla are still doing something similar
to that). The use cases, storage needs and security/support concerns
are vastly different.
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul v3: proposed new Depends-On syntax

2017-05-29 Thread Jeremy Stanley
On 2017-05-29 13:34:47 +1000 (+1000), Joshua Hesketh wrote:
[...]
> We could extend the 'start message' of zuul to explain what it is
> about to do. eg: "Testing change XYZ against $branch, with change
> XYZ applied to $project $branch, with..." etc.

This would be hard (I think for definitions of hard roughly equal to
impossible) for dependent pipelines, since the non-declared
dependencies which will be tested in the queue can't be guessed in
advance (some may be kicked out, some may come between declared
dependencies, et cetera).

> This could alternatively go into a "state" file (of sorts) in the
> zuul logs/output. ie, summarise what the zuul-cloner did into
> simple terms that doesn't involve large logs.

I'm assuming you mean zuul-merger unless zuul-cloner is growing to
replace zuul-merger in v3 or something? Anyway, I agree a general
logset per changeish might be nice in addition to per-job logs and
this information might fit well somewhere there (or copied into each
build result).
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Zuul V3: Behavior of change related requirements on push like events

2017-05-30 Thread Jeremy Stanley
On 2017-05-30 12:53:15 -0700 (-0700), Jesse Keating wrote:
[...]
> Github labels: This is like approvals/reviews.
[...]

Perhaps an interesting aside, Gerrit uses the same term (labels) for
how we're doing approvals and review voting.

> Personally, my opinions are that to avoid confusion, change type
> requirements should always fail on push type events. This means
> open, current-patchset, approvals, reviews, labels, and maybe
> status requirements would all fail to match a pipeline for a push
> type event. It's the least ambiguous, and promotes the practice of
> creating a separate pipeline for push like events from change like
> events. I welcome other opinions!

This seems like a reasonable conclusion to me.
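That separation can be expressed directly in a Zuul v3 layout; here is a hedged sketch (attribute spellings follow my recollection of the v3 GitHub driver documentation and are illustrative only): change-style requirements live in a pipeline triggered by pull request events, while push-style events get their own pipeline with no such requirements.

```yaml
# Illustrative Zuul v3 pipelines: change-like vs. push-like events.
- pipeline:
    name: check
    manager: independent
    require:
      github:
        open: true
        current-patchset: true
    trigger:
      github:
        - event: pull_request
          action: opened

- pipeline:
    name: post
    manager: independent
    trigger:
      github:
        - event: push
          ref: ^refs/heads/.*$
```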
-- 
Jeremy Stanley

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Space for Sahara artifacts (disk images)

2017-06-09 Thread Jeremy Stanley
On 2017-05-30 12:49:37 +0200 (+0200), Luigi Toscano wrote:
> On Friday, 26 May 2017 15:27:02 CEST Jeremy Stanley wrote:
[...]
> > We can see about extending the available free space for
> > tarballs.openstack.org after we relocate it off the same server
> > where we store our job logs, but we don't have a timeline for
> > when that will be. I'm sorry I don't have better news on that
> > front.
> 
> It wouldn't be a problem to wait a bit. Even if you don't have a
> timeline, do you know the approximate timing? Is it more like 6
> months, one year, or more?
[...]

Aside from it being something we've talked about probably wanting
(putting tarballs content in AFS and being able to present a local
cache to workers in each provider/region we use for our CI system),
I don't think it's on anyone's radar yet from an execution
perspective. We've had more recent discussions about relocating our
logs.openstack.org site off that same server and into another
service provider... doing that would also free up plenty of space
for growing the current tarballs site and may well happen sooner
(though again I have no solid timeline for that work, we need a spec
for things like the 404->301 forwarding configuration on whichever
end has the DNS record while waiting out the retention period).
Rough guess is at least a few months given our current team
throughput and priorities.

> > If you need a periodic job to upload and replace pregenerated
> > branch-tip snapshot images for consumption by other CI jobs, we
> > should be able to work something out pretty easily (that's what
> > the other projects I mentioned above have been doing).
> 
> This is not the use case described in my original request.
> Nevertheless, it could be useful for some of our scenario jobs.
> But wouldn't this be constrained by the lack of space as well?

Based on the numbers you gave, we could fairly confidently provide
sufficient space for images built from the tips of supported
branches since they would be replaced rather than accumulating new
ones for each tag. Hosting image sets for every point release
requires a fair amount more available space by comparison.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-09 Thread Jeremy Stanley
On 2017-06-07 14:26:10 +0800 (+0800), Xinliang Liu wrote:
[...]
> we already have our own pre-built debian cloud image, could I just
> use it and not use the one built by diskimage-builder?
[...]

The short answer is that nodepool doesn't currently have support for
directly using an image provided independent of its own image build
process. Clark was suggesting[*] in IRC today that it might be
possible to inject records into ZooKeeper (acting as a "fake"
nodepool-builder daemon, basically) to accomplish this, but nobody
has yet implemented such a solution to our knowledge.

Longer term, I think we do want a feature in nodepool to be able to
specify the ID of a prebuilt image for a label/provider (at least we
discussed that we wouldn't reject the idea if someone proposed a
suitable implementation). Just be aware that nodepool's use of
diskimage-builder to regularly rebuild images is intentional and
useful since it ensures images are updated with the latest packages,
kernels, warm caches and whatever else you specify in your elements
so reducing job runtimes as they spend less effort updating these
things on every run.

[*] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-06-09.log.html#t2017-06-09T15:32:27-2
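For comparison, the rebuild workflow described above is configured through nodepool's diskimage definitions; the fragment below is a hypothetical nodepool.yaml snippet (label, element and option names are illustrative and may differ between nodepool versions):

```yaml
# Illustrative nodepool.yaml snippet: a DIB-built image refreshed
# regularly so jobs start with current packages and warm caches.
diskimages:
  - name: debian-arm64
    elements:
      - debian-minimal
      - nodepool-base
      - cache-devstack
    release: stretch
    rebuild-age: 86400   # seconds; rebuild at most daily
    env-vars:
      DIB_ARCH: arm64
```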
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Request to EOL Puppet OpenStack Mitaka

2017-06-16 Thread Jeremy Stanley
On 2017-06-14 08:18:38 -0400 (-0400), Emilien Macchi wrote:
> On behalf of Puppet OpenStack team, I would like to request a
> mitaka-eol tag and a stable/mitaka branch removal for the following
> git repos:
[...]

Is tagging not going to be handled by the Puppet team or Stable
Branch team? At the moment (at least until we upgrade Gerrit again)
the Infra team still needs to delete branches, but ACLs for pushing
signed tags can certainly be added however you need.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Request to EOL Puppet OpenStack Mitaka

2017-06-16 Thread Jeremy Stanley
On 2017-06-16 09:58:44 -0400 (-0400), Emilien Macchi wrote:
[...]
> On Wed, Jun 14, 2017 at 8:18 AM, Emilien Macchi  wrote:
[...]
> > puppet-aodh
> > puppet-ceilometer
> > puppet-cinder
> > puppet-designate
> > puppet-glance
> > puppet-gnocchi
> > puppet-heat
> > puppet-horizon
> > puppet-ironic
> > puppet-keystone
> > puppet-manila
> > puppet-mistral
> > puppet-murano
> > puppet-neutron
> > puppet-nova
> > puppet-openstack_extras
> > puppet-openstacklib
> > puppet-sahara
> > puppet-swift
> > puppet-tempest
> > puppet-trove
> > puppet-vswitch
> > puppet-zaqar
[...]

It looks like these are covered by Tony's request to the dev ML:

  http://lists.openstack.org/pipermail/openstack-dev/2017-June/118473.html

Please confirm you don't need any adjustments there.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Planet feed on the blink

2017-06-20 Thread Jeremy Stanley
On 2017-06-20 18:07:54 +0100 (+0100), Stig Telfer wrote:
> Can anyone help me with restoring our blog feed on
> planet.openstack.org?  Our blog ("StackHPC team blog") is not
> getting syndicated.  In the planet.openstack.org page source, it's
> tagged with "internal server error" - is that something we can fix
> or the result of a transient outage, or…?

It appears that planet is unable to connect to the HTTPS URL you've
supplied because https://www.stackhpc.com/ is using an X.509 cert
issued by "Let's Encrypt Authority X3" but is not supplying an
appropriate certificate chain up to a well-known authority trusted
by Ubuntu 16.04 (note some browsers, e.g. recent Firefox releases,
may include that cert directly in their trust set but many
command-line tools like wget/curl or other browsers still may not):

https://www.ssllabs.com/ssltest/analyze.html?d=www.stackhpc.com

"This server's certificate chain is incomplete."

You likely need to configure your server to append the active
intermediate CA certificates linked at:

https://letsencrypt.org/certificates/
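One common fix, shown here as a hedged Apache sketch (the file paths and vhost details are assumptions, not StackHPC's actual configuration), is to serve the intermediate certificate alongside the leaf:

```apache
# Illustrative Apache 2.4 vhost fragment: supply the Let's Encrypt
# intermediate so clients lacking "Let's Encrypt Authority X3" in
# their trust store can still build a path to a trusted root.
<VirtualHost *:443>
    ServerName www.stackhpc.com
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/stackhpc.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/stackhpc.com.key
    # Needed on Apache < 2.4.8; newer releases read intermediates
    # concatenated into SSLCertificateFile instead.
    SSLCertificateChainFile /etc/ssl/certs/lets-encrypt-x3.pem
</VirtualHost>
```

After reloading, something like `openssl s_client -connect www.stackhpc.com:443 -showcerts` should display the full chain without verification errors.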

> It seems like there are 26 blog feeds currently in this state
> (ours has been like it for a few weeks at least).

I haven't checked them all exhaustively (if someone wants to
volunteer to clean up the planet config I'm happy to supply a copy
of the log from the latest run to aid in that effort), but among the
many HTTP not-found, database/internal server error responses, DNS
no-such-host and TCP connection timeout failures I have also found a
few more with similar HTTPS misconfigurations (though none so far
with certs issued by the same CA as yours).

> Is this a known issue, and what needs doing to fix it?

I would classify missing chain certs as a known issue, but one
you'll need to address on your end. Alternatively, you could switch
to using an http:// scheme in the planet config for your
syndication since you're apparently not unilaterally redirecting all
HTTP requests to HTTPS.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[OpenStack-Infra] Puppet 4, beaker jobs and the future of our config management

2017-06-20 Thread Jeremy Stanley
A couple weeks ago during our June 6 Infra team meeting,
discussion[1] about the state of our Ansible Puppet Apply spec[2]
morphed into concerns over the languishing state of our Beaker-based
Puppet module integration test jobs, work needed to prepare for
Puppet 4 now that Puppet 3 is EOL upstream[3] for the past 6 months,
and the emergence of several possibly competing/conflicting approved
and proposed Infra specs:

  * Puppet Module Functional Testing[4]
  * Puppet 4 Preliminary Testing[5]
  * Rename and expand Puppet 4 Preliminary Testing[6]
  * Ansiblify control plane[7]

As the discussion evolved, unanswered questions were raised:

  1. What are we going to do to restore public reporting?

  2. Should we push forward with the changes needed to address
 bitrot on the nonvoting Beaker-based integration jobs so we can
 start enforcing them on new changes to all our modules?

  3. Is the effort involved in upfitting our existing modules to
     Puppet 4 worthwhile compared to trying to replace Puppet with
     Ansible (a likely contentious debate lurking here), which
     might attract more developer/reviewer focus and interest?

The meeting was neither long enough nor an appropriate venue for
deciding these things, so I agreed to start a thread here on the ML
where we might be able to hash out our position on them a little
more effectively and inclusive of the wider community involved.
Everyone with a vested interest is welcome to weigh in, of course.

[1] 
http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-06-19.03.log.html#l-24
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/ansible_puppet_apply.html
[3] https://voxpupuli.org/blog/2016/12/22/putting-down-puppet-3/
[4] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet-module-functional-testing.html
[5] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/puppet_4_prelim_testing.html
[6] https://review.openstack.org/449933
[7] https://review.openstack.org/469983
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Request to EOL Puppet OpenStack Mitaka

2017-06-21 Thread Jeremy Stanley
On 2017-06-21 09:03:47 -0400 (-0400), Emilien Macchi wrote:
[...]
> Can anyone have a look please?
> We would like to EOL Puppet OpenStack Mitaka.

It's not forgotten; Infra's just spread thin, and some weeks
(especially when lots of people go to conferences) it's about all we
can do to keep the lights on. It is, unfortunately, not atypical for
larger non-urgent requests like this to be deprioritized and to take
more than a week to address.

Making matters worse, I'll be mostly (if not completely) away from
the computer for the next five days, so unless someone else gets to
it before then I can't really commit to it until I'm around again
either.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] OpenStack jobs board not posting new submissions

2017-07-10 Thread Jeremy Stanley
On 2017-07-10 10:30:01 +0100 (+0100), Stig Telfer wrote:
> About a week ago we submitted a new job to www.openstack.org/jobs
> and it hasn’t yet been posted online.  It appears the last job to
> get posted was 21st June.
> 
> Is someone able to check what’s going on with the jobs board, or
> is this not in your domain?

The www.openstack.org site is not under the care of the community
infrastructure sysadmins. You can notify the OpenStack Foundation
Web devs by filing a bug report here:

https://bugs.launchpad.net/openstack-org/+filebug

-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Add member to fuel-plugin-fortinet-core group

2017-07-13 Thread Jeremy Stanley
On 2017-07-12 13:34:51 -0700 (-0700), Jerry Zhao wrote:
> Could you please add me in  fuel-plugin-fortinet-core group. I am the
> bootstrapper of the project so hopefully in fuel-plugin-fortinet-release as
> well.
[...]

Seems this was added by https://review.openstack.org/326091 for
which you were the change author, so this request is sufficient.

I have now added you to both groups.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] IMPORTANT upcoming change to technical elections

2017-07-17 Thread Jeremy Stanley
I just posted an announcement to openstack-dev which is also
relevant to some subscribers of this ML who may not see it there:

http://lists.openstack.org/pipermail/openstack-dev/2017-July/119786.html

If possible, please follow up there with any questions.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] [infra][nova] Corrupt nova-specs repo

2017-07-17 Thread Jeremy Stanley
On 2017-06-30 16:11:42 +1000 (+1000), Ian Wienand wrote:
> Unfortunately it seems the nova-specs repo has undergone some
> corruption, currently manifesting itself in an inability to be pushed
> to github for replication.
[...]
> So you may notice this is refs/changes/26/463526/[2-9]
> 
> Just deleting these refs and expiring the objects might be the easiest
> way to go here, and seems to get things purged and fix up fsck
[...]

This plan seems reasonable to me. I can't personally think of any
alternatives and if someone else here knows of some arcane git
repair wizardry you haven't tried, they haven't chimed in to suggest
it either.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
