Re: [Twisted-Python] Twisted 16.3.1 Prerelease 1 Announcement

2016-08-15 Thread Tristan Seligmann
On Wed, 10 Aug 2016 at 14:48 Amber "Hawkie" Brown 
wrote:

> I've just uploaded the first prerelease of Twisted 16.3.1, a security &
> critical bug fix release of the 16.3 series. It contains:
>

FWIW my $DAYJOB test suite passes against 16.3.1rc1.
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] Twisted 16.4.0 Prerelease 1 Announcement

2016-08-15 Thread Tristan Seligmann
On Sat, 13 Aug 2016 at 08:19 Amber "Hawkie" Brown 
wrote:

> In a rare Twisted release double feature right after 16.3.1, I bring you
> the first prerelease of Twisted 16.4.0.
>

FWIW my $DAYJOB test suite passes against 16.4.0rc1.
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Jean-Paul Calderone
On Sun, Aug 14, 2016 at 7:10 PM, Glyph Lefkowitz 
wrote:

>
> > On Aug 14, 2016, at 3:38 AM, Adi Roiban  wrote:
> >
> > If you think that we can raise $6000 per year for sponsoring our
> > Travis-CI and that is worth increasing the queue size I can follow up
> > with Travis-CI.
>
> I think that this is definitely worth doing.
>

How might we guess this would compare to $6000/year of development effort
spent on speeding up the test suite?

Jean-Paul
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Adi Roiban
On 15 August 2016 at 00:10, Glyph Lefkowitz  wrote:
>
>> On Aug 14, 2016, at 3:38 AM, Adi Roiban  wrote:
>>
>> Hi,
>>
>> We now have 5 concurrent jobs on Travis-CI for the whole Twisted 
>> organization.
>>
>> If we want to reduce the waste of running push tests for a PR we
>> should check that the other repos from the Twisted organization are
>> doing the same.
>>
>> We now have 9 jobs per build in twisted/twisted ... and for each
>> push to a PR ... we run the tests for the push and for the PR merge...
>> so that is 18 jobs per commit.
>>
>> twisted/mantissa has 7 jobs per build, twisted/epsilon 3 jobs per
>> build, twisted/nevow 14 jobs, twisted/axiom 6 jobs, twisted/txmongo 16
>> jobs
>>
>>  so we are a bit over the limit of 5 jobs
>
> Well, we're not "over the limit".  It's just 5 concurrent.  Most of the 
> projects that I work on have more than 5 entries in their build matrix.
>
>> I have asked Travis-CI how we can improve the waiting time for
>> twisted/twisted jobs and for $6000 per year they can give us 15
>> concurrent jobs for the Twisted organization.
>>
>> This will not give us access to a faster waiting line for the OSX jobs.
>>
>> Also, I don't think that we can have twisted/twisted take priority
>> inside the organization.
>>
>> If you think that we can raise $6000 per year for sponsoring our
>> Travis-CI and that is worth increasing the queue size I can follow up
>> with Travis-CI.
>
> I think that this is definitely worth doing.

Do we have the budget for this, or do we need to do a fundraising drive?
Can The Software Freedom Conservancy handle the payment for Travis-CI?

Even if we speed up the build time, with 5 concurrent jobs we would still
have only about 0.5 complete builds in flight ... or even less.
So I think that increasing to 15 jobs would be needed anyway.
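
(As a rough back-of-the-envelope check of those numbers -- the job counts are
the ones quoted above, and treating "builds in flight" as simply concurrent
slots divided by jobs per build is an assumption about the scheduler, not a
measurement:)

    # Rough estimate of how many complete twisted/twisted builds can run at
    # once on Travis-CI, given a fixed number of concurrent job slots.
    concurrent_slots = 5   # current limit for the whole organization
    jobs_per_build = 9     # entries in the twisted/twisted build matrix
    duplication = 2        # push build + PR-merge build per commit

    print("single build:  ~{:.2f} builds in flight".format(
        concurrent_slots / float(jobs_per_build)))                 # ~0.56
    print("push + PR:     ~{:.2f} builds in flight".format(
        concurrent_slots / float(jobs_per_build * duplication)))   # ~0.28
    print("with 15 slots: ~{:.2f} builds in flight".format(
        15 / float(jobs_per_build)))                               # ~1.67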

>> I have also asked Circle CI for a free ride on their OSX builders, but
>> it was put on hold as Glyph told me that Circle CI is slower than
>> Travis.
>>
>> I have never used Circle CI. If you have a good experience with OSX on
>> Circle CI I can continue the phone interview with Circle CI so that we
>> get the free access and see how it goes.
>
> The reason I'm opposed to Circle is simply that their idiom for creating a 
> build matrix is less parallelism-friendly than Travis.  Travis is also more 
> popular, so more contributors will be interested in participating.
>

OK. No problem. Thanks for the feedback.
I also would like to have fewer providers, as we already have
buildbot/travis/appveyor :)

My push is for a single provider -> buildbot ... but I am aware that
it might not be feasible.

>> There are multiple ways in which we can improve the time a test takes
>> to run on Travis-CI, but it will never be faster than buildbot with a
>> slave which is always active and ready to start a job in 1 second and
>> which already has 99% of the virtualenv dependencies installed.
>
> There's a lot that we can do to make Travis almost that fast, with pre-built 
> Docker images and cached dependencies.  We haven't done much in the way of 
> aggressive optimization yet.  As recently discussed we're still doing twice 
> as many builds as we need to just because we've misconfigured branch / push 
> builds :).
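
(For concreteness, a minimal sketch of the kind of .travis.yml settings that
"cached dependencies" and "branch / push builds" refer to -- the branch names
and cache paths below are illustrative, not our actual configuration:)

    # Cache pip downloads between builds so dependency installation is fast.
    cache:
      directories:
        - $HOME/.cache/pip

    # Only run push builds for trunk and release branches; pull requests are
    # still built, so a PR no longer triggers a duplicate push build as well.
    branches:
      only:
        - trunk
        - /^release-.*$/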

Hm... pre-built Docker images also take effort to keep updated... and
then we will have a KVM VM starting inside a Docker container in which we
run the tests...

...and we would not be able to test the inotify part.

... and if something goes wrong and we need to do debugging on the host,
I am not sure how much fun that will be.

>> AFAIK the main concern with buildbot is that the slaves are always
>> running, so a malicious person could create a PR with some malware and
>> then all our slaves would execute that malware.
>
> Not only that, but the security between the buildmaster and the builders 
> themselves is weak.  Now that we have the buildmaster on a dedicated machine, 
> this is less of a concern, but it still has access to a few secrets (an SSL 
> private key, github oauth tokens) which we would rather not leak if we can 
> avoid it.

If we have all slaves in RAX and Azure, I hope that communication
between the slaves and the buildmaster is secure.

The GitHub token is only for publishing the commit status ... and I hope
that we can make that token public :)

I have little experience with running public infrastructure for open
source projects... but are there that many malicious people who would
want to exploit a GitHub commit-status-only token?
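
(For reference, a minimal sketch of the only thing such a token is used for --
publishing a commit status via GitHub's commit status API. The repository,
SHA, token value and builder context below are placeholders:)

    import json
    import urllib.request

    GITHUB_TOKEN = "xxxx"    # hypothetical token, repo:status scope only
    REPO = "twisted/twisted"
    SHA = "0" * 40           # placeholder commit SHA

    payload = {
        "state": "success",              # or "pending" / "failure" / "error"
        "context": "buildbot/twisted",   # which builder this status is from
        "description": "All builders green",
        "target_url": "https://buildbot.twistedmatrix.com/",
    }

    request = urllib.request.Request(
        "https://api.github.com/repos/{}/statuses/{}".format(REPO, SHA),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": "token " + GITHUB_TOKEN,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)   # GitHub returns the created status

A token restricted to the repo:status scope can set or overwrite commit
statuses, but it cannot push code or change repository settings.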

>> One way to mitigate this is to use latent buildslaves and stop and
>> reset a slave after each build, but this will also slow the build and
>> lose the virtualenv ... which for a Docker-based slave should not be a
>> problem... but if we want Windows latent slaves it might increase the
>> build time.
>
> It seems like fully latent slaves would be slower than Travis by a lot, since 
> Travis is effectively doing the same thing, but they have a massive economy 
> of scale with pre-warmed pre-booted VMs th

Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Glyph Lefkowitz

> On Aug 15, 2016, at 3:55 AM, Jean-Paul Calderone  
> wrote:
> 
> How might we guess this would compare to $6000/year of development effort
> spent on speeding up the test suite?

An excellent point :).

Although both are somewhat moot if we don't do some fundraising...

-glyph
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Glyph Lefkowitz

> On Aug 15, 2016, at 6:06 AM, Adi Roiban  wrote:
> 
> On 15 August 2016 at 00:10, Glyph Lefkowitz  wrote:
>> 
>>> On Aug 14, 2016, at 3:38 AM, Adi Roiban  wrote:
>>> 
>>> Hi,
>>> 
>>> We now have 5 concurrent jobs on Travis-CI for the whole Twisted 
>>> organization.
>>> 
>>> If we want to reduce the waste of running push tests for a PR we
>>> should check that the other repos from the Twisted organization are
>>> doing the same.
>>> 
>>> We now have 9 jobs per build in twisted/twisted ... and for each
>>> push to a PR ... we run the tests for the push and for the PR merge...
>>> so that is 18 jobs per commit.
>>> 
>>> twisted/mantissa has 7 jobs per build, twisted/epsilon 3 jobs per
>>> build, twisted/nevow 14 jobs, twisted/axiom 6 jobs, twisted/txmongo 16
>>> jobs
>>> 
>>>  so we are a bit over the limit of 5 jobs
>> 
>> Well, we're not "over the limit".  It's just 5 concurrent.  Most of the 
>> projects that I work on have more than 5 entries in their build matrix.
>> 
>>> I have asked Travis-CI how we can improve the waiting time for
>>> twisted/twisted jobs and for $6000 per year they can give us 15
>>> concurrent jobs for the Twisted organization.
>>> 
>>> This will not give us access to a faster waiting line for the OSX jobs.
>>> 
>>> Also, I don't think that we can have twisted/twisted take priority
>>> inside the organization.
>>> 
>>> If you think that we can raise $6000 per year for sponsoring our
>>> Travis-CI and that is worth increasing the queue size I can follow up
>>> with Travis-CI.
>> 
>> I think that this is definitely worth doing.
> 
> Do we have the budget for this, or do we need to do a fundraising drive?
> Can The Software Freedom Conservancy handle the payment for Travis-CI?

We have almost no budget, so we would definitely need to raise money.  OTOH 
sometime soon we're going to run out of money just to keep the lights on on our 
tummy.com server, so we probably need to do that anyway :).

> Even if we speed up the build time, with 5 concurrent jobs we would still
> have only about 0.5 complete builds in flight ... or even less.
> So I think that increasing to 15 jobs would be needed anyway.

Faster is better, of course, but I don't see buildbot completing all its builds 
that much faster than Travis right now, so I'm not sure why you think this is 
so critical?

>>> I have also asked Circle CI for a free ride on their OSX builders, but
>>> it was put on hold as Glyph told me that Circle CI is slower than
>>> Travis.
>>> 
>>> I have never used Circle CI. If you have a good experience with OSX on
>>> Circle CI I can continue the phone interview with Circle CI so that we
>>> get the free access and see how it goes.
>> 
>> The reason I'm opposed to Circle is simply that their idiom for creating a 
>> build matrix is less parallelism-friendly than Travis.  Travis is also more 
>> popular, so more contributors will be interested in participating.
>> 
> 
> OK. No problem. Thanks for the feedback.
> I also would like to have fewer providers, as we already have
> buildbot/travis/appveyor :)
> 
> My push is for a single provider -> buildbot ... but I am aware that
> it might not be feasible.

Yeah I just don't think buildbot is hardened enough for this sort of thing 
(although I would be happy to be proven wrong).

>>> There are multiple ways in which we can improve the time a test takes
>>> to run on Travis-CI, but it will never be faster than buildbot with a
>>> slave which is always active and ready to start a job in 1 second and
>>> which already has 99% of the virtualenv dependencies installed.
>> 
>> There's a lot that we can do to make Travis almost that fast, with pre-built 
>> Docker images and cached dependencies.  We haven't done much in the way of 
>> aggressive optimization yet.  As recently discussed we're still doing twice 
>> as many builds as we need to just because we've misconfigured branch / push 
>> builds :).
> 
> Hm... pre-built Docker images also take effort to keep updated... and
> then we will have a KVM VM starting inside a Docker container in which we
> run the tests...
> 
> ...and we would not be able to test the inotify part.

Not true:

1. We can have one non-containerized builder (sudo: true) for testing
   inotify; no need for KVM-in-Docker (also you can't do that without a
   privileged container, so, good thing)
2. Docker has multiple storage backends, and only one (overlayfs) doesn't
   work with inotify
3. It's work to keep buildbot updated too :)

> ... and if something goes wrong and we need to do debugging on the host,
> I am not sure how much fun that will be.
> 
>>> AFAIK the main concern with buildbot is that the slaves are always
>>> running, so a malicious person could create a PR with some malware and
>>> then all our slaves would execute that malware.
>> 
>> Not only that, but the security between the buildmaster and the builders 
>> themselves is weak.  Now that we have the buildmaster on a dedicated 
>> machine, this is less of a concern, but it still has access to a few secrets
>> (an SSL private key, github oauth tokens) which we would rather not leak if
>> we can avoid it.

Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Jean-Paul Calderone
On Mon, Aug 15, 2016 at 4:38 PM, Glyph Lefkowitz 
wrote:

There's a lot that we can do to make Travis almost that fast, with
pre-built Docker images and cached dependencies.  We haven't done much in
the way of aggressive optimization yet.  As recently discussed we're still
doing twice as many builds as we need to just because we've misconfigured
branch / push builds :).


Hm... pre-built Docker images also take effort to keep updated... and
then we will have a KVM VM starting inside a Docker container in which we
run the tests...

...and we would not be able to test the inotify part.


Not true:
>
>
>1. We can have one non-containerized builder (sudo: true) for testing
>inotify; no need for KVM-in-Docker (also you can't do that without a
>privileged container, so, good thing)
>
> I'm curious about the details of how such a configuration would work.
Since there is only one travis configuration per repository (well, per
branch, technically, don't think that makes a difference here) and the sudo
configuration is global (isn't it?), I always thought either a project had
to pick sudo or not sudo and you couldn't have a mix of builds with each.

(Wonky quoting thanks to gmail's wonky web interface, sorry)

Jean-Paul
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] Waiting time for tests running on Travis CI and Buildbot

2016-08-15 Thread Glyph Lefkowitz

> On Aug 15, 2016, at 2:11 PM, Jean-Paul Calderone  
> wrote:
> 
> On Mon, Aug 15, 2016 at 4:38 PM, Glyph Lefkowitz wrote:
> 
>>> There's a lot that we can do to make Travis almost that fast, with 
>>> pre-built Docker images and cached dependencies.  We haven't done much in 
>>> the way of aggressive optimization yet.  As recently discussed we're still 
>>> doing twice as many builds as we need to just because we've misconfigured 
>>> branch / push builds :).
>> 
>> Hm... pre-built Docker images also take effort to keep updated... and
>> then we will have a KVM VM starting inside a Docker container in which we
>> run the tests...
>> 
>> ...and we would not be able to test the inotify part.
> 
> Not true:
> 
> We can have one non-containerized builder (sudo: true) for testing inotify; 
> no need for KVM-in-Docker (also you can't do that without a privileged 
> container, so, good thing)
> I'm curious about the details of how such a configuration would work.  Since 
> there is only one travis configuration per repository (well, per branch, 
> technically, don't think that makes a difference here) and the sudo 
> configuration is global (isn't it?), I always thought either a project had to 
> pick sudo or not sudo and you couldn't have a mix of builds with each.

I had just assumed that it would be per-matrix-entry, and while it looks like 
I'm correct, it's much less obvious than I had thought.  If you look at the 
second example code block under this heading:

https://docs.travis-ci.com/user/multi-os/#Example-Multi-OS-Build-Matrix

you'll see that one of the matrix/include entries has a 'sudo: required' tag, 
which means "non-container-based, please".  Presumably you can mix those.

The docs on "migrating to the container-based infrastructure" do make it sound 
like this is impossible though, and this isn't nearly as clear as I'd like, so 
it would be nice to actually experiment and see what happens...
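
(To make the experiment concrete, a minimal sketch of the kind of mixed matrix
being described, assuming Travis honours a per-entry sudo key the way that doc
example suggests -- the Python versions and tox environments are illustrative,
not our real configuration:)

    language: python
    sudo: false              # default: container-based workers, fast boot
    matrix:
      include:
        - python: "2.7"
          env: TOX_ENV=py27-tests
        - python: "3.5"
          env: TOX_ENV=py35-tests
        - python: "2.7"
          env: TOX_ENV=py27-tests-inotify
          sudo: required     # this one entry asks for a full (non-container) VM
    install:
      - pip install tox
    script:
      - tox -e $TOX_ENV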

> (Wonky quoting thanks to gmail's wonky web interface, sorry)

It wasn't too bad - certainly well worth suffering through for your 
contribution :).

-glyph
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


[Twisted-Python] Twisted 16.3.1 Release Announcement

2016-08-15 Thread Amber "Hawkie" Brown
On behalf of Twisted Matrix Laboratories, I am honoured to announce the release 
of Twisted 16.3.1.

This is a bug fix & security fix release, and is recommended for all users of 
Twisted. The fixes are:

- A bugfix for an HTTP/2 edge case
- Fix for CVE-2008-7317 (generating potentially guessable HTTP session
identifiers)
- Fix for CVE-2008-7318 (sending secure session cookies over insecure
connections)
- Fix for CVE-2016-1000111 (http://httpoxy.org/)

For more information, check the NEWS file (link provided below).

You can find the downloads at > (or alternatively 
>). The NEWS file is also 
available at >.

Many thanks to everyone who had a part in this release - the supporters of the 
Twisted Software Foundation, the developers who contributed code as well as 
documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)


___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python