On Wed, Jan 24, 2018 at 11:02 PM, Tom Lane wrote:
> I may be wasting my breath here, but in one more attempt to convince
> you that "time make check" on your laptop is not the only number that
> anyone should be interested in, ...
Now that is not what I said, or at least not what I intended to say.
Robert Haas writes:
> There is no need to collect years of data in order to tell whether or
> not the time to run the tests has increased by as much on developer
> machines as it has on prairiedog. You showed the time going from 3:36
> to 8:09 between 2014 and the present. That is a 2.26x increase.
On Wed, Jan 24, 2018 at 4:01 PM, Tom Lane wrote:
> The progress-display output of pg_regress would need a complete rethink
> anyhow. First thought is to emit two lines per test, one when we
> launch it and one when it finishes and we check the results:
>
> foreign_data: launched
> ...
> foreign_d
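For illustration only (this is not pg_regress's code; the forked children merely sleep to stand in for real test scripts), a self-contained sketch of that two-line reporting could look like this:

    /* toy demo of "<test>: launched" ... "<test>: ok/FAILED" output */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int
    main(void)
    {
        const char *tests[] = {"foreign_data", "window", "xmlmap"};
        pid_t       pids[3];

        for (int i = 0; i < 3; i++)
        {
            pids[i] = fork();
            if (pids[i] == 0)
            {
                sleep(i + 1);       /* stand-in for running the test via psql */
                _exit(0);
            }
            printf("%s: launched\n", tests[i]);
            fflush(stdout);
        }

        for (int i = 0; i < 3; i++)
        {
            int         status;

            waitpid(pids[i], &status, 0);
            printf("%s: %s\n", tests[i],
                   (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? "ok" : "FAILED");
        }
        return 0;
    }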
On Wed, Jan 24, 2018 at 2:31 PM, Tom Lane wrote:
> I find that to be a completely bogus straw-man argument. The point of
> looking at the prairiedog time series is just to see a data series in
> which the noise level is small enough to discern the signal. If anyone's
> got years worth of data off a more modern machine
Hi,
On 2018-01-24 15:58:16 -0500, Tom Lane wrote:
> Yeah. We already have topo sort code in pg_dump, maybe we could push that
> into someplace like src/common or src/fe_utils? Although pg_dump hasn't
> got any need for edge weights, so maybe sharing code isn't worth it.
I suspect it may be more
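For reference, a toy, self-contained Kahn's-algorithm sort over an invented test-dependency graph (deliberately not the pg_dump sorter, and without edge weights) looks like this:

    #include <stdio.h>

    #define NTESTS 4

    int
    main(void)
    {
        const char *name[NTESTS] = {"int2", "int4", "float4", "numerology"};
        int         dep[NTESTS][NTESTS] = {{0}};   /* dep[i][j]: j depends on i */
        int         indeg[NTESTS] = {0};
        int         queue[NTESTS];
        int         qhead = 0, qtail = 0;

        /* invented edges: numerology must wait for the three numeric tests */
        dep[0][3] = dep[1][3] = dep[2][3] = 1;

        for (int j = 0; j < NTESTS; j++)
            for (int i = 0; i < NTESTS; i++)
                indeg[j] += dep[i][j];

        for (int i = 0; i < NTESTS; i++)
            if (indeg[i] == 0)
                queue[qtail++] = i;     /* these could all be launched at once */

        while (qhead < qtail)
        {
            int         cur = queue[qhead++];

            printf("run %s\n", name[cur]);
            for (int j = 0; j < NTESTS; j++)
                if (dep[cur][j] && --indeg[j] == 0)
                    queue[qtail++] = j;
        }
        if (qtail < NTESTS)
            fprintf(stderr, "dependency cycle detected\n");
        return 0;
    }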
Andres Freund writes:
> On 2018-01-24 15:36:35 -0500, Tom Lane wrote:
>> I'm concerned that we'd end up with a higher number of irreproducible
>> test failures with no good way to investigate them.
> Hm. We probably should dump the used ordering of tests somewhere upon
> failure, to make it easier
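For illustration, dumping that ordering could be as simple as the following sketch (the file name, the helper, and the hard-coded order are all invented here):

    #include <stdio.h>

    /* Sketch: record the test order that was actually used, so a failing
     * randomized or dependency-derived schedule can be replayed later. */
    static void
    dump_test_order(const char *path, const char *const *order, int n)
    {
        FILE       *f = fopen(path, "w");

        if (f == NULL)
            return;             /* best effort; don't hide the real failure */
        for (int i = 0; i < n; i++)
            fprintf(f, "%s\n", order[i]);
        fclose(f);
    }

    int
    main(void)
    {
        const char *used[] = {"int4", "float4", "int2", "numerology"};

        dump_test_order("regression.order", used, 4);   /* e.g. only on failure */
        return 0;
    }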
Andres Freund writes:
> On 2018-01-24 17:18:26 -0300, Alvaro Herrera wrote:
>> Yeah, I proposed this a decade ago but never had the wits to write the
>> code.
> It shouldn't be too hard, right? Leaving defining the file format,
> parsing it, creating the new schedule with dependencies and adapting
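As a purely hypothetical illustration of such a file format (no such syntax exists in pg_regress today), the schedule might gain entries like:

    # invented dependency-annotated schedule syntax, for illustration only
    test: numerology    depends: int2 int4 int8 float4 float8
    test: create_index  depends: create_table

pg_regress could then launch, at any moment, every test whose dependencies have completed, instead of relying on the hand-maintained parallel groups.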
Hi,
On 2018-01-24 15:36:35 -0500, Tom Lane wrote:
> There'd be a lot of followup work to sanitize the tests better. For
> instance, if two tests transiently create tables named "foo", it doesn't
> matter as long as they're not in the same group. It would matter with
> this.
Right. I suspect we'
Hi,
On 2018-01-24 17:18:26 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > Besides larger groups, starting the next test(s) earlier, another way to
> > gain pretty large improvements would be a test schedule feature that
> allowed to state dependencies between tests. So instead of manually
Alvaro Herrera writes:
> Andres Freund wrote:
>> Besides larger groups, starting the next test(s) earlier, another way to
>> gain pretty large improvements would be a test schedule feature that
>> allowed to state dependencies between tests. So instead of manually
>> grouping the schedule, have 'numerology'
Thomas Munro wrote:
> On Wed, Jan 24, 2018 at 12:10 PM, Tom Lane wrote:
> > However, the trend over the last two months is very bad, and I do
> > not think that we can point to any large improvement in test
> > coverage that someone committed since November.
>
> I'm not sure if coverage.postgresql.org
Andres Freund wrote:
> Besides larger groups, starting the next test(s) earlier, another way to
> gain pretty large improvements would be a test schedule feature that
> allowed to state dependencies between tests. So instead of manually
> grouping the schedule, have 'numerology' state that it depends on
Hi,
On 2018-01-24 14:31:47 -0500, Tom Lane wrote:
> However ... if you spend any time looking at the behavior of that,
> the hashjoin tests are still problematic.
I think my main problem with your arguments is that you basically seem
to say that one of the more complex features in postgres can't
I wrote:
> I find that to be a completely bogus straw-man argument. The point of
> looking at the prairiedog time series is just to see a data series in
> which the noise level is small enough to discern the signal. If anyone's
> got years worth of data off a more modern machine, and they can ext
Andres Freund writes:
> On 2018-01-24 13:11:22 -0500, Robert Haas wrote:
>> Now, how much should we care about the performance of software with a
>> planned release date of 2018 on hardware discontinued in 2001,
>> hardware that is apparently about 20 times slower than a modern
>> laptop? Some, p
Hi,
On 2018-01-24 13:11:22 -0500, Robert Haas wrote:
> So for me, the additional hash index tests don't cost anything
> measurable and the additional hash join tests cost about a second. I
> think this probably accounts for why committers other than you keep
> "adding so much time to the regressi
On 2018-01-23 14:24:56 -0500, Robert Haas wrote:
> Right, but this doesn't seem to show any big spike in the runtime at
> the time when parallel hash was committed, or when the preparatory
> patch to add test coverage for hash joins got committed. Rather,
> there's a gradual increase over time. E
On Tue, Jan 23, 2018 at 6:10 PM, Tom Lane wrote:
> Looking more closely at the shorter series, there are four pretty obvious
> step changes since 2016-09. The PNG's x-axis doesn't have enough
> resolution to match these up to commits, but looking at the underlying
> data, they clearly correspond
On Wed, Jan 24, 2018 at 12:10 PM, Tom Lane wrote:
> There is a very clear secular trend up in the longer data series,
> which indicates that we're testing more stuff,
+1
> which doesn't bother
> me in itself as long as the time is well spent. However, the trend
> over the last two months is very bad
Robert Haas writes:
> On Mon, Jan 22, 2018 at 6:53 PM, Tom Lane wrote:
>> Here's a possibly more useful graph of regression test timings over
>> the last year. I pulled this from the buildfarm database: it is the
>> reported runtime for the "installcheck-C" step in each successful
>> build of HEAD
On Mon, Jan 22, 2018 at 6:53 PM, Tom Lane wrote:
> Here's a possibly more useful graph of regression test timings over
> the last year. I pulled this from the buildfarm database: it is the
> reported runtime for the "installcheck-C" step in each successful
> build of HEAD on dromedary, going back
On 2018-01-04 15:16:15 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-01-04 11:20:33 -0800, Andres Freund wrote:
> >> Some packages on skink have been upgraded. It appears that there either
> >> was a libc or valgrind change that made valgrind not recognize that a
> >> pointer of 0 might not point anywhere :(
Andres Freund writes:
> On 2018-01-04 11:20:33 -0800, Andres Freund wrote:
>> Some packages on skink have been upgraded. It appears that there either
>> was a libc or valgrind change that made valgrind not recognize that a
>> pointer of 0 might not point anywhere :(
> ==5718== Invalid write of size
On 2018-01-04 11:20:33 -0800, Andres Freund wrote:
> On 2018-01-04 12:11:37 -0500, Tom Lane wrote:
> > Robert Haas writes:
> > > On Thu, Jan 4, 2018 at 11:00 AM, Tom Lane wrote:
> > >> Also, what the devil is happening on skink?
> >
> > > So, skink is apparently dying during shutdown of a user-connected
On 2018-01-04 12:11:37 -0500, Tom Lane wrote:
> Robert Haas writes:
> > On Thu, Jan 4, 2018 at 11:00 AM, Tom Lane wrote:
> >> Also, what the devil is happening on skink?
>
> > So, skink is apparently dying during shutdown of a user-connected
> > backend, and specifically the one that executed the 'tablespace' test.
Robert Haas writes:
> On Thu, Jan 4, 2018 at 11:00 AM, Tom Lane wrote:
>> Also, what the devil is happening on skink?
> So, skink is apparently dying during shutdown of a user-connected
> backend, and specifically the one that executed the 'tablespace' test.
Well, yeah, valgrind is burping: the
On Thu, Jan 4, 2018 at 11:00 AM, Tom Lane wrote:
> Also, what the devil is happening on skink?
I looked at the server log for the first of the two skink failures.
The key lines seem to be:
2018-01-04 07:45:36.764 UTC [5a4ddb98.5a97:154] LOG: statement: DROP
SCHEMA testschema CASCADE;
2018-01-04