On 07/03/2013 02:50 PM, Josh Berkus wrote:
On 07/03/2013 07:43 AM, Robert Haas wrote:
Let's have a new schedule called minute-check with the objective to run the
common tests in 60 secs.
Note that we're below 60s even with assert and CLOBBER_CACHE_ALWAYS, at
least on my laptop.
I find that
On 07/03/2013 07:43 AM, Robert Haas wrote:
> Let's have a new schedule called minute-check with the objective to run the
>> common tests in 60 secs.
Note that we're below 60s even with assert and CLOBBER_CACHE_ALWAYS, at
least on my laptop.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts
On 3 July 2013 15:43, Robert Haas wrote:
>
> > Let's have a new schedule called minute-check with the objective to run
> the
> > common tests in 60 secs.
> >
> > We can continue to expand the normal schedules from here.
> >
> > Anybody that wants short tests can run that, everyone else can run the
On Wed, Jul 3, 2013 at 2:28 AM, Simon Riggs wrote:
>> It's sad to simply reject meaningful automated tests on the basis of doubt
>> that they're important enough to belong in every human-in-the-loop test
>> run.
>> I share the broader vision for automated testing represented by these
>> patches.
>
On 2 July 2013 18:43, Noah Misch wrote:
> On Tue, Jul 02, 2013 at 10:17:08AM -0400, Robert Haas wrote:
> > So I think the first question we need to answer is: Should we just
> > reject Robins' patches en masse? If we do that, then the rest of this
> > is moot. If we don't do that, then the seco
On Tue, Jul 02, 2013 at 10:17:08AM -0400, Robert Haas wrote:
> So I think the first question we need to answer is: Should we just
> reject Robins' patches en masse? If we do that, then the rest of this
> is moot. If we don't do that, then the second question is whether we
> should try to introduc
What is more, it's entirely possible to invoke pg_regress with multiple
--schedule arguments, so we could, for example, have a makefile target
that would run both the check and some other schedule of longer running
tests.
I missed this fact, because I've not seen any example of multiple sche
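[Editor's note: a minimal sketch of the makefile target described above, assuming the stock parallel_schedule file plus a hypothetical long_schedule file; the variable name pg_regress_check is assumed from Makefile.global and may differ by version.]

```make
# Hypothetical target: one pg_regress run covering both the standard
# schedule and an assumed second schedule of longer-running tests.
check-long: all
	$(pg_regress_check) \
	    --schedule=$(srcdir)/parallel_schedule \
	    --schedule=$(srcdir)/long_schedule \
	    $(EXTRA_TESTS)
```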
On 07/02/2013 10:17 AM, Robert Haas wrote:
Reviewing this thread, I think that the following people are in favor
of adding the tests to the existing schedule: Josh Berkus, Stephen
Frost, Fabien Coelho, Dann Corbit, and Jeff Janes. And I think that
the following people are in favor of a new sche
Reviewing this thread, I think that the following people are in favor
of adding the tests to the existing schedule: Josh Berkus, Stephen
Frost, Fabien Coelho, Dann Corbit, and Jeff Janes. And I think that
the following people are in favor of a new schedule: Andres Freund,
Andrew Dunstan, David Fet
On 2013-07-01 07:14:23 -0700, David Fetter wrote:
> > If we had a different set of tests, that would be a valid argument. But
> > we don't, so it's not. And nobody has offered to write a feature to
> > split our tests either.
> With utmost respect, this just isn't true. There is a "make coverag
On Sat, Jun 29, 2013 at 02:59:35PM -0700, Josh Berkus wrote:
> On 06/29/2013 02:14 PM, Andrew Dunstan wrote:
> > AIUI: They do test feature use and errors that have cropped up in the
> > past that we need to beware of. They don't test every bug we've ever
> > had, nor do they exercise every piece o
On Monday, July 01, 2013 8:37 AM Josh Berkus wrote:
On 06/30/2013 12:33 AM, Amit kapila wrote:
>
> On Sunday, June 30, 2013 11:37 AM Fabien COELHO wrote:
If we had a different set of tests, that would be a valid argument. But
we don't, so it's not. And nobody has offered to write a feat
On 06/30/2013 12:33 AM, Amit kapila wrote:
>
> On Sunday, June 30, 2013 11:37 AM Fabien COELHO wrote:
>>> If we had a different set of tests, that would be a valid argument. But
>>> we don't, so it's not. And nobody has offered to write a feature to
>>> split our tests either.
>
>> I have done
On Sat, Jun 29, 2013 at 3:43 PM, Andrew Dunstan wrote:
>
> On 06/29/2013 05:59 PM, Josh Berkus wrote:
>
> Maybe there is a good case for these last two in a different set of tests.
>>>
>> If we had a different set of tests, that would be a valid argument. But
>> we don't, so it's not. And nobo
On 30 June 2013 02:33, Amit kapila wrote:
>
> On Sunday, June 30, 2013 11:37 AM Fabien COELHO wrote:
> >> If we had a different set of tests, that would be a valid argument. But
> >> we don't, so it's not. And nobody has offered to write a feature to
> >> split our tests either.
>
> >I have don
https://commitfest.postgresql.org/action/patch_view?id=1170
I think it is better to submit for next commit fest which is at below link:
https://commitfest.postgresql.org/action/commitfest_view?id=19
I put it there as the discussion whether to accept or not Robins patches
because of their p
On Sunday, June 30, 2013 11:37 AM Fabien COELHO wrote:
>> If we had a different set of tests, that would be a valid argument. But
>> we don't, so it's not. And nobody has offered to write a feature to
>> split our tests either.
>I have done a POC. See:
> https://commitfest.postgresql.org/actio
If we had a different set of tests, that would be a valid argument. But
we don't, so it's not. And nobody has offered to write a feature to
split our tests either.
I have done a POC. See:
https://commitfest.postgresql.org/action/patch_view?id=1170
What I have not done is to decide how to s
Josh,
* Josh Berkus (j...@agliodbs.com) wrote:
> If we don't have a test for it, then we can break it in the future and
> not know we've broken it until .0 is released. Is that really a
> direction we're happy going in?
To be fair, AIUI anyway, certain companies have much larger regression
suite
On Sat, Jun 29, 2013 at 7:58 PM, Josh Berkus wrote:
>
>>
>> Dividing the tests into different sections is as simple as creating one
>> schedule file per section.
>
> Oh? Huh. I'd thought it would be much more complicated. Well, by all
> means, let's do it then.
I think I should point out, sinc
>
> Dividing the tests into different sections is as simple as creating one
> schedule file per section.
Oh? Huh. I'd thought it would be much more complicated. Well, by all
means, let's do it then.
I'm not personally convinced that the existing regression tests all
belong in the "default" s
On 06/29/2013 05:59 PM, Josh Berkus wrote:
Maybe there is a good case for these last two in a different set of tests.
If we had a different set of tests, that would be a valid argument. But
we don't, so it's not. And nobody has offered to write a feature to
split our tests either.
I have to
-----Original Message-----
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Josh Berkus
Sent: Saturday, June 29, 2013 3:00 PM
To: Andrew Dunstan
Cc: Alvaro Herrera; pgsql-hackers@postgresql.org; Robins Tharakan
Subject: Re: [HACKERS] New regression
On 06/29/2013 02:14 PM, Andrew Dunstan wrote:
> AIUI: They do test feature use and errors that have cropped up in the
> past that we need to beware of. They don't test every bug we've ever
> had, nor do they exercise every piece of code.
If we don't have a test for it, then we can break it in the
On 06/29/2013 03:57 PM, Josh Berkus wrote:
I see two problems with this report:
1. it creates a new installation for each run,
Yes, I'm running "make check"
2. it only uses the serial schedule.
Um, no:
parallel group (19 tests): limit prepare copy2 plancache xml returning
conversion rowtyp
> I see two problems with this report:
> 1. it creates a new installation for each run,
Yes, I'm running "make check"
> 2. it only uses the serial schedule.
Um, no:
parallel group (19 tests): limit prepare copy2 plancache xml returning
conversion rowtypes largeobject temp truncate polymorphis
On Jun 29, 2013, at 12:36 AM, Alvaro Herrera wrote:
> I see two problems with this report:
> 1. it creates a new installation for each run,
But that's the normal way of running the tests anyway, isn't it?
> 2. it only uses the serial schedule.
make check uses the parallel schedule - did Josh in
Josh Berkus wrote:
> Hackers,
>
> Per discussion on these tests, I ran "make check" against 9.4 head,
> applied all of the regression tests other than DISCARD.
>
> Time for 3 "make check" runs without new tests: 65.9s
>
> Time for 3 "make check" runs with new tests: 71.7s
>
> So that's an inc
How did you evaluate that coverage increased "greatly"? I am not
generally against these tests but I'd be surprised if the overall test
coverage improved noticeably by this. Which makes 10% runtime overhead
pretty hefty if the goal is to actually achieve a high coverage.
I was relying on Robin
* Josh Berkus (j...@agliodbs.com) wrote:
> So that's an increase of about 10% in test runtime (or 2 seconds per run
> on my laptop), in order to greatly improve regression test coverage.
> I'd say that splitting the tests is not warranted, and that we should go
> ahead with these tests on their tes
On 2013-06-28 14:46:10 -0700, Josh Berkus wrote:
>
> > How did you evaluate that coverage increased "greatly"? I am not
> > generally against these tests but I'd be surprised if the overall test
> > coverage improved noticeably by this. Which makes 10% runtime overhead
> > pretty hefty if the goal
> How did you evaluate that coverage increased "greatly"? I am not
> generally against these tests but I'd be surprised if the overall test
> coverage improved noticeably by this. Which makes 10% runtime overhead
> pretty hefty if the goal is to actually achieve a high coverage.
I was relying on
On 2013-06-28 14:01:23 -0700, Josh Berkus wrote:
> Per discussion on these tests, I ran "make check" against 9.4 head,
> applied all of the regression tests other than DISCARD.
>
> Time for 3 "make check" runs without new tests: 65.9s
>
> Time for 3 "make check" runs with new tests: 71.7s
>
> So
Hackers,
Per discussion on these tests, I ran "make check" against 9.4 head,
applied all of the regression tests other than DISCARD.
Time for 3 "make check" runs without new tests: 65.9s
Time for 3 "make check" runs with new tests: 71.7s
So that's an increase of about 10% in test runtime (or 2 seconds per run
on my laptop), in order to greatly improve regression test coverage.