Tom Lane <t...@sss.pgh.pa.us> writes:

> "Daniel Verite" <dan...@manitou-mail.org> writes:
>> These 2 tests need to allocate big chunks of contiguous memory, so they
>> might fail for lack of memory on tiny machines, and even when not failing,
>> they're pretty slow to run. Are they worth the trouble?
>
> Yeah, I'd noticed those on previous readings of the patch.  They'd almost
> certainly fail on some of our older/smaller buildfarm members, so they're
> not getting committed, even if they didn't require multiple seconds apiece
> to run (even on a machine with plenty of memory).  It's useful to have
> them for initial testing though.

Perl's test suite has a similar issue with tests for handling of huge
strings, hashes, arrays, regexes etc.  We've taken the approach of
checking the environment variable PERL_TEST_MEMORY and skipping tests
that need more than that many gigabytes.  We currently have tests that
check for values from 1 all the way up to 96 GiB.
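
Roughly, the guard at the top of such a test looks like this (a sketch;
the exact wording varies between test files):

    BEGIN {
        unless ($ENV{PERL_TEST_MEMORY} && $ENV{PERL_TEST_MEMORY} >= 2) {
            print "1..0 # Skip: set PERL_TEST_MEMORY to at least 2 (GiB) to run\n";
            exit 0;
        }
    }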

This would be trivial to do in the Postgres TAP tests, but something
similar might be feasible in pg_regress too?
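
In a TAP test it could be as simple as the following sketch, where
PG_TEST_MEMORY is a hypothetical variable named by analogy with
PERL_TEST_MEMORY:

    use strict;
    use warnings;
    use Test::More;

    # Hypothetical opt-in knob; the value is the number of GiB
    # of memory the test is allowed to assume.
    if (!$ENV{PG_TEST_MEMORY} || $ENV{PG_TEST_MEMORY} < 2)
    {
        plan skip_all => 'test needs PG_TEST_MEMORY set to at least 2 (GiB)';
    }

    # ... memory-hungry tests go here ...

    done_testing();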

> It'd be great if there was a way to test get_bit/set_bit on large
> indexes without materializing a couple of multi-hundred-MB objects.
> Can't think of one offhand though.

For this use case it might make sense to express the limit in megabytes,
and have a policy for how much memory tests can assume without explicit
opt-in from the developer or buildfarm animal.
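
For illustration, a megabyte-granularity variant with a default budget
(the numbers here are made up) could look like:

    # Tests may assume a small default budget without explicit opt-in;
    # PG_TEST_MEMORY_MB (hypothetical) raises it.
    my $avail_mb = $ENV{PG_TEST_MEMORY_MB} // 256;
    plan skip_all => 'test needs 1024MB; set PG_TEST_MEMORY_MB to run'
        if $avail_mb < 1024;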

- ilmari
-- 
"The surreality of the universe tends towards a maximum" -- Skud's Law
"Never formulate a law or axiom that you're not prepared to live with
 the consequences of."                              -- Skud's Meta-Law

