On 2025-Mar-28, Ashutosh Bapat wrote:

> However, it's a very painful process to come up with the schedule and
> more painful and error prone to maintain it. It could take many days
> to come up with the right schedule which can become inaccurate the
> moment next SQL file is added OR an existing file is modified to
> add/drop "interesting" objects.

Hmm, I didn't mean that we'd maintain a separate schedule.  I meant that
we'd take the existing schedule, then apply some Perl magic to it that
greps out the tests that we know contribute nothing, and generate a new
schedule file dynamically.  We don't need to maintain a separate
schedule file.

You're right that if an existing uninteresting test is modified to
create interesting objects, we'd lose coverage of those objects.  That
seems a much smaller problem to me.  So it's just a matter of doing some
Perl map/grep to generate a new schedule file using the attached
exclusion file.
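
To illustrate the idea, the map/grep could look roughly like this (a
sketch only; the function name and the inline exclusion list are made up
here — in reality the names would come from the attached file):

```perl
use strict;
use warnings;

# Hypothetical exclusion set; in practice, read one test name per line
# from the attached file, skipping '#' comments.
my %skip = map { $_ => 1 } qw(copy2 temp prepare truncate);

# Rewrite "test:" lines of a pg_regress schedule, dropping excluded
# tests; groups that end up empty are dropped entirely.
sub filter_schedule
{
	my @out;
	for my $line (@_)
	{
		if ($line =~ /^test:\s+(.*)/)
		{
			my @keep = grep { !$skip{$_} } split /\s+/, $1;
			push @out, "test: @keep" if @keep;
		}
		else
		{
			push @out, $line;
		}
	}
	return @out;
}

my @schedule = (
	'test: plancache limit plpgsql copy2',
	'test: temp domain rangefuncs prepare',
);
print "$_\n" for filter_schedule(@schedule);
# prints:
# test: plancache limit plpgsql
# test: domain rangefuncs
```

The real parallel_schedule also has comment and blank lines, which the
else-branch passes through unchanged.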


(For what it's worth, what I did to try to determine which tests to
include, rather than scan each file manually, was to run pg_regress with
"test_setup thetest tablespace", then dump the regression database, and
see if anything is there that's not in the dump when I run with just
"test_setup tablespace".  I didn't carry the experiment to completion
though.)
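
Roughly, for each candidate test, something like this (a sketch only;
assumes a configured build tree with a server already running, and the
output file names are made up):

```shell
cd src/test/regress

# Run the candidate subset and dump the resulting regression database.
./pg_regress --use-existing test_setup thetest tablespace
pg_dump regression > with_thetest.sql

# Re-run without the test under scrutiny and dump again ...
./pg_regress --use-existing test_setup tablespace
pg_dump regression > without_thetest.sql

# ... extra objects in the first dump mean "thetest" is interesting.
diff without_thetest.sql with_thetest.sql
```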


For the future, we could annotate each test as you said, either by
adding a marker on the test file itself, or by adding something next to
its name in the schedule file, so the schedule file could look like:

test: plancache(dump_ignore) limit(stream_ignore) plpgsql copy2
        temp(stream_ignore,dump_ignore) domain rangefuncs(stream_ignore)
        prepare conversion truncate alter_table
        sequence polymorphism rowtypes returning largeobject with xml

... and so on.

-- 
Álvaro Herrera         PostgreSQL Developer  —  https://www.EnterpriseDB.com/
# This file lists tests to skip on pg_upgrade/t/002_pg_upgrade.pl
advisory_lock
amutils
async
bitmapops
char
combocid
comments
copy
copy2
copydml
copyencoding
copyselect
database
dbsize
delete
drop_if_exists
drop_operator
equivclass
errors
explain
expressions
functional_deps
groupingsets
guc
hash_func
horology
incremental_sort
infinite_recurse
join_hash
json_encoding
jsonb_jsonpath
jsonpath
jsonpath_encoding
lock
md5
memoize
merge
misc_sanity
mvcc
oidjoins
opr_sanity
partition_aggregate
partition_join
partition_prune
plancache
portals
portals_p2
predicate
prepare
prepared_xacts
psql
psql_crosstab
psql_pipeline
regex
regproc
returning
sanity_check
select
select_distinct
select_distinct_on
select_having
select_implicit
select_parallel
stats_import
strings
subselect
sysviews
tablesample
temp
text
tid
tidrangescan
tidscan
transactions
truncate
tsrf
tstypes
txid
type_sanity
unicode
union
update
vacuum
vacuum_parallel
window
xmlmap
