I wrote:
> Also, I tried running the new random.sql regression cases over
> and over, and found that the "not all duplicates" test fails about
> one time in 100000 or so.  We could probably tolerate that given
> that the random test is marked "ignore" in parallel_schedule, but
> I thought it best to add one more iteration so we could knock the
> odds down.
Hmm ... it occurred to me to try the same check on the existing
random() tests (attached), and darn if they don't fail even more
often, usually within 50K iterations.  So maybe we should rethink
that whole thing.

			regards, tom lane
\timing on
create table if not exists random_tbl (random bigint);
do $$
begin
  for i in 1..1000000 loop
    TRUNCATE random_tbl;
    INSERT INTO RANDOM_TBL (random)
      SELECT count(*) AS random
      FROM onek WHERE random() < 1.0/10;
    -- select again, the count should be different
    INSERT INTO RANDOM_TBL (random)
      SELECT count(*)
      FROM onek WHERE random() < 1.0/10;
    -- select again, the count should be different
    INSERT INTO RANDOM_TBL (random)
      SELECT count(*)
      FROM onek WHERE random() < 1.0/10;
    -- select again, the count should be different
    INSERT INTO RANDOM_TBL (random)
      SELECT count(*)
      FROM onek WHERE random() < 1.0/10;
    -- now test that they are different counts
    if (select true FROM RANDOM_TBL
        GROUP BY random HAVING count(random) > 3) then
      raise notice 'duplicates at iteration %', i;
      exit;
    end if;
    -- approximately check expected distribution
    if (select true FROM RANDOM_TBL
        HAVING AVG(random) NOT BETWEEN 80 AND 120) then
      raise notice 'range at iteration %', i;
      exit;
    end if;
  end loop;
end $$;
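For what it's worth, the observed failure rate is about what the math
predicts.  Assuming onek has 1000 rows and each row independently
passes random() < 1.0/10, each inserted count is Binomial(1000, 0.1),
and the duplicates check trips only when all four counts come out
identical, i.e. with probability sum_k p_k^4.  A back-of-envelope
sketch (Python, purely illustrative; not part of the attachment):

```python
# Estimate how often the "duplicates" check should fire, assuming each
# count is an independent Binomial(1000, 0.1) draw (onek has 1000 rows).
from math import comb

n, p = 1000, 0.1
# Binomial pmf for every possible count k = 0..1000.
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
# The test fails only when all four independent counts are equal.
p_all_equal = sum(q**4 for q in pmf)
print(f"P(all four counts equal) ~= {p_all_equal:.2e}")
print(f"i.e. about one failure per {1 / p_all_equal:,.0f} iterations")
```

That works out to a few failures per 100K iterations, consistent with
seeing a failure "usually within 50K iterations" above.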