Hi!
I did a quick test, first with sequential values:
create table t01 (id bigint);
create index i01 on t01(id);
insert into t01 select s from generate_series(1,10000000) as s;

and then with random values:
create table t02 (id bigint);
create index i02 on t02(id);
insert into t02 select random()*100 from generate_series(1,10000000) as s;

The page counts for the two tables come out the same:
 relpages |         relname          
----------+--------------------------
    44248 | t01
    44248 | t02

But for the indexes they differ:
 relpages |             relname             
----------+---------------------------------
    27421 | i01
    34745 | i02
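
For reference, page counts like those above can be read from pg_class
with something along these lines (relpages is only refreshed by VACUUM,
ANALYZE and a few DDL commands such as CREATE INDEX, so the tables need
to be vacuumed or analyzed after loading):

-- relpages is the planner's page-count estimate from pg_class
select relpages, relname
from pg_class
where relname in ('t01', 'i01', 't02', 'i02')
order by relname;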

Also, Postgres does about 5 times more disk writes with the random data.
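
If someone wants to reproduce the write difference from within the
database, EXPLAIN with the BUFFERS option reports buffers dirtied and
written per statement. A sketch (t03/i03 are just fresh names for this
test, so the insert runs against an empty table):

create table t03 (id bigint);
create index i03 on t03(id);
-- actually runs the insert and reports shared buffers hit/read/dirtied/written
explain (analyze, buffers)
insert into t03 select random()*100 from generate_series(1,10000000) as s;
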
What is the reason that Postgres needs more index pages to store random
data than sequential data?
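
In case it helps to diagnose: the pgstattuple contrib extension has a
pgstatindex() function that reports leaf page counts and average leaf
density, which should show where the extra pages in i02 go:

-- pgstatindex() comes from the pgstattuple contrib extension
create extension if not exists pgstattuple;
select leaf_pages, avg_leaf_density from pgstatindex('i01');
select leaf_pages, avg_leaf_density from pgstatindex('i02');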


