OK, I can reproduce the issue with that. The index is only 4 MB when I
populate it with random data (vs. 15 MB with your data). The command I
used is:

INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1,20000);

My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.

One idea is to sort the datums to be split by their insertion cost. This is already implemented in the intarray and tsearch GiST indexes.

I'm not sure it will help here, but our research on Guttman's picksplit algorithm showed significant improvements.
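
To make the idea concrete, here is a minimal, self-contained sketch in C.
It is not PostgreSQL code: the Interval type, the penalty() function, the
seed choice, and the even halving are all invented for illustration; a
real GiST picksplit would use the opclass's penalty callback and actual
index tuples. The point is just the ordering step: compute each entry's
cost difference between the two candidate pages, sort on it, and assign
entries in that order.

/* toy_picksplit.c: sketch of a penalty-sorted split for 1-D intervals.
 * NOT PostgreSQL code; everything here is invented for illustration.
 * Build with: cc toy_picksplit.c -o toy_picksplit
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct { double lo, hi; } Interval;

/* Penalty: how much the bounding interval grows when 'e' is added. */
static double penalty(Interval bound, Interval e)
{
    double lo = bound.lo < e.lo ? bound.lo : e.lo;
    double hi = bound.hi > e.hi ? bound.hi : e.hi;
    return (hi - lo) - (bound.hi - bound.lo);
}

typedef struct { Interval iv; double cost; } Item;

/* Order entries by the difference of their insertion costs. */
static int cmp_cost(const void *a, const void *b)
{
    double ca = ((const Item *) a)->cost;
    double cb = ((const Item *) b)->cost;
    return (ca > cb) - (ca < cb);
}

int main(void)
{
    Interval entries[] = {
        {0.1, 0.2}, {0.8, 0.9}, {0.15, 0.3}, {0.7, 0.95}, {0.4, 0.5},
    };
    int  n = sizeof entries / sizeof entries[0];
    Item items[sizeof entries / sizeof entries[0]];

    /* Seeds: pretend the first and last entries were chosen as the
     * most distant pair (as Guttman's PickSeeds would). */
    Interval left = entries[0], right = entries[n - 1];

    for (int i = 0; i < n; i++)
    {
        items[i].iv = entries[i];
        /* Negative: cheaper to insert left; positive: cheaper right. */
        items[i].cost = penalty(left, entries[i]) - penalty(right, entries[i]);
    }
    qsort(items, n, sizeof items[0], cmp_cost);

    /* Assign the cheapest-on-the-left half to the left page and the
     * rest to the right page, keeping the split balanced. */
    for (int i = 0; i < n; i++)
        printf("[%.2f, %.2f] -> %s\n", items[i].iv.lo, items[i].iv.hi,
               i < n / 2 ? "left" : "right");
    return 0;
}

Compared with Guttman's original quadratic split, which greedily assigns
one entry at a time, sorting by cost first considers all entries at once
and makes it easy to keep the two pages balanced.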
--
Teodor Sigaev                                   E-mail: teo...@sigaev.ru
                                                   WWW: http://www.sigaev.ru/
