Hi,

while trying to find a way to store a high volume of rows (>10,000 rows/sec) in a table
that has indexes on columns with random values, I found:

http://en.wikipedia.org/wiki/TokuDB


Basically, instead of using B-trees (which kill insert performance for random values
on large tables), they use a different type of index, which they call a "fractal tree".

If what they claim is true, insert performance in those cases (as I said, indexes on
columns with highly random data) is much faster (up to 80x faster!).

I read some of the papers at:
http://supertech.csail.mit.edu/cacheObliviousBTree.html



I think it's a very interesting approach: instead of relying on the disk's random-access
times, they rely on sequential access...
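For anyone curious, the simplest structure in those papers is the cache-oblivious lookahead array (COLA): a cascade of sorted arrays of doubling size, where inserts trickle down through sequential merges instead of random in-place page writes. Here's a toy Python sketch of that idea -- my own simplification, not TokuDB's actual implementation, and it ignores deletes, duplicates, and the fractional-cascading lookup trick from the paper:

```python
import bisect

class COLA:
    """Toy cache-oblivious lookahead array: level k is either empty
    or a sorted list of 2**k keys. All writes are sequential merges."""

    def __init__(self):
        self.levels = []

    def insert(self, key):
        carry = [key]
        k = 0
        # Carry the new run down until it finds an empty level.
        # Each step merges two sorted runs -- a sequential pass,
        # which is why random-key inserts stay cheap.
        while True:
            if k == len(self.levels):
                self.levels.append([])
            if not self.levels[k]:
                self.levels[k] = carry
                return
            carry = sorted(self.levels[k] + carry)  # sequential merge
            self.levels[k] = []
            k += 1

    def contains(self, key):
        # One binary search per non-empty level: O(log^2 n) probes
        # (the real structure speeds this up with lookahead pointers).
        for level in self.levels:
            if level:
                i = bisect.bisect_left(level, key)
                if i < len(level) and level[i] == key:
                    return True
        return False
```

The point of the sketch: each key is rewritten O(log n) times over its lifetime, but always as part of a long sequential merge, whereas a B-tree does one random leaf write per insert -- that's roughly where the claimed speedup for random keys comes from, as I understand the papers.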

I was wondering:

1) has anyone looked at the papers?
2) I don't understand how they made it concurrent...

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
