On 10/6/2014 3:02 AM, Richard Frith-Macdonald wrote:
I'm wondering if anyone can help with advice on how to manage large lists/sets
of items in a postgresql database.
I have a database which uses multiple lists of items roughly like this:
CREATE TABLE List (
    ID SERIAL,
    Name VARCHAR ....
);
and a table containing individual entries in the lists:
CREATE TABLE ListEntry (
    ListID INT, /* References the List table */
    ItemID INT  /* References an Item table */
);
CREATE UNIQUE INDEX ListEntryIDX ON ListEntry(ListID, ItemID);
Now, there are thousands of lists, many with millions of entries, and items are
added to and removed from lists unpredictably (in response to our customers'
actions, not under our control). Lists are also created by customer actions.
Finding whether a particular item is in a particular list is reasonably fast,
but when we need to do things like find all the items in list A but not list B
things can get very slow (particularly when both lists contain millions of
common items).
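For reference, the set difference can be written either with EXCEPT or as an
anti-join; a sketch against the tables above, with placeholder ListID values:

```sql
-- Items in list A (ListID = 1) but not in list B (ListID = 2):
SELECT ItemID FROM ListEntry WHERE ListID = 1
EXCEPT
SELECT ItemID FROM ListEntry WHERE ListID = 2;

-- Equivalent anti-join form, which the planner can often execute
-- as a hash or merge anti-join:
SELECT a.ItemID
FROM ListEntry a
WHERE a.ListID = 1
  AND NOT EXISTS (SELECT 1 FROM ListEntry b
                  WHERE b.ListID = 2
                    AND b.ItemID = a.ItemID);
```

With the (ListID, ItemID) index above, both forms can be satisfied from the
index alone if the visibility map cooperates.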
I think the server won't use index-only scans because, even when a particular
list has had no recent changes, the ListEntry table as a whole will almost
always have changed (for one of the other lists) since its last vacuum, so few
of its pages are marked all-visible in the visibility map.
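One way to confirm this is to look at the plan and see how often an index-only
scan has to fall back to the heap:

```sql
-- The ListID value is a placeholder; look for "Index Only Scan"
-- and the "Heap Fetches" count in the output.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ItemID FROM ListEntry WHERE ListID = 42;
```

A high "Heap Fetches" number relative to rows returned means the visibility
map is mostly stale and the index-only scan is buying you little.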
Perhaps creating multiple ListEntry tables (one for each list) would allow
better performance; but that would be thousands (possibly tens of thousands) of
tables, and allowing new tables to be created by our clients might conflict
with things like nightly backups.
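If you did try the one-table-per-list route, the usual approach (without
needing client-created top-level tables) is inheritance-based partitioning; a
sketch, where the child table name and ListID are hypothetical and
constraint_exclusion must be enabled for the planner to prune:

```sql
-- Child table holding only list 1234; the CHECK constraint lets the
-- planner skip this table for queries on other lists.
CREATE TABLE ListEntry_1234 (
    CHECK (ListID = 1234)
) INHERITS (ListEntry);

CREATE UNIQUE INDEX ON ListEntry_1234 (ItemID);
```

Each child can then be vacuumed independently, so an unchanged list's pages
stay all-visible regardless of churn in other lists.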
Is there a better way to manage list/set membership for many thousands of sets
and many millions of items?
I seem to recall something about NOT IN() and NULLs, but I don't recall
the details.
are you using:
select * where exists(select ...) and not exists(select ..)
or
select * where id in (select...) and id not in (select ...)
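The NULL issue with the second form is easy to demonstrate: if the inner
select can produce a NULL, NOT IN never returns true.

```sql
SELECT 1 WHERE 3 NOT IN (1, 2);        -- one row, as expected
SELECT 1 WHERE 3 NOT IN (1, 2, NULL);  -- no rows: 3 <> NULL is unknown,
                                       -- so the whole NOT IN is unknown
```

NOT EXISTS doesn't have that problem, which is one reason to prefer it here.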
-Andy
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general