On Sun, 2009-07-05 at 17:28 -0700, Jeff Davis wrote:
> This is a follow up to my old proposal here:
>
> http://archives.postgresql.org/pgsql-hackers/2008-06/msg00404.php
>
> Any input is appreciated (design problems, implementation, language
> ideas, or anything else). I'd like to get it into shape for the July
> 15 commitfest if no major problems are found.

I was concerned that your definition of "concurrently inserted" didn't
seem to match the size of the shared memory array required. How will
you cope with a large COPY? Surely there can be more than one
concurrent insert in flight from any one backend?

It would be useful to see a real example of what this can be used for.
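To show the sort of scenario I have in mind (the table, column and file
names below are all invented for illustration, and I am deliberately not
guessing at your syntax, so the constraint itself is only described in a
comment):

  -- purely illustrative: names and types are made up, and the desired
  -- constraint is described in a comment rather than in any real or
  -- proposed syntax
  CREATE TABLE reservation (
      room   integer,
      booked circle   -- stand-in for an interval-like type that has
                      -- an "overlaps" operator (&&)
  );

  -- the property to enforce, which a btree UNIQUE index cannot express:
  --   no two rows r1, r2 may satisfy
  --       r1.room = r2.room AND r1.booked && r2.booked
  -- i.e. the same room cannot be double-booked

  -- the concurrency question in concrete form: while this runs, one
  -- backend has many uncommitted rows that other sessions would have to
  -- treat as "concurrently inserted" when checking such a constraint
  COPY reservation FROM '/path/to/bookings.csv' CSV;

That last case is where I could not see how a fixed-size shared memory
array would cope.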
I think it will be useful to separate the concept of a constraint from
the concept of an index. It seems possible to have a UNIQUE constraint
that doesn't help at all in locating rows, just in proving that the
rows are unique.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers