On Sep 7, 2012, at 11:15 AM, Marti Raudsepp wrote:
> There's a pg_bulkload extension which does much faster incremental
> index updates for large bulk data imports, so you get best of both
> worlds: http://pgbulkload.projects.postgresql.org/
Thanks, I'll have to check that out. This is going t
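For anyone following the thread: pg_bulkload is driven by a small control file rather than plain SQL. A rough sketch of what one looks like — the table name and path are hypothetical, and the exact option spellings are from memory, so check them against the pg_bulkload documentation:

```
# sample.ctl -- control file for pg_bulkload
OUTPUT = results              # target table (hypothetical name)
INPUT = /path/to/import.csv   # data file to load
TYPE = CSV                    # input format
DELIMITER = ","               # field separator
WRITER = DIRECT               # write directly, bypassing shared buffers
```

It would then be run with something along the lines of `pg_bulkload sample.ctl -d mydb`.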
On Thu, Sep 6, 2012 at 5:12 PM, Alan Hodgson wrote:
> On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
>> For updating 20 million out of 500 million rows, wouldn't a full table
>> scan generally be preferable to an index scan anyway?
>>
>
> Not one table scan for each row updated ...
On Fri, Sep 7, 2012 at 12:22 AM, Aram Fingal wrote:
> Should I write a script which drops all the indexes, copies the data and then
> recreates the indexes or is there a better way to do this?
There's a pg_bulkload extension which does much faster incremental
index updates for large bulk data imports, so you get best of both
worlds: http://pgbulkload.projects.postgresql.org/
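For the manual drop/copy/recreate approach Aram asks about, a minimal sketch — table, column, and index names here are hypothetical, and any indexes backing unique or FK constraints would need the constraints dropped instead:

```sql
-- Drop indexes first so the bulk load doesn't maintain them row by row.
DROP INDEX IF EXISTS results_sample_idx;
DROP INDEX IF EXISTS results_gene_idx;

-- Bulk-load the new rows; COPY is far faster than individual INSERTs.
COPY results FROM '/path/to/import.txt';

-- Rebuild each index in a single pass over the table afterwards.
CREATE INDEX results_sample_idx ON results (sample_name);
CREATE INDEX results_gene_idx ON results (gene);
```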
On Thu, Sep 6, 2012 at 4:22 PM, Aram Fingal wrote:
> I have a table which currently has about 500 million rows. For the most
> part, the situation is going to be that I will import a few hundred million
> more rows from text files once every few months but otherwise there won't be
> any insert, update or delete queries.
Jeff Janes writes:
>> That sounds like you lack an index on the referencing column of the
>> foreign key constraint. Postgres doesn't require you to keep such
>> an index, but it's a really good idea if you ever update the referenced
>> column.
> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?
On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?
>
Not one table scan for each row updated ...
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
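To spell out Alan's point: the cost isn't one scan per updated row in the big table, it's that the ON UPDATE CASCADE trigger runs a statement like the one below for each changed row in the parent table, and without an index on the referencing column each such statement is a sequential scan. A sketch, with hypothetical names:

```sql
-- Roughly what the RI trigger executes for each updated samples row:
UPDATE results
   SET sample_name = 'new-name'
 WHERE sample_name = 'old-name';
-- With no index on results.sample_name, this scans all ~500 million rows.
```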
On Sep 6, 2012, at 5:54 PM, Tom Lane wrote:
>
>> There are also rare cases where I might want to make a correction. For
>> example, one of the columns is sample name which is a foreign key to a
>> samples table defined with " ON UPDATE CASCADE." I decided to change a
>> sample name in the samples table which should affect about 20 million rows.
>
> That sounds like you lack an index on the referencing column of the
> foreign key constraint. Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.

Thanks. You're right. That
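Tom's suggested fix can be sketched in one statement — the table and column names below are hypothetical stand-ins for the thread's big table and its sample-name column:

```sql
-- Postgres requires an index only on the referenced side (samples.name).
-- Add one on the referencing side so cascaded updates can use it:
CREATE INDEX results_sample_name_idx ON results (sample_name);
```

On 9.x this could also be `CREATE INDEX CONCURRENTLY` to avoid blocking writes while the 500-million-row table is indexed.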
Aram Fingal writes:
> I have a table which currently has about 500 million rows. For the most
> part, the situation is going to be that I will import a few hundred million
> more rows from text files once every few months but otherwise there won't be
> any insert, update or delete queries. I have created five indexes, so
I have a table which currently has about 500 million rows. For the most part,
the situation is going to be that I will import a few hundred million more rows
from text files once every few months but otherwise there won't be any insert,
update or delete queries. I have created five indexes, so