I have a table with more than 4 million records, and when I do a select
query it gives me an "out of memory" error.
Does Postgres have a feature like table partitioning to handle tables
with very large record counts?
Just wondering what you guys do to deal with very large tables?
Thanks!
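(For reference: PostgreSQL does have native table partitioning in recent
versions. A minimal sketch of declarative range partitioning, assuming
PostgreSQL 10 or later and a hypothetical choice of groupnum as the
partition key:)

CREATE TABLE prdt (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150)
) PARTITION BY RANGE (groupnum);

-- One child table per range of groupnum; queries that filter on
-- groupnum only scan the partitions they need.
CREATE TABLE prdt_p0 PARTITION OF prdt FOR VALUES FROM (0) TO (1000000);
CREATE TABLE prdt_p1 PARTITION OF prdt FOR VALUES FROM (1000000) TO (2000000);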
-
I actually need to join 2 tables. Both of them are similar, and each has
more than 4 million records.
CREATE TABLE prdt_old (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150)
);
CREATE TABLE prdt_new (
    groupnum int4 NOT NULL,
    sku varchar(30) NOT NULL,
    url varchar(150) NOT NULL
);
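(The query itself isn't shown; judging from the plan below, it is
presumably a left join on groupnum and sku, something like this
hypothetical reconstruction:)

SELECT o.groupnum, o.sku, n.url
FROM prdt_old o
LEFT JOIN prdt_new n
    ON o.groupnum = n.groupnum
    AND o.sku = n.sku;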
> Hold on, let's diagnose the real problem before we look for solutions.
> What does EXPLAIN tell you? Have you run ANALYZE on the database?
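(Concretely, that diagnostic step might look like the following; a
sketch using the table names above, with the actual query substituted
in:)

-- Refresh the planner's statistics for both tables, then ask for the plan:
ANALYZE prdt_old;
ANALYZE prdt_new;
EXPLAIN SELECT ...;  -- the problem query goes here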
This is the query plan:

Hash Left Join  (cost=25.00..412868.31 rows=4979686 width=17)
  Hash Cond: (("outer".groupnum = "inner".groupnum) AND
  (("outer".sku)::tex