> > create index prdt_new_url_dx on prdt_new (url)
> > create index prdt_new_sku_dx on prdt_new (sku)
> > create index prdt_old_sku_dx on prdt_old (sku)
> > create index prdt_new_url_null_dx on prdt_new (url) where prdt_new.url IS NULL
I added the indexes & redid the analyze - the query plan looks better
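A quick way to confirm the planner actually picks up the new partial index is EXPLAIN ANALYZE; this is only a sketch, since the exact query being run is not shown in the thread:

EXPLAIN ANALYZE SELECT sku FROM prdt_new WHERE url IS NULL;
-- The plan should show an index scan on prdt_new_url_null_dx
-- instead of a sequential scan over all 4 million rows.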
On Mon, 2005-03-28 at 16:02, Scott Marlowe wrote:
> On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > > Also, this is important, have you analyzed the table? I'm guessing no,
> > > since the estimates are 1,000 rows, but the hash join is getting a little
> > > bit more than that. :)
> > >
> > > Analyze your database and then run the query again.
On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > Also, this is important, have you analyzed the table? I'm guessing no,
> > since the estimates are 1,000 rows, but the hash join is getting a little
> > bit more than that. :)
> >
> > Analyze your database and then run the query again.
>
> I analyzed the table and it decreased the number of rows in the nested loop
> Also, this is important, have you analyzed the table? I'm guessing no,
> since the estimates are 1,000 rows, but the hash join is getting a little
> bit more than that. :)
>
> Analyze your database and then run the query again.
I analyzed the table and it decreased the number of rows in the nested loop
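For reference, the step being discussed is simply this (table names taken from the schema quoted further down):

ANALYZE prdt_old;
ANALYZE prdt_new;
-- ANALYZE refreshes the planner's statistics so that row
-- estimates reflect the real table sizes.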
Looks like you need to create some indexes, probably on (groupnum) and
possibly on (groupnum,sku) on both tables.
Hope this helps,
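Spelled out, that suggestion would be something like the following (the index names here are illustrative, not from the thread):

CREATE INDEX prdt_old_groupnum_dx ON prdt_old (groupnum);
CREATE INDEX prdt_new_groupnum_dx ON prdt_new (groupnum);
CREATE INDEX prdt_old_group_sku_dx ON prdt_old (groupnum, sku);
CREATE INDEX prdt_new_group_sku_dx ON prdt_new (groupnum, sku);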
On Mon, Mar 28, 2005 at 01:50:06PM -0600, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell you? Have you analyzed the database?
On Mon, 2005-03-28 at 13:50, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell you? Have you analyzed the database?
>
>
> This is the QUERY PLAN
> Hash Left Join (cost=25.00..412868.31 rows=4979686 width=17)
> Hash Cond: (("outer".groupnum = "inner".groupnum) AND
> (("outer".sku)::text = ("inner".sku)::text))
> Hold on, let's diagnose the real problem before we look for solutions.
> What does explain tell you? Have you analyzed the database?
This is the QUERY PLAN
Hash Left Join (cost=25.00..412868.31 rows=4979686 width=17)
Hash Cond: (("outer".groupnum = "inner".groupnum) AND
(("outer".sku)::text = ("inner".sku)::text))
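Read back from that hash condition, the statement being explained was presumably a left join along these lines; this is a reconstruction from the plan, not the original query:

SELECT n.groupnum, n.sku, o.url
FROM prdt_new n
LEFT JOIN prdt_old o
  ON o.groupnum = n.groupnum
 AND o.sku = n.sku;
-- The ::text casts in the plan come from comparing the two
-- varchar(30) sku columns.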
On Mon, 2005-03-28 at 13:02, Yudie Gunawan wrote:
> I actually need to join 2 tables. Both of them are similar and have
> more than 4 million records.
>
> CREATE TABLE prdt_old (
> groupnum int4 NOT NULL,
> sku varchar(30) NOT NULL,
> url varchar(150)
> );
>
> CREATE TABLE prdt_new(
> groupnum int4 NOT NULL,
I actually need to join 2 tables. Both of them are similar and have
more than 4 million records.
CREATE TABLE prdt_old (
groupnum int4 NOT NULL,
sku varchar(30) NOT NULL,
url varchar(150)
);
CREATE TABLE prdt_new(
groupnum int4 NOT NULL,
sku varchar(30) NOT NULL,
url varchar(150) NOT NULL
);
On Mon, 2005-03-28 at 11:32 -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
> Does Postgres have a feature like table partitioning to handle tables
> with very large numbers of records?
> Just wondering what you guys do to deal with very large tables?
On Mon, Mar 28, 2005 at 11:32:04AM -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
What's the query and how are you issuing it? Where are you seeing
the error? This could be a client problem: the client may be trying to
buffer the entire result set in memory.
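If client-side buffering turns out to be the issue, the usual workaround is to read the result through a cursor in batches; a minimal sketch, with an arbitrary cursor name and batch size:

BEGIN;
DECLARE prdt_cur CURSOR FOR
    SELECT groupnum, sku, url FROM prdt_new;
FETCH FORWARD 1000 FROM prdt_cur;  -- repeat until no rows come back
CLOSE prdt_cur;
COMMIT;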
On Mon, 2005-03-28 at 11:32, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
> Does Postgres have a feature like table partitioning to handle tables
> with very large numbers of records?
> Just wondering what you guys do to deal with very large tables?
I have a table with more than 4 million records, and when I do a select
query it gives me an "out of memory" error.
Does Postgres have a feature like table partitioning to handle tables
with very large numbers of records?
Just wondering, what do you guys do to deal with very large tables?
Thanks!
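As an aside: at the time of this thread PostgreSQL had no declarative partitioning; it was commonly approximated with table inheritance, roughly like this (the split on groupnum and the boundary value are made up for illustration):

CREATE TABLE prdt_new_p1 (CHECK (groupnum < 100000)) INHERITS (prdt_new);
CREATE TABLE prdt_new_p2 (CHECK (groupnum >= 100000)) INHERITS (prdt_new);
-- Rows are inserted into the child tables; a SELECT on prdt_new
-- then scans the parent plus all children.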