Thank you very much for the insights (and for how to correctly read an EXPLAIN :)).
There is a difference in data between the 2 databases: crons were running
on one and not the other.
But your insights enabled us to dive deeper, thanks!
2014-11-05 18:30 GMT+01:00 Tom Lane:
> "Tomas Vondra" writes
"Tomas Vondra" writes:
> On 5 November 2014, 18:10, Tom Lane wrote:
>> For some reason there's over 1000 times more rows in andalertsmatch in
>> the 9.1.14 installation. I'm betting on a foulup somewhere in the data
>> dump/restore process.
> I'd bet that's mostly due to rescans in the merg
On 5 November 2014, 18:10, Tom Lane wrote:
> "Tomas Vondra" writes:
>> On 5 November 2014, 17:31, Rémy-Christophe Schermesser wrote:
>>> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
>>> data, schema, PG configuration, and are almost identical machines, same
>>>
"Tomas Vondra" writes:
> On 5 November 2014, 17:31, Rémy-Christophe Schermesser wrote:
>> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
>> data, schema, PG configuration, and are almost identical machines, same
>> number of cores and memory, but different cloud provi
Hi,
On 5 November 2014, 17:31, Rémy-Christophe Schermesser wrote:
> Hi,
>
> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
> data, schema, PG configuration, and are almost identical machines, same
> number of cores and memory, but different cloud provider. The data was
Hi,
We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
data, schema, PG configuration, and are almost identical machines, same
number of cores and memory, but different cloud provider. The data was
transferred with a pg_dump/pg_restore. We ran VACUUM ANALYSE, ANALYSE, and
REIND
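A quick way to check whether the two installations really hold the same amount of data, as the replies above suspect, is to compare actual and estimated row counts for the suspect table (a minimal sketch; andalertsmatch is the only table named in the thread):
SELECT count(*) FROM andalertsmatch;          -- exact count, can be slow on a big table
SELECT relname, reltuples::bigint AS estimated_rows   -- planner estimate, cheap after ANALYZE
FROM pg_class
WHERE relname = 'andalertsmatch';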
On 28 October 2011 13:37, Alban Hertroys wrote:
> On 28 October 2011 09:02, Mohamed Hashim wrote:
Please don't cross-post to mailing lists for multiple projects.
--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
On 28 October 2011 09:02, Mohamed Hashim wrote:
> EXPLAIN select * from stk_source ;
> QUERY PLAN
> -
> Result (cost=0.00..6575755.39 rows=163132513 width=42)
> -> Appen
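The Append node means every child of stk_source gets scanned, which is unavoidable for a SELECT with no WHERE clause. A hedged sketch of how constraint exclusion narrows this once a predicate on the partitioning column is supplied (the column name below is an illustrative assumption, not from the thread):
SET constraint_exclusion = on;
-- With CHECK constraints on the children, only matching partitions are scanned:
EXPLAIN SELECT * FROM stk_source WHERE source_date >= '2011-10-01';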
I have a quad-core server with 8GB RAM
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5607 @ 2.27GHz
stepping: 2
cpu MHz : 1197.000
cache size : 8192 KB
MemTotal:8148636 kB
MemFree: 4989
> > testinsert contains t values between '2009-08-01' and '2009-08-09', and
> > ne_id
> from 1 to 2. But only 800 out of 2 ne_id have to be read; there's no
> need for a table scan!
> > I guess this is a reflection of the poor "correlation" on ne_id; but, as I
> said, I don't really t
m_li...@yahoo.it wrote:
> testinsert contains t values between '2009-08-01' and '2009-08-09', and ne_id
> from 1 to 2. But only 800 out of 2 ne_id have to be read; there's no
> need for a table scan!
> I guess this is a reflection of the poor "correlation" on ne_id; but, as I
> said, I
Since no one replied to
http://www.mail-archive.com/pgsql-general@postgresql.org/msg133360.html, I
tried another approach:
I can't cluster the whole table every day; it would take too long (as I said,
the table has 60M rows, and I have hundreds of them).
Plus, it wouldn't really make much sense: the
> But that would be a different query -- there's no
> restrictions on the
> t values in this one.
There is a restriction on the t values:
select * from idtable left outer join testinsert on id=ne_id where groupname='a
group name' and time between $a_date and $another_date
> Have you tried som
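One possible direction, offered only as a hedged sketch and not necessarily what the truncated reply suggests: since rows for the same timestamp are physically close, an index led by t lets the time-range restriction drive the scan instead of a loop over ne_id (column names are from the thread's CREATE TABLE):
CREATE INDEX testinsert_t_neid_idx ON testinsert (t, ne_id);
-- The WHERE clause on the time range can then use an index range scan
-- rather than visiting each ne_id separately.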
On Mon, Jul 6, 2009 at 6:32 PM, Scara Maccai wrote:
> The "best" way to read the table would still be a nested loop, but a loop on
> the
> "t" values, not on the ne_id values, since data for the same timestamp is
> "close".
But that would be a different query -- there's no restrictions on the
I have a problem with the method that PG uses to access my data.
Data into testinsert is inserted every 15 minutes.
ne_id varies from 1 to 2.
CREATE TABLE testinsert
(
ne_id integer NOT NULL,
t timestamp without time zone NOT NULL,
v integer[],
CONSTRAINT testinsert_pk PRIMARY KEY
Hi All,
We're migrating to new hardware and to pgsql 8.3.1 from pgsql 8.2.4. We
were scheduled to go live yesterday morning but elected not to, late Friday,
after observing this issue:
Our new hardware includes an external Dell MD3000 RAID array of 15 15k SAS
disks. We have a 2 disk RAID1 array fo
If possible, can you send a data dump of these tables:
usuarios_temp, usuarios_indice?
Thanks,
Gokul.
On Jan 10, 2008 4:00 PM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> 2008/1/8, Clodoaldo <[EMAIL PROTECTED]>:
> > 2008/1/8, Alvaro Herrera <[EMAIL PROTECTED]>:
> > > Clodoaldo wrote:
> > >
> >
2008/1/8, Clodoaldo <[EMAIL PROTECTED]>:
> 2008/1/8, Alvaro Herrera <[EMAIL PROTECTED]>:
> > Clodoaldo wrote:
> >
> > 8.2:
> >
> > > Trigger for constraint datas: time=14231.240 calls=880691
> > > Total runtime: 356862.302 ms
> > > (12 rows)
> > >
> > > Time: 357750.531 ms
> >
> > 8.3:
> >
> >
2008/1/8, Alvaro Herrera <[EMAIL PROTECTED]>:
> Clodoaldo wrote:
>
> 8.2:
>
> > Trigger for constraint datas: time=14231.240 calls=880691
> > Total runtime: 356862.302 ms
> > (12 rows)
> >
> > Time: 357750.531 ms
>
> 8.3:
>
> > Trigger for constraint datas: time=33179.197 calls=877895
> > To
Clodoaldo wrote:
8.2:
> Trigger for constraint datas: time=14231.240 calls=880691
> Total runtime: 356862.302 ms
> (12 rows)
>
> Time: 357750.531 ms
8.3:
> Trigger for constraint datas: time=33179.197 calls=877895
> Total runtime: 9546878.520 ms
> (13 rows)
So what's this constraint an
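That question can be answered straight from the catalogs; a minimal sketch, where 'datas' is simply the constraint name reported in the trigger timing above:
SELECT conname,
       conrelid::regclass AS table_name,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conname = 'datas';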
2008/1/7, Greg Smith <[EMAIL PROTECTED]>:
> On Mon, 7 Jan 2008, Clodoaldo wrote:
>
> > I just did it. Built and installed 8.2.5. Copied the postgresql.conf
> > from the production. Issued an analyze and ran the insert query twice:
> > The second time it ran in 403 sec, half the production time.
>
>
Greg Smith <[EMAIL PROTECTED]> writes:
> The only thing I noticed in your original explain plans that was different
> between the two was:
> 8.2:
> Merge Cond: ((ut.n_time = ui.n_time) AND
> ((ut.usuario)::text = "inner"."?column4?"))
> 8.3:
> Merge Cond: ((ut.n_tim
On Mon, 7 Jan 2008, Clodoaldo wrote:
I just did it. Built and installed 8.2.5. Copied the postgresql.conf
from the production. Issued an analyze and ran the insert query twice:
The second time it ran in 403 sec, half the production time.
OK, you're getting close now. What you should do now is
2008/1/7, Scott Marlowe <[EMAIL PROTECTED]>:
> On Jan 6, 2008 1:46 PM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> > 2008/1/6, Scott Marlowe <[EMAIL PROTECTED]>:
> >
> > > On Jan 6, 2008 5:06 AM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> > > >
> > > > Then I rebuilt and reinstalled postgresql with the xlog
On Jan 6, 2008 1:46 PM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> 2008/1/6, Scott Marlowe <[EMAIL PROTECTED]>:
>
> > On Jan 6, 2008 5:06 AM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> > >
> > > Then I rebuilt and reinstalled postgresql with the xlog_seg_size set
> > > to the default 16MB and did initdb. N
2008/1/6, Scott Marlowe <[EMAIL PROTECTED]>:
> On Jan 6, 2008 5:06 AM, Clodoaldo <[EMAIL PROTECTED]> wrote:
> >
> > Then I rebuilt and reinstalled postgresql with the xlog_seg_size set
> > to the default 16MB and did initdb. Now the time is 7,642 sec.
> >
> > I'm lost. It looks like 1GB xlog_seg_si
On Jan 6, 2008 5:06 AM, Clodoaldo <[EMAIL PROTECTED]> wrote:
>
> Then I rebuilt and reinstalled postgresql with the xlog_seg_size set
> to the default 16MB and did initdb. Now the time is 7,642 sec.
>
> I'm lost. It looks like 1GB xlog_seg_size is indeed faster than 16MB
> but again it is slower th
2008/1/5, Tom Lane <[EMAIL PROTECTED]>:
> Clodoaldo <[EMAIL PROTECTED]> writes:
> >> How did you get 8.3-beta4?
>
> > I built from the source rpm, which i installed in my machine. There is
> > something I forgot to mention. I created a patch to change
> > XLOG_SEG_SIZE and built with it:
> > -#defi
On Jan 5, 2008 1:15 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Sat, 5 Jan 2008, Clodoaldo wrote:
>
> > I created a patch to change
> > XLOG_SEG_SIZE and built with it:
> > -#define XLOG_SEG_SIZE (16*1024*1024)
> > +#define XLOG_SEG_SIZE (1024*1024*1024)
>
> People sometimes get a small improv
On Sat, 5 Jan 2008, Clodoaldo wrote:
I created a patch to change
XLOG_SEG_SIZE and built with it:
-#define XLOG_SEG_SIZE (16*1024*1024)
+#define XLOG_SEG_SIZE (1024*1024*1024)
People sometimes get a small improvement setting this to 2-4 times larger
than the default when running a large ser
Clodoaldo <[EMAIL PROTECTED]> writes:
>> How did you get 8.3-beta4?
> I built from the source rpm, which i installed in my machine. There is
> something I forgot to mention. I created a patch to change
> XLOG_SEG_SIZE and built with it:
> -#define XLOG_SEG_SIZE (16*1024*1024)
> +#define XLOG_SEG_
2008/1/5, Greg Smith <[EMAIL PROTECTED]>:
> On Fri, 4 Jan 2008, Clodoaldo wrote:
>
> > I built a new system and installed 8.3-beta4.
>
> How did you get 8.3-beta4?
I built from the source rpm, which i installed in my machine. There is
something I forgot to mention. I created a patch to change
XLOG
2008/1/4, Tom Lane <[EMAIL PROTECTED]>:
> Clodoaldo <[EMAIL PROTECTED]> writes:
> > The same insert query takes 20 minutes in the production system and 2
> > hours in the new one.
>
> Hmph. It's the same plan, so it's not a planner mistake. Could you
> post EXPLAIN ANALYZE rather than just EXPLAI
On Fri, 4 Jan 2008, Clodoaldo wrote:
I built a new system and installed 8.3-beta4.
How did you get 8.3-beta4? If you used the RPMs available from the
PostgreSQL FTP site, those are optimized for finding bugs at the expense
of speed. If that's where you got them from, try putting this line
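A hedged way to check for that, though not necessarily the exact line Greg was about to suggest: a build configured with assertions enabled, as debugging-oriented beta packages sometimes are, reports it through a read-only setting:
SHOW debug_assertions;   -- 'on' means the binaries carry assertion-checking overhead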
Clodoaldo <[EMAIL PROTECTED]> writes:
> The same insert query takes 20 minutes in the production system and 2
> hours in the new one.
Hmph. It's the same plan, so it's not a planner mistake. Could you
post EXPLAIN ANALYZE rather than just EXPLAIN for both cases?
Also, it'd be worth watching "vms
I built a new system and installed 8.3-beta4. While i'm testing it i
noticed a big performance gap between the production system running
8.2.5 and the new one. The new one, in spite of being much better, is
much slower.
The same insert query takes 20 minutes in the production system and 2
hours in
"Victor Adolfsson" <[EMAIL PROTECTED]> writes:
> I'm having a problem with bad performance when retrieving many rows where
> each row has a 2200 long byte array (called template_compressed) and a 50
> character varying (called uniqueid)
2200 bytes is (just) long enough to trigger toasting of the e
Hi
I'm having a problem with bad performance when retrieving many rows where
each row has a 2200 long byte array (called template_compressed) and a 50
character varying (called uniqueid)
Is there a better datatype than bytea when it is important to fetch the
information from the database?
Would
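One thing that sometimes helps with wide bytea columns, offered as a hedged sketch rather than as what the truncated reply actually recommends: if the stored value is already compressed, telling TOAST to keep it out of line without re-compressing avoids wasted CPU on every fetch (the table name is an assumption):
ALTER TABLE templates
    ALTER COLUMN template_compressed SET STORAGE EXTERNAL;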
Sergey Karin wrote:
> Hi all.
>
> I use PG 8.1.8 and PostGIS 1.1.1
> vka1=# select version();
> I develop application for loading geodata to database. In separate
> transaction the application inserts the data in separate table that
> created dynamically when transaction started. All tables has
:[EMAIL PROTECTED] On Behalf Of Sergey Karin
Sent: Saturday, June 09, 2007 1:48 AM
To: pgsql-general@postgresql.org; PostGIS Users Discussion
Subject: [GENERAL] performance problem with loading data
Hi all.
I use PG 8.1.8 and PostGIS 1.1.1
vka1=# select version
Hi all.
I use PG 8.1.8 and PostGIS 1.1.1
vka1=# select version();
version
-
PostgreSQL 8.1.8 on x86_64-pc-linux-gnu, compiled by GCC
I am new to postgres and I have 4 doubts. 1) I have a performance problem as I am trying to insert around 60 million rows to a table which is partitioned. So first I copied the .csv file which contains data, with COPY command to a temp table which was quick. It took only 15 to 20 minutes. Now I
> 3) I want to use materialized views, I don't understand it from
> http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html, can
> anyone explain it to me
> with a simple example.
The following link helps to describe the idea behind each of the methods of
materialized views.
http://ja
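The linked page describes hand-rolled "snapshot" materialized views; PostgreSQL of that era had no built-in MATERIALIZED VIEW. A minimal sketch of the idea, with purely illustrative table and column names:
-- Build the snapshot once:
CREATE TABLE matview_daily_totals AS
    SELECT trade_date, count(*) AS n_rows
    FROM ticks
    GROUP BY trade_date;
-- Refresh it on demand (e.g. from cron):
CREATE OR REPLACE FUNCTION refresh_matview_daily_totals() RETURNS void AS $$
BEGIN
    TRUNCATE matview_daily_totals;
    INSERT INTO matview_daily_totals
        SELECT trade_date, count(*) FROM ticks GROUP BY trade_date;
END;
$$ LANGUAGE plpgsql;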
On Fri, 13.10.2006, at 1:55:06 -0700, Uwe C. Schroeder wrote the following:
> > Does the table you're inserting into have indexes or foreign keys?
> > Either of those slow down loading considerably. One common workaround
> > is to drop the indexes and constraints, load the data and re-add them.
>
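A minimal sketch of that workaround; the table, file, index, and constraint names here are assumptions for illustration:
-- Before the bulk load:
DROP INDEX big_table_date_idx;
ALTER TABLE big_table DROP CONSTRAINT big_table_instrument_fk;
COPY big_table FROM '/tmp/data.csv' WITH CSV;
-- After the load, rebuild each in a single pass:
CREATE INDEX big_table_date_idx ON big_table (trade_date);
ALTER TABLE big_table ADD CONSTRAINT big_table_instrument_fk
    FOREIGN KEY (instrument_id) REFERENCES instruments (id);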
On Friday 13 October 2006 01:22, Martijn van Oosterhout wrote:
> > 1) I have a performance problem as I am trying to insert around 60
> > million rows to a table which is partitioned. So first I copied the
> > .csv file which contains data, with COPY command to a temp table
> > which was qu
On Thu, Oct 12, 2006 at 10:26:28PM -0700, roopa perumalraja wrote:
> I am new to postgres and I have 4 doubts.
>
> 1) I have a performance problem as I am trying to insert around 60
> million rows to a table which is partitioned. So first I copied the
> .csv file which contains data, with
I am new to postgres and I have 4 doubts. 1) I have a performance problem as I am trying to insert around 60 million rows to a table which is partitioned. So first I copied the .csv file which contains data, with COPY command to a temp table which was quick. It took only 15 to 20 minutes. Now I
On 19/07/2006, at 10:03 PM, Christian Rengstl wrote:
So here's the master table including the rules:
entry_no int8 NOT NULL DEFAULT nextval('public.master_seq'::regclass),
pid varchar(15) NOT NULL,
val_1 varchar(1),
val_2 varchar(1),
chr int2 NOT NULL,
aendat timestamp DEFAULT now(),
So here's the master table including the rules:
entry_no int8 NOT NULL DEFAULT nextval('public.master_seq'::regclass),
pid varchar(15) NOT NULL,
val_1 varchar(1),
val_2 varchar(1),
chr int2 NOT NULL,
aendat timestamp DEFAULT now(),
aennam varchar(8),
CONSTRAINT "PK_ENTRY" PRIMARY KE
On 19/07/2006, at 8:49 PM, Christian Rengstl wrote:
Obviously it had something to do with the rule, because now
everything finished within 20 minutes. the problem is just that i
don't really want to give up the inheritance design. is there a way
to maintain the inheritance that doesn't caus
Obviously it had something to do with the rule, because now everything finished
within 20 minutes. the problem is just that i don't really want to give up the
inheritance design. is there a way to maintain the inheritance that doesn't
cause this huge performance problem?
Q <[EMAIL PROTECTED]> w
Well, i'll try that, but honestly i don't think it's the rule as the rule is
really simple: it's just one simple integer comparison...
Q <[EMAIL PROTECTED]> wrote on 07/19/06 11:54 am:
> On 19/07/2006, at 6:32 PM, Christian Rengstl wrote:
>
>> The analyze is from the exact query and i dropped th
On 19/07/2006, at 6:32 PM, Christian Rengstl wrote:
The analyze is from the exact query and i dropped the indexes
before the insert as well without improvement. The target table is
as well completely empty and the insert is supposed to write, in
this case, more or less 8 million lines in the
The analyze is from the exact query and i dropped the indexes before the insert
as well without improvement. The target table is as well completely empty and
the insert is supposed to write, in this case, more or less 8 million lines in
the table. There is a rule though, because i have inheritanc
The thing is that in the text file there is a column that is something like
xyz_12 and in the table i just need the integer part of it; that's what the
query is used for. The problem though is not really in the select part, because
running the select part on 8 million lines takes about 3 minutes,
On 19/07/2006, at 4:24 AM, Christian Rengstl wrote:
now finally after a long time i have the query plan for the whole
filled table. I hope somebody can explain to me why it takes so much
longer...
These explain analyze results don't appear to be from the queries you
posted previously. For
On 7/18/06, Christian Rengstl
<[EMAIL PROTECTED]> wrote:
now finally after a long time i have the query plan for the whole filled table.
I hope somebody can explain to me why it takes so much longer...
QUERY PLAN
now finally after a long time i have the query plan for the whole filled table.
I hope somebody can explain to me why it takes so much longer...
QUERY PLAN
---
now finally after a long time i have the query plan for the whole filled
table. I hope somebody can explain to me why it takes so much longer...
QUERY PLAN
-
On 7/14/06, Christian Rengstl
<[EMAIL PROTECTED]> wrote:
Hi,
somehow my reply yesterday got lost, but nevertheless here comes the
explain analyze again. It's the explain of the operation that causes
this huge performance discrepancy. Unfortunately i had to perform the
explain analyze with an emp
Hi,
somehow my reply yesterday got lost, but nevertheless here comes the
explain analyze again. It's the explain of the operation that causes
this huge performance discrepancy. Unfortunately i had to perform the
explain analyze with an empty temp_table, because after the whole
operation is done, i
On 7/13/06, Christian Rengstl
<[EMAIL PROTECTED]> wrote:
Good morning list,
the following query takes about 15 to 20 minutes for around 2 million lines in
the file myfile.txt, but with 8 million lines it takes around 5 hours and i
just don't understand why there is such a huge discrepancy in p
Good morning list,
the following query takes about 15 to 20 minutes for around 2 million lines in
the file myfile.txt, but with 8 million lines it takes around 5 hours and i
just don't understand why there is such a huge discrepancy in performance.
COPY public.temp_table FROM 'myfile.txt' WITH
Hi,
> Generally you want '=' conditions on the leftmost index keys; any
> inequality or range constraint should be on the rightmost
> keys. You can see this by thinking about the range of index entries that
> the scan will have to pass over.
I see. Just like in your earlier example, where you red
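Concretely, a hedged illustration of that advice using the column names that appear in the plan fragment quoted elsewhere in this thread: put the equality-tested column first and the range-tested column last.
CREATE INDEX t_stockchanges_stockid_date_idx
    ON t_stockchanges (stockid, date);
-- A scan for  stockid = 1 AND date <= '2004.06.28'  then only has to walk
-- the narrow slice of index entries belonging to that stockid.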
Hi Tom,
> Good, but you're not there yet --- the Sort step shouldn't be there at
> all. You've still got some inconsistency between the ORDER BY and the
> index. Check my example again.
yes yes I missed that, sorry. Now don't mention the performance because I
couldn't see anything but the result.
Együd Csaba <[EMAIL PROTECTED]> writes:
> Limit (cost=30.28..30.28 rows=1 width=58) (actual time=0.19..0.19 rows=1
> loops=1)
> -> Sort (cost=30.28..30.30 rows=7 width=58) (actual time=0.18..0.18
> rows=2 loops=1)
> Sort Key: stockid, productid, changeid, date, "time
TECTED]
> Cc: 'Alvaro Herrera'; '[EMAIL PROTECTED] (E-mail)'
> Subject: Re: [GENERAL] Performance problem on RH7.1
>
>
> Együd Csaba <[EMAIL PROTECTED]> writes:
> >> I'd also suggest dropping the EXECUTE approach, as this is
&g
Együd Csaba <[EMAIL PROTECTED]> writes:
>> I'd also suggest dropping the EXECUTE approach, as this is costing you
>> a re-plan on every call without buying much of anything.
> Do you mean I should use PERFORM instead? Or what else?
> Do you mean the "for R in execute" statements
> The major time sink is clearly here:
>
> > -> Index Scan using t_stockchanges_fullindex on
> t_stockchanges
> > (cost=0.00..28.74 rows=7 width=46)
> > (actual time=0.14..9.03 rows=6 loops=1)
> >Index Cond: ((date <= '2004.06.28'::bpchar)
> AND (stockid = 1)
Együd Csaba <[EMAIL PROTECTED]> writes:
> It is strange that the laptop is substantially faster than the server. The
> get_stock* functions are executed 2-3 times faster.
So what do those stored procedures do exactly?
What it smells like to me is a bad plan for a query executed in
hing
wrong.
Thank you all.
Best regards,
-- Csaba Együd
> -Original Message-
> From: Alvaro Herrera [mailto:[EMAIL PROTECTED]
> Sent: 27 June 2004, 3:38
> To: Együd Csaba
> Cc: [EMAIL PROTECTED] (E-mail)
> Subject: Re: [GENERAL] Performance problem on RH7.1
>
>
&g
On Sat, Jun 26, 2004 at 12:16:17PM +0200, Együd Csaba wrote:
> I've a problem with the performance of the production environment.
> I've two db servers. One on my laptop computer (2Ghz, 1GB, WinXP, Cygwin,
> Postgres 7.3.4) and one on a production server (2GHz, 1GB, Ultra SCSI,
> RH7.1, Postgres 7
On Sat, 2004-06-26 at 04:16, Együd Csaba wrote:
> Hi All,
> I've a problem with the performance of the production environment.
> I've two db servers. One on my laptop computer (2Ghz, 1GB, WinXP, Cygwin,
> Postgres 7.3.4) and one on a production server (2GHz, 1GB, Ultra SCSI,
> RH7.1, Postgres 7.3.2
Howard, Steven (US - Tulsa) wrote:
I have created a web app that stores and displays all the messages from
my database maintenance jobs that run each night. The web app uses Java
servlets and has PostgreSQL 7.0 as the back end.
When the user requests the first page, he gets a list of all the ser
Tulsa)
Cc: pgsql-general @ postgresql . org
Subject: Re: [GENERAL] Performance problem with correlated sub-query
On 29/04/2004 14:34 "Howard, Steven (US - Tulsa)" wrote:
> I have created a web app that stores and displays all the messages
from
> my database maintenance jobs that ru
On 29/04/2004 14:34 "Howard, Steven (US - Tulsa)" wrote:
I have created a web app that stores and displays all the messages from
my database maintenance jobs that run each night. The web app uses Java
servlets and has PostgreSQL 7.0 as the back end.
7.0? That's positively ancient!
When the user req
On Thu, 29 Apr 2004, Howard, Steven (US - Tulsa) wrote:
> I have created a web app that stores and displays all the messages from
> my database maintenance jobs that run each night. The web app uses Java
> servlets and has PostgreSQL 7.0 as the back end.
Step 1 is upgrade. ;)
> However, if the w
Howard, Steven (US - Tulsa) wrote:
select servername, databasename, message from messages o where
o.date_of_msg = (select max(date_of_msg) from messages i where
i.servername = o.servername);
And this is a dog. It takes 15 – 20 minutes to execute the
query (there are about 200,000 rows in the table)
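One standard rewrite for this kind of "latest row per server" query, offered as a hedged sketch and not necessarily what was suggested in the replies, is DISTINCT ON backed by an index on (servername, date_of_msg):
SELECT DISTINCT ON (servername)
       servername, databasename, message
FROM messages
ORDER BY servername, date_of_msg DESC;
CREATE INDEX messages_server_date_idx ON messages (servername, date_of_msg);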
I have created a web app that stores and displays all the
messages from my database maintenance jobs that run each night. The web app
uses Java servlets and has PostgreSQL 7.0 as the back end.
When the user requests the first page, he gets a list of all
the servers with maintenance reco
hello again,
i made some mistakes: site-id was wrong in my simple select queries.
(interesting that no matches would take double the time on 7.4)
here are the 'right' results, showing me that postgres 7.4 is slightly
slower with simple queries (but 1000 times faster with more complex
queries ;-)
wit
hello,
>Richard Huxton wrote:
>What I suggest:
>1. Compare the two postgresql.conf files and any other config settings and
>make sure you know what differences there are and why.
>2. Identify what queries seem to be the cause of the problem, and pick one you
>think is a good example.
>3. VA
On Saturday 17 April 2004 13:23, Development - multi.art.studio wrote:
> hello everyone,
>
> Richard Huxton wrote:
> >It's not using the date index because it's using the id index - there's
> > only 10 matches, so that looks like a good choice to me. It takes less
> > than 1ms, so I'm not sure this
Hello,
You are probably missing a step somewhere... I know you got your data
imported but you might try
using the pg_dump from 7.4.2 to grab the 7.1 database and import from
there. It sounds to me like you
are missing an index or something.
Also it would help if we knew your data structure, if
Hello everyone,
i just upgraded my old postgres-database from version 7.1 to 7.4.2.
i dumped out my 7.1 database (with pg_dump from 7.1) as an sql-file with
copy-commands and to one file using insert-statements.
after initializing and starting postgres 7.4 on a different port and
datadirectory,
On Thu, 2001-10-11 at 00:03, Feite Brekeveld wrote:
> Just, reading it and I always enter these kind of queries like:
>
> select i.isbn,
> t.vendor,
> i.store,
> i.qty
> from bk_inv i,
> bk_title t
> where
>t.vendor = '01672708' and
>i.is
David Link wrote:
Just reading it, and I always enter these kinds of queries like:
select i.isbn,
t.vendor,
i.store,
i.qty
from bk_inv i,
bk_title t
where
t.vendor = '01672708' and
i.isbn = t.isbn;
Don't know if it makes a difference but I
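If it helps, a hedged sketch of the indexes this join would normally want, assuming none exist yet (this is not from the truncated reply):
CREATE INDEX bk_title_vendor_idx ON bk_title (vendor);
CREATE INDEX bk_inv_isbn_idx ON bk_inv (isbn);
-- With these, the planner can find the titles for vendor '01672708' by index
-- and probe bk_inv by isbn, instead of sequentially scanning both tables.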
I'm new to PG but this just seems wrong. Can someone take a look:
.-----------.         .-----------.
| bk_inv    |         | bk_title  |
|-----------|         |-----------|
| isbn      |<------->| isbn      |
| store     |         | vendor    |
| qty       |         |           |
| week      |         `-
Hi,
I had the following problem: I have a table with around 5Mio entries.
I joined this table with a table with about 50 entries, doing an insert into
another_table. I am sure the select-statement might be
optimized but after 6 days of running I had to reboot the machine for other
reasons. The Ma
Have you done a vacuum analyze on the database? And what
does explain show for the query?
On Wed, 24 Jan 2001, Andreas Rieke wrote:
>
> SELECT DISTINCT level_1_de from property_list pl, properties p,
> materials m where m.oid=p.material_oid and p.property_list_oid=pl.oid
> and m.is_new=FALSE;
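For reference, the two things being asked for here, written out as a minimal sketch against the query quoted above:
VACUUM ANALYZE;
EXPLAIN
SELECT DISTINCT level_1_de
FROM property_list pl, properties p, materials m
WHERE m.oid = p.material_oid
  AND p.property_list_oid = pl.oid
  AND m.is_new = FALSE;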
Hi there,
maybe anyone can help me with the following problem.
Using PostgreSQL 7.03 on a Suse 6.3 Linux (kernel 2.2.13) with a P3/550
MHz,
I get a performance problem with a select command which takes up to 10
seconds.
The following tables are involved in this command:
CREATE TABLE "property_li