Tom Tamulewicz wrote:
SELECT p.party_id, p.first_name, p.last_name, pli.address1,
pli.city, pli.state FROM customer AS p JOIN address AS pli ON (
p.party_id = pli.party_id ) WHERE ( p.void_flag IS NULL OR p.void_flag = false ) ...
From: Michael Glaesemann <[EMAIL PROTECTED]>
To: Tom Tamulewicz <[EMAIL PROTECTED]>
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow join query
Date: Fri, 22 Jun 2007 14:51:32 -0500

On Jun 22, 2007, at 13:32 , Tom Tamulewicz wrote:
> ( p.void_flag IS NULL OR p.void_flag = false
Hello Dimitri,
>Rainer, looking at the psqlODBC source code it seems to work in a similar
>way and has an option "SQL_ROWSET_SIZE" to execute the FETCH query the
>same way as "FETCH_COUNT" in psql. Try setting it to 100 and let's see
>if it'll be better...
But that is only for bulk fetching with SQLExtendedFetch ...
[Please don't top post as it makes the discussion more difficult to
follow.]
On Jun 22, 2007, at 16:25 , Tom Tamulewicz wrote:
> The explain is as follows...
EXPLAIN ANALYZE, please. (And for convenience, it helps if you
include the query :) )
Michael Glaesemann
grzm seespotcode net
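For reference, that would be the original query with EXPLAIN ANALYZE
prepended — a sketch, abbreviating the WHERE clause to the part shown
upthread:

EXPLAIN ANALYZE
SELECT p.party_id, p.first_name, p.last_name, pli.address1, pli.city, pli.state
FROM customer AS p
JOIN address AS pli ON (p.party_id = pli.party_id)
WHERE (p.void_flag IS NULL OR p.void_flag = false);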
The explain is as follows...
QUERY PLAN
On Jun 22, 2007, at 13:32 , Tom Tamulewicz wrote:
( p.void_flag IS NULL OR p.void_flag = false )
Just a note: you can rewrite (a IS NULL or a = false) as (a IS NOT
TRUE). Shouldn't affect performance, but might make your query easier
to read.
What's the EXPLAIN ANALYZE output for this query?
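To make the suggested rewrite concrete, both predicates below accept rows
where the flag is NULL or false (column name taken from the query above):

-- original form
WHERE ( p.void_flag IS NULL OR p.void_flag = false )
-- equivalent, and arguably easier to read
WHERE p.void_flag IS NOT TRUE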
On Fri, 22 Jun 2007, Sabin Coanda wrote:
Instead of (or in addition to) configuring dozens of settings, what would
you say to a feedback-based adaptive control built on the existing system
statistics and log parsing (e.g.
http://pgfouine.projects.postgresql.org/index.html )?
something like this w
I have a query that runs about 30-50 seconds. The query is a join between 2 tables (customer and address), each table with about 400,000 rows. My customer table has fields like first_name and last_name where the address table has city, state, etc. I'm using "like" in most of the query columns, w
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:
>> If you saw a problem I'd be inclined to question whether
>> there is some upstream component (OS or disk controller) that's
>> reordering writes.
> Given that's exactly what they do, constantly, I don'
On Fri, 2007-06-22 at 13:12 -0400, Tom Lane wrote:
> Dan Gorman <[EMAIL PROTECTED]> writes:
> > This snapshot is done at the LUN (filer) level, postgres is unaware
> > we're creating a backup, so I'm not sure how pg_start_backup() plays
> > into this ...
>
> That method works too, as long as you snapshot both the data files and
> WAL files ...
Dan Gorman <[EMAIL PROTECTED]> writes:
> This snapshot is done at the LUN (filer) level, postgres is unaware
> we're creating a backup, so I'm not sure how pg_start_backup() plays
> into this ...
That method works too, as long as you snapshot both the data files and
WAL files --- when you sta
Toru SHIMOGAKI wrote:
> Joshua D. Drake wrote:
>
>>> - If we don't use a hardware level snapshot operation, it takes a long
>>> time to take a large backup, and a lot of full-page-written WAL files
>>> are made.
>> Does it? I have done it with fairly large databases without issue.
>
You mean hardware snapshot? ...
Rainer, looking at the psqlODBC source code it seems to work in a similar
way and has an option "SQL_ROWSET_SIZE" to execute the FETCH query the
same way as "FETCH_COUNT" in psql. Try setting it to 100 and let's see
if it'll be better...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>> So, if any data is written while taking the snapshot, we can't *strictly*
>> assure data correctness.
That sounds nothing like what I've heard called a "snapshot" before. Some
"filesystems" which aren't really filesystems, but storage-layer drivers
like Veritas (and ZFS?), allow you to
Hello Joshua,
>That opens up some questions. What ODBC driver are you using (with exact
>version please).
psqlODBC 8.2.4.2 (built locally).
I have restored 8.2.4.0 from the official MSI installer, but the results
are the same.
Rainer
Rainer Bauer wrote:
Hello Dimitri,
>Hope it's clearer now and that at least there is a choice :))
>Also, if your query result is only 500 rows (for example), I think the
>difference between non-CURSOR and "FETCH 500" execution will be less
>significant...
The problem is that I am using ODBC and not libpq di
Hello Dimitri,
>Hope it's clearer now and that at least there is a choice :))
>Also, if your query result is only 500 rows (for example), I think the
>difference between non-CURSOR and "FETCH 500" execution will be less
>significant...
The problem is that I am using ODBC and not libpq directly.
I will have
Steven Flatt wrote:
> Thanks everyone. It appears that we had hacked the 502.pgsql script for our
> 8.1 build to disable the daily vacuum. I was not aware of this when
> building and upgrading to 8.2.
>
> So it looks like for the past two weeks, that 36 hour db-wide vacuum has
> been running
Toru SHIMOGAKI wrote:
Steve Atkins wrote:
- When we take a PITR base backup with a hardware level snapshot operation
(not filesystem level), which a lot of storage vendors provide, the backup
data can be corrupted, as Dan said. During recovery we can't even read it,
especially if meta-data
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, isn't
it?
Correct. I use a pyro-remote server. On request this remote server copies
the relevant rows into a temporary table, u
""Campbell, Lance"" <[EMAIL PROTECTED]> wrote in message news:[EMAIL
PROTECTED]
Below is a link to the HTML JavaScript configuration page I am creating:
http://www.webservices.uiuc.edu/postgresql/
I had many suggestions. Based on the feedback I received, I put together the
i
PFC,
> Correct. I use a pyro-remote server. On request this remote server copies
> the relevant rows into a temporary table, uses a copy_to call to push them
> into a StringIO object (that's Python's version of an in-memory file),
> serializes that StringIO object, does a bz2 compression and trans
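The server-side half of that workaround boils down to a temp table plus
COPY; a minimal SQL sketch of the idea (the table name and filter here are
hypothetical — the real selection logic lives in the Pyro server):

CREATE TEMP TABLE bulk_export AS
  SELECT * FROM items WHERE item_group = 42;  -- stage the relevant rows
COPY bulk_export TO STDOUT;  -- streamed to the client, compressed there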
Hello Rainer,
initially I was surprised you did not match the non-CURSOR time with
FETCH 100, but after thinking about it a little the explanation is very
simple - let's analyze what's going on in both cases:
Without CURSOR:
1.) app calls PQexec() with "Query" and waits for the result
2.) PG sends the resu
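With FETCH_COUNT set, psql instead wraps the query in a cursor — roughly
this sequence of generated statements (a sketch; the cursor name is an
internal detail):

BEGIN;
DECLARE _psql_cursor NO SCROLL CURSOR FOR
    SELECT * FROM "tblItem";
FETCH FORWARD 100 FROM _psql_cursor;  -- repeated until fewer rows return
CLOSE _psql_cursor;
COMMIT;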
Greg Smith writes:
Unfortunately the existence of the RAID-6 capable Adaptec 2820SA proves
this isn't always the case.
For SATA, 3ware and Areca seem to perform well with RAID 6 (from the few
posts I have read on the subject).
Don't know of SCSI controllers though.
On Fri, 2007-06-22 at 17:23 +0900, Toru SHIMOGAKI wrote:
> Dan Gorman wrote:
> > Here is an example. Most of the snapshots worked fine, but I did get
> > this once:
>
> Thank you for your example. I'd appreciate any responses on whether
> we should tackle the problem for 8.4.
If
Hello Dimitri,
>Let's stay optimistic - at least now you know the main source of your problem!
>:))
>
>Let's see now with CURSOR...
>
>Firstly try this:
>munnin=>\timing
>munnin=>\set FETCH_COUNT 1
>munnin=>select * from "tblItem";
>
>what's the time you see here? (I think your application is work
Wasn't it select pg_start_backup('backuplabel');?
Andreas
Kurt Overberg wrote:
> You can use the psql command line to run:
>
> "select pg_start_backup();"
>
> ...then when you're done,
>
> "select pg_stop_backup();"
>
> if you want an example fro
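For the archives: pg_start_backup() does require a text label argument in
8.x, so the corrected sequence would be (the label itself is arbitrary):

SELECT pg_start_backup('backuplabel');
-- copy the data directory or take the snapshot here
SELECT pg_stop_backup();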
You can use the psql command line to run:
"select pg_start_backup();"
...then when you're done,
"select pg_stop_backup();"
if you want an example from the unix command line:
psql -c "select pg_start_backup();" database_name
then
psql -c "select pg_stop_backup();" database_name
/kurt
On J
On Fri, 2007-06-22 at 02:32 -0400, Greg Smith wrote:
> On Thu, 21 Jun 2007, Scott Marlowe wrote:
>
> > Generally I agree, however, how about a first switch, for beginner /
> > intermediate / advanced.
>
> You're describing a perfectly reasonable approach for a second generation
> tool in this a
Ah okay, I understand now. So how can I signal postgres that I'm about to
take a backup? (read the doc from the previous email?)
Regards,
Dan Gorman
On Jun 22, 2007, at 4:38 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:
This snapshot is done at the LUN (filer) level, pos
On Fri, 2007-06-22 at 04:10 -0700, Dan Gorman wrote:
> This snapshot is done at the LUN (filer) level, postgres is unaware
> we're creating a backup, so I'm not sure how pg_start_backup() plays
> into this ...
Postgres *is* completely unaware that you intend to take a backup, that
is *exactly
This snapshot is done at the LUN (filer) level, postgres is unaware
we're creating a backup, so I'm not sure how pg_start_backup() plays
into this ...
Regards,
Dan Gorman
On Jun 22, 2007, at 3:55 AM, Simon Riggs wrote:
On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:
Tom Lane wro
On Fri, 2007-06-22 at 11:30 +0900, Toru SHIMOGAKI wrote:
> Tom Lane wrote:
> > Dan Gorman <[EMAIL PROTECTED]> writes:
> >>All of our databases are on NetApp storage and I have been looking
> >> at SnapMirror (PITR RO copy) and FlexClone (near instant RW volume
> >> replica) for backing up
Rainer,
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, isn't
it?
Correct. I use a pyro-remote server. On request this remote server copies
the relevant rows into a temporary tab
Dave Page wrote:
I don't see why pgAdmin should be slow though - it should be only
marginally slower than psql I would think (assuming there are no thinkos
in our code that none of us ever noticed).
Nevermind...
/D
Rainer Bauer wrote:
>It's not immediately clear why pgAdmin would have the same issue,
>though, because AFAIK it doesn't rely on ODBC.
No it doesn't. That's the reason I used it to verify the behaviour.
But I remember Dave Page mentioning using a virtual list control to display
the results and t
Hello Dimitri,
>Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...
Yes, I tried it with different values and it's as you suspected:
FETCH_COUNT 1 Time: 8642,000 ms
FETCH_COUNT 5 Time: 2360,000 ms
FETCH_COUNT 10 Time: 1563,000 ms
FETCH_COUNT 25 Time: 1329,000 ms
FETCH_COUN
Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Dimitri,
>Let's stay optimistic - at least now you know the main source of your
problem! :))
>
>Let's see now with CURSOR...
>
>Firstly try this:
>munn
Hello Tom,
>This previous post says that someone else solved an ODBC
>performance problem with UseDeclareFetch=1:
I thought about that too, but enabling UseDeclareFetch will slow down the
query: it takes 30 seconds instead of 8.
>It's not immediately clear why pgAdmin would have the same issue,
Dan Gorman wrote:
Here is an example. Most of the snapshots worked fine, but I did get
this once:
Thank you for your example. I'd appreciate any responses on whether
we should tackle the problem for 8.4.
Regards,
--
Toru SHIMOGAKI<[EMAIL PROTECTED]>
NTT Open Source Software Ce
Tom,
it seems to me the problem here is rather simple: the current issue depends
completely on the low-level 'implementation' of the SELECT query in the
application. In case it's implemented with "DECLARE ...
CURSOR ..." and then "FETCH NEXT" by default (the most common case), it
brings the application into
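The difference is easy to see in plain SQL; a sketch (the cursor name and
table are illustrative, "tblItem" taken from the test upthread):

BEGIN;
DECLARE odbc_cursor CURSOR FOR SELECT * FROM "tblItem";
FETCH NEXT FROM odbc_cursor;  -- default: one row per client-server round trip
FETCH 100 FROM odbc_cursor;   -- batched: one round trip per 100 rows
CLOSE odbc_cursor;
COMMIT;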
Here is an example. Most of the snapshots worked fine, but I did get
this once:
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [9-1] 2007-06-21
00:39:43 PDT LOG:  redo done at 71/99870670
Jun 21 00:39:43 sfmedstorageha001 postgres[3506]: [10-1] 2007-06-21
00:39:43 PDT WARNING:  page 28905
Joshua D. Drake wrote:
>> - If we don't use a hardware level snapshot operation, it takes a long
>> time to take a large backup, and a lot of full-page-written WAL files
>> are made.
>
> Does it? I have done it with fairly large databases without issue.
You mean hardware snapshot? I know ta
Steve Atkins wrote:
- When we take a PITR base backup with a hardware level snapshot operation
(not filesystem level), which a lot of storage vendors provide, the backup
data can be corrupted, as Dan said. During recovery we can't even read it,
especially if meta-data was corrupted.
I can'