Hi All,
I have a poorly performing SQL query, shown below. The table has about 200M
records, and each employee has on average 100 records. The query takes about
3 hours. All I want is to update the flag on the highest version of each
client's record. Any suggestion is welcome!
Thanks,
Mike
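[A sketch of the kind of statement in question, since the original SQL isn't
shown; the table and column names (records, client_id, version, flag) are
hypothetical. At this scale, an UPDATE ... FROM against a derived table of
per-client maxima usually beats a correlated subquery:

    UPDATE records r
    SET    flag = true
    FROM  (SELECT client_id, max(version) AS max_version
           FROM   records
           GROUP  BY client_id) m
    WHERE  r.client_id = m.client_id
      AND  r.version   = m.max_version;
]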
So your update doesn't take long to run during off-peak times; basically
your options are:
1. Optimize your postgresql.conf settings or upgrade to the latest
version of PostgreSQL.
2. Redesign your forum code so it can scale better.
3. Upgrade your server's hardware, as it may be overloaded.
shrink the time for the first query on any particular set of words,
and it would make everything else faster as a nice side effect. What
does your disk layout look like now?
--
Mike Rylander
[EMAIL PROTECTED]
GPLS -- PINES Development
Database Developer
http://open-ils.org
--
tement instead of COPY.
The hardware is a Dell dual-Xeon system; the disks are mirrored SATA
drives with write buffering turned off.
--
Mike Nolan
, which is about 4200 transactions/second.
--
Mike Nolan
ated to various tasks?
shared buffers: 15000
> What else is happening on your system?
Nothing on mine.
> Tell us more about disk set-up and other hardware related things.
6-disk RAID10 on a Compaq SmartArray 6404 with 256M BB cache, WAL on
2-disk mirror on a built-in SmartArray 5 controller
L585 with 16G of RAM and an HP SCSI array, and
I have absolutely zero complaints. :)
--
Mike Rylander
n any list wander on so
aimlessly for so long.
Please, mommy, make it stop!
--
Mike Rylander
larly load several
tables totaling around 50M rows with a single COPY per table. I drop
(actually, this is during DB reload, so I don't yet create...) all
fkeys, constraints and indexes, and the data loads in a matter of 5
minutes or so.
Hope that helps!
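[A sketch of that drop-load-recreate pattern; all object names here are
hypothetical:

    ALTER TABLE big_table DROP CONSTRAINT big_table_ref_fkey;  -- drop fkeys/constraints
    DROP INDEX big_table_idx;                                  -- and indexes

    COPY big_table FROM '/path/to/big_table.copy';             -- bulk load

    CREATE INDEX big_table_idx ON big_table (some_col);        -- rebuild afterwards
    ALTER TABLE big_table ADD CONSTRAINT big_table_ref_fkey
        FOREIGN KEY (ref_id) REFERENCES ref_table (id);
    ANALYZE big_table;                                         -- refresh statistics
]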
--
Mike Rylander
Hello,
I have a table collecting stats that shows 5 Index Tuples Fetched but no Index
Scans. Should there not be at least one Index Scan showing in the stats?
Mike
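[The counters in question can be read straight from the statistics views;
a sketch, with a hypothetical table name:

    SELECT relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
    FROM   pg_stat_user_indexes
    WHERE  relname = 'your_table';
]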
> This can often be called for. I'm working on a 400GB data warehouse right
> now, and almost *all* of our queries run from materialized aggregate tables.
I thought that was pretty much the definition of data warehousing! :-)
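[A minimal sketch of such a materialized aggregate table, with hypothetical
names; in this era it would be refreshed by hand (e.g. from cron) rather
than by a built-in materialized view:

    CREATE TABLE sales_daily AS
    SELECT sale_date, product_id, sum(amount) AS total_amount, count(*) AS n_rows
    FROM   sales
    GROUP  BY sale_date, product_id;

    -- periodic refresh:
    TRUNCATE sales_daily;
    INSERT INTO sales_daily
    SELECT sale_date, product_id, sum(amount), count(*)
    FROM   sales
    GROUP  BY sale_date, product_id;
]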
--
Mike Nolan
642
pages swapped 128
Swap performance: 0 attempts 0 successes
-- Semaphore Status
used arrays = 7
allocated semaphores = 119
-- Messages: Status
allocated queues = 0
used headers = 0
used space = 0 bytes
Did you perhaps disable spinlocks when compiling PG?
--
Mike
ingly low overhead.
My $0.02, hope it helps.
--
Mike Rylander
ster though and did pick up the correct child table. It was also a
bitmap scan on the index IIRC.
Would I be better off creating many partial indexes instead of multiple tables AND multiple indexes?
Am I using a horrid method for partitioning the data? (% 10)
Should there be that big of an improvement for multiple tables given that all the data is still stored on the same filesystem?
Any advice on table splitting much appreciated.
Cheers,
Mike C.
On 12/12/05, Tom Lane <[EMAIL PROTECTED]> wrote:
Mike C <[EMAIL PROTECTED]> writes:
> CLUSTER on PC_TRAFFIC_IDX3 gives me significantly improved performance:
How can you tell? Neither of these are EXPLAIN ANALYZE output.
regards,
tom lane
Sorry that'
is
a bit pricey for the budget).
Cheers,
Mike
wal_buffers        | 8     | Sets the number of disk-page buffers in shared memory for WAL.
wal_sync_method    | fsync | Selects the method used for forcing WAL updates out to disk.
work_mem           | 1024  | Sets the maximum memory to be used for query workspaces.
zero_damaged_pages | off   |
Thanks in advance for any help you can offer on this problem.
-Mike
be any more elegant.
-Mike
On Feb 16, 2007, at 9:46 AM, Alvaro Herrera wrote:
Gauri Kanekar wrote:
I want the planner to ignore a specific index.
I am testing some query output, and for that purpose I don't want the
index used.
Is it possible for the planner to ignore an index?
Sure:
BEGIN
DROP
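[The snippet is cut off here; presumably the usual trick of dropping the
index inside a transaction and rolling back, roughly:

    BEGIN;
    DROP INDEX the_index;    -- hypothetical name; the planner can no longer choose it
    EXPLAIN ANALYZE SELECT * FROM the_table WHERE some_col = 42;
    ROLLBACK;                -- the index comes back untouched

Note that DROP INDEX holds an exclusive lock on the table until the
transaction ends, so this is strictly a testing technique.]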
o force it into a filter, but it's a messy hack. I've tried
ordering the joins in the most efficient way with a
join_collapse_limit of 1, but it still uses this index in
parallel with searching an index on another table (I guess the
planner figures it's saving some ti
> project. It would have saved us a lot of time by having a
> configuration tool in the beginning. I am willing to make this a very
> high priority for my team.
>
> Thanks,
>
> Lance Campbell
> Project Manager/Software Architect
> Web Services at Public Affairs
> University of Illinois
> 217.333.0382
> http://webservices.uiuc.edu
--
Mike Benoit <[EMAIL PROTECTED]>
Oops. Realized I posted the wrong SQL and EXPLAIN ANALYZE results.
Also forgot to mention that my "server" has 1.5 GB memory.
SELECT "PrintSamples"."MachineID", "PrintSamples"."PrintCopyID",
"tblColors"."ColorID", avg("ParameterValues"."ParameterValue") AS
"Mottle_NMF"
FROM "AnalysisModules"
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Friday, August 10, 2007 5:44 PM
> To: Relyea, Mike
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Help optimize view
>
> Try increasing join_collapse_limit --- you have ju
> >>> On Mon, Aug 13, 2007 at 10:35 AM, in message
> <[EMAIL PROTECTED].net>, "Relyea, Mike" <[EMAIL PROTECTED]> wrote:
> > I'm running 8.2.4 on Windows XP with 1.5 GB memory.
> > shared_buffers = 12288
> > effective_cache_size =
> >>> On Mon, Aug 13, 2007 at 1:48 PM, in message
> <[EMAIL PROTECTED].net>, "Relyea, Mike" <[EMAIL PROTECTED]> wrote:
> > I've increased shared_buffers to 128MB, and restarted the server. My
> > total run time didn't
I have the following view as part of a larger, aggregate query that is
running slower than I'd like. There are 4 views total, each very
similar to this one. Each of the views is then left joined with data
from some other tables to give me the final result that I'm looking for.
I'm hoping that
3
> AND "PrintSamples"."TestPatternName" LIKE 'IQAF-TP8%';
The query I really want to run is several times larger than this. I
didn't think people would want to wade through pages and pages worth of
SQL and then explain analyze results - espec
aused this to be set in a non-default manner if possible.
#seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 3.0 # same scale as above
Why does the smaller limit cause it to skip the index?
Is there a way to help the planner choose the better plan?
Much appreciated,
Mike
, this particular table is about 1.6GB
and growing. Currently there are jobs that query from this table every
minute.
Thanks again
Mike
On Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner wrote:
> Mike Broers wrote:
>
> > Hello performance, I need help explaining the performance of
, table
Try
order by created_at+0
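[The +0 turns the sort key into an expression that no longer matches the
index on created_at, so the planner can't walk that index to satisfy the
ORDER BY ... LIMIT and must pick another plan. A sketch against the table
from this thread; for a timestamp column the no-op would be "+ interval '0'":

    SELECT *
    FROM   landing_page.messages
    ORDER  BY created_at + interval '0'
    LIMIT  10;
]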
On Thu, Jan 6, 2011 at 3:36 PM, Mike Broers wrote:
> Thanks for the assistance.
>
> Here is an explain analyze of the query with the problem limit:
>
> production=# explain analyze select * from landing_page.messages where
> ((messages.topi
Thanks Robert, this is what I was looking for. I will try these suggestions
and follow up if any of them are the silver bullet.
On Fri, Jan 14, 2011 at 7:11 AM, Robert Haas wrote:
> On Thu, Jan 6, 2011 at 4:36 PM, Mike Broers wrote:
> > Thanks for the assistance.
> > Here is an
em.
It's difficult to just "try" various ideas because each attempt takes a
full day to test. Real
experience is needed here!
Thanks much,
Mike
process.
I can confirm this behavior after looking at my multipathed fibre
channel SAN. To the best of my knowledge, the multipathing code uses
the same underlying I/O code as the Linux SW RAID logic.
--
Mike Rylander
is second.
-- Mike Quinn
-- the worst way --
EXPLAIN ANALYZE
SELECT
    Locts.id,
    Commtypes.name
FROM
    Growers, Locts, Crops, Commtypes
WHERE
    Growers.id = '0401606'
    AND
    -- Commtypes.number = Crops.Commtype
    Crops.Commtype = Commtypes.number
    AND
    Locts.number = Crops.Loct
    -- Crops.Loc
The datatype of the join columns is a user-defined type and there are no
commutators defined. I will fix that and retest. Thanks for the
insight.
Mike Quinn
dex Scan using crops_loct on crops (cost=0.00..118.53
rows=42 width=24) (actual time=0.007..0.018 rows=8 loops=9)
Index Cond: (crops.loct = "outer".number)
-> Index Scan using commtypes_number_key on commtypes
(cost=0.00..4.19 rows=1 width=26) (actual time=0.005..0.0
ed the clients a few times without success. I've
been running a long time on 7.2 with essentially the same
configuration (minus pg_autovacuum) without any problems
Thanks for any help,
-Mike
I was getting
two lines in "top" (on Linux) with 99% utilization---I assume with two
runaway queries.)
Thanks for your help!
-Mike
ammers working
on tuning issues for SQL Server than PostgreSQL has working on the
whole project.
--
Mike Nolan
production server.
The same query a while later might respond quickly again.
I'm not sure where to look for the delay, either, and it is intermittent
enough that I'm not even sure what monitoring techniques to use.
--
Mike Nolan
le-clicks even
when they only want one click.
--
Mike Nolan
at match the structure of
the tables they are logging.
2. Write a trigger function that converts columns to something you can
store in a common log table. (I've not found a way to do this without
inserting one row for each column being logged, though.)
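[A minimal sketch of option 2, with hypothetical names, showing the
one-row-per-column shape just mentioned:

    CREATE TABLE change_log (
        table_name  text,
        column_name text,
        old_value   text,
        logged_at   timestamptz DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION log_changes() RETURNS trigger AS $$
    BEGIN
        -- one INSERT per logged column, cast to text for the common table
        IF NEW.status IS DISTINCT FROM OLD.status THEN
            INSERT INTO change_log (table_name, column_name, old_value)
            VALUES (TG_TABLE_NAME, 'status', OLD.status::text);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_log_changes BEFORE UPDATE ON some_table
        FOR EACH ROW EXECUTE PROCEDURE log_changes();
]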
--
Mike Nolan
Tom Lane wrote:
> Christopher Browne <[EMAIL PROTECTED]> writes:
>> A long time ago, in a galaxy far, farpliers [EMAIL PROTECTED] (Patrick
>> Hatcher) wrote:
>>> Answered my own question. I gave up the vacuum full after 150 mins. I
>>> was able to export to a file, vacuum full the empty table, a
this in the past, and it doesn't
take very much code modification.
--
Mike Benoit <[EMAIL PROTECTED]>
On Fri, 2004-08-06 at 23:18 +, Martin Foster wrote:
> Mike Benoit wrote:
>
> > On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:
> >
> >
> >>>The queries themselves are simple, normally drawing information from one
> >>>table with f
On 15 Sep 2004 23:55:24 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
>
> "J. Andrew Rogers" <[EMAIL PROTECTED]> writes:
>
> > We do something very similar, also using table inheritance
>
> I have a suspicion postgres's table inheritance will end up serving as a good
> base for a partitioned tabl
I just ran a COPY of a million records several times, and each time I
ran it, it was apparently exponentially slower. If I do an insert of
10 million records, even with 2 indexes (same table), it doesn't appear
to slow down at all. Any ideas?
- Mike H.
(I apologize for the ^Hs)
Script st
On Fri, 15 Oct 2004 11:54:44 -0500, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> My basic question to the community is "is PostgreSQL approximately as fast
> as Oracle?"
My personal experience comparing PG to Oracle is across platforms,
Oracle on Sun/Solaris (2.7, quad-proc R440) and PG on Intel
cache with Postgres files before
> you get to work ;-)
>
> regards, tom lane
done before,
but I've not found the magic spell. Of course, I may just be barking
up the wrong tree altogether...
Thanks in advance!
--
Mike Rylander
On Wed, 10 Nov 2004 18:50:28 -0800 (PST), George Essig
<[EMAIL PROTECTED]> wrote:
> Mike Rylander wrote:
>
> > I want to create a multi-column index across the tsvector and classification
> > columns as that should help with selectivity. But because there is no
>
> --
> greg
--
Mike Rylander
an using i_inventories1 on _inventories i
(cost=0.00..5.07 rows=1 width=16) (actual time=0.025..0.029 rows=1 loops=163561)
Index Cond: (i.inventory = "outer".inventory)
-> Index Scan using i_bins1 on _bins b (cost=0.00..3.40 rows=1
width=16) (actual ti
Mike Mascari wrote:
I have the following view:
create or replace view market.p_areas as
select a.*
from _areas a
where a.area in (
select b.area
from _bins b, _inventories i, _offers o, _pricemembers p
where b.bin = i.bin and
i.inventory = o.inventory and
o.pricegroup = p.pricegroup and
Tom Lane wrote:
Mike Mascari <[EMAIL PROTECTED]> writes:
When I query the view with a simple filter, I get:
explain analyze select * from p_areas where deactive is null;
The problem seems to be here:
-> Seq Scan on _areas a (cost=0.00..2.48 rows=1 width=163) (actual
time=0.037..0
this discriminatory policy toward null end for 8.0?
Mike Mascari
lyze each 5
> seconds'.
>
See this thread
(http://archives.postgresql.org/pgsql-hackers/2004-11/msg00985.php and
http://archives.postgresql.org/pgsql-hackers/2004-11/msg01080.php) for
an ongoing discussion of the issue.
--
Mike Rylander
ute, PG with linux-amd64 on HP/Opterons, as
opposed to the E20K monster that was originally spec'd out.
Mail me direct if you want the full spec list on this beast. And if
there is a ready-made benchmark anyone would like me to run, just drop
me a note.
--
Mike Rylander
> I don't know this for a fact, it is just what one of their
> "ahem" sales guys told me.
>
At least in that case they were being accurate. ;)
--
Mike Rylander
---
Limit (cost=0.00..16.19 rows=9 width=87) (actual time=0.026..0.035
rows=7 loops=1)
-> Index Scan using metarecord_field_entry_source_idx on
metarecord_field_entry (cost=0.00..16.19 rows=9 width=87) (actual
time=0.025..0
done in a
batch rather than individual transactions, but the planner doesn't recognize
that. When I run option a through the planner, I have to highlight each insert
separately, since the planner stops executing after the first ; it comes across.
Mike
On Fri, 10 Dec 2004 21:40:18 -0800, Josh Berkus <[EMAIL PROTECTED]> wrote:
> Mike,
> The fact that the estimator knows that the LIMIT is pointless because there
> are less rows in the subselect than the LIMIT will return is not something we
> want to count on; sometimes
bloat - either heap
tuples or index tuples.
Mike Mascari
;2005-01-11
17:53:16.720749-05'::timestamp with time zone) AND (customer_id = 100))
Total runtime: 11.039 ms
(5 rows)
[EMAIL PROTECTED] select version();
PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC
i686-pc-linux-gnu-gcc (GCC) 3.4.0 20040204 (prerelease)
(1 row)
Hope that helps,
Mike Mascari
e stats :(
> 3. Continue to tweak the sql behind our app.
Short of an Opteron-based system, this is by far your best bet.
--
Mike Rylander
fficult to back up in court without precedence. Is this just a
> deterrent, or is it real?
>
> -- Mitch
--
Mike Benoit <[EMAIL PROTECTED]>
ind of annoys me a bit :)
>
> Do any of you have some good ideas on how to speed up such queries on
> huge tables?
>
> regards
> rune
--
Mike Rylander
know. Unless I'm mistaken, he (co)wrote tsearch2.
Other than shared buffers, I can't imagine what could be causing that
kind of slowness. EXPLAIN ANALYZE, please?
As an example of what I think you *should* be seeing, I have a similar
box (4 procs, but that doesn't matter for one q
at your settings are, how much memory you have and such?
16G of RAM on a dedicated machine.
shared_buffers = 15000 # min 16, at least max_connections*2, 8KB each
work_mem = 10240# min 64, size in KB
maintenance_work_mem = 100 # min 1024, size in KB
# big m_w_m for load
ey also only pay for used capacity from their disk
vendor.
This is not very green, as you need to buy more disks for the same amount
of data, and it's liable to upset your purchasing department, who won't
understand why you don't want to fill your disks up.
Mike
[presumably the empty-disk effect could also be achieved by partitioning, say
25% of the drive for the database, and 75% empty partition. But in fact, you
could use that "low performance 75%" for rarely-used or static data, such as
the output from pg_dump, that is written during non-peak times]
is the performance/memory penalty on switching users in the
same connection (connections are reused of course)?
- will it hurt the cache?
- are prepared statements kept per user or per connection?
- is the query planner global or somehow tied to users?
I'd be glad to hear any opinions/suggestions.
Many thanks to everyone who responded!
Mike
n CPU load?
You can use /usr/bin/top to obtain most of this information.
Mike
hould help.
Cheers,
Mike
case 4G
for shared buffers is good.
Actually, I take back my words on swap, too. 200M swapped is less
important when you have plenty of memory.
Regards,
Mike
Scott Marlowe wrote:
The postgres shared cache is at 4G, is that too big?
Not for a machine with 32Gig of ram.
He could even add some more.
Mike
production servers and many batch
jobs I run put all 8 cores at 90% for extended periods. Since that
machine is normally doing a lot of smaller cached queries, it hardly
even notices.
The OP's machine is doing a lot of write ops, which is different.
Yes, more hard drives / better cac
Scott Marlowe wrote:
Also think about it, the OP has 8G of swap and 30Gig cached. How /
why would you be caching 30Gigs worth of data when there's only 8G to
cache anyway?
You're right, I have misread it again :-)
Cheers,
Mike
for clearing that out.
It's normal, and shouldn't worry anybody. In fact it's a good sign
that you're not using way too much memory for any one process
It says exactly the opposite.
This time I agree :-)
Cheers,
Mike
_ratio = dirty_background_ratio / 2 if that's not the case. Also,
how could dirty_ratio be less than 5 if 5 is the minimal value?
Regards,
Mike
kpoint fsync to finish.
Thanks a lot, this is invaluable information.
Regards,
Mike
Hi Rui,
I have this query (I think it is a simple one)
Could you EXPLAIN ANALYZE the query and show the results please?
Thanks,
Mike
e this
type of explosive row multiplication.
Although I don't quite understand the purpose of the query, I don't
think you need all those OUTER joins.
Regards,
Mike
re is no alternative, as Scott said.
Mike
hing is really badly broken.
I agree with you that PostgreSQL is doing different level of caching,
I just wonder if there is any way to speed up PostgreSQL in this
scenario,
This is what EXPLAIN ANALYZE is for. Could you post the results please?
Cheers,
Mike
using straight
joins instead. Also, try to filter things before joining, not after.
Correct me if I'm wrong, but in this particular case this seems pretty
much possible.
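[A sketch of the filter-first idea, with hypothetical names: pushing the
predicate into a derived table keeps the filtered rows from ever reaching
the join:

    SELECT h.*
    FROM  (SELECT id FROM accounts WHERE kind = 'x') a   -- filter first
    JOIN   history h ON h.account_id = a.id;             -- then join
]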
Cheers,
Mike
ning wrote:
Hi Mike,
Thank you for your explanation.
The "explain analyze" command used is as
Hi Thomas,
How is 'messungen_v_dat_2009_04_13_gtyp_minute_tag_idx' defined? What is
the row count for the table?
Mike
Hi,
subject is the following type of query needed in a function to select
data:
SELECT ' 13.04.2009 12:
such a layout :-)
However, expect very slow (hours) pg_dump, \dt and everything else that
requires reading schema information for the whole db.
Mike
schemas. It's not a conventional web
app we run :-)
I won't argue, this is a bit of an extreme approach, but if one is forced to
go down this path, it's quite walkable ;-)
Mike
build indexes on a regular basis,
even if you move that field into a separate table.
--
Mike Nolan
obscenely long time to complete? The 175s (and even 216s) for the
select seems reasonable given the size of the tables, but not 3000s to
update the same rows. The processor (AMD 1.3GHz) is at 90%+ utilization for
most of the execution time.
I can post more information if it would be helpful, but this post is
long enough already.
TIA, and happy new year.
-mike
--
Mike Glover
Key ID BFD19F2C <[EMAIL PROTECTED]>
Tom-
Thanks for the quick response. More details are inline.
-mike
On Thu, 01 Jan 2004 23:06:11 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Mike Glover <[EMAIL PROTECTED]> writes:
> AFAICS these plans are identical, and therefore the difference in
> runtime must be
ory available, you're swapping intermediate sort pages to disk --
a lot. Try the query with sort_mem set to 75MB (to do the entire sort in
memory).
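[For a single session that would be something like:

    SET sort_mem = 76800;   -- 75MB expressed in KB; sort_mem is the pre-8.0 name for work_mem
    -- then re-run the query under EXPLAIN ANALYZE
]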
-mike
> Cheers,
>
> Bradley.
when you've done your homework".
Can they call you at the unemployment office?
--
Mike Nolan
ing (barely) acceptable performance from a single
15k U160 SCSI disk, but db size and activity are growing quickly.
I've got more disks and a battery-backed LSI card on order.
-mike
--
Mike Glover
GPG Key ID BFD19F2C <[EMAIL PROTECTED]>
> Frankly, the only reason to use anything other than TEXT is compatibility with
> other databases and applications.
You don't consider a requirement that a field be no longer than a
certain length a reason not to use TEXT?
--
Mike Nolan
rk as specified, I don't think
the standard cares much about what's happening behind the curtain.
--
Mike Nolan
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
> Actually, I don't. Good reason to have a check constraint on it though
> (hint, check constraints can be changed while column types cannot be, at
> this moment).
Is there a way to copy a table INCLUDING the check constraints? If not,
then that information is lost, unlike varch
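[In releases newer than this thread there is; CREATE TABLE ... LIKE copies
the column definitions and, with the right option, the CHECK constraints:

    CREATE TABLE new_table (LIKE old_table INCLUDING CONSTRAINTS);
]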