The row estimate is way off. Is autovacuum disabled?
how to
interpret it in this case.
3) Any other advice, other than the things I listed above (I am aware of
using COPY, ext3 tuning, multiple disks, tuning postgresql.conf
settings)?
Thanks in advance,
Jeremy Haile
#vmstat 2 20
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> one other note, you probably don't want to use all the disks in a raid10
> array, you probably want to split a pair of them off into a separate
> raid1 array and put your WAL on it.
Is a RAID 1 array of two disks sufficient for WAL? What's a typical
setup for a high performance PostgreSQL installation?
easiest to read and wish
it performed well.
Jeremy Haile
NOT NULL) AND (nlogid > $1))
Total runtime: 156589.605 ms
On Tue, 19 Dec 2006 16:31:41 +0000, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > I have the following query which performs extremely slow:
> > select min(nlogid) as start_nlogid,
> >
I'm not the best at
interpreting the explains. Why is this explain so much simpler than the
other query plan (with the subquery)?
On Tue, 19 Dec 2006 18:23:06 +0000, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here is the explain analyze output:
>
> Well, the row estimate
Makes sense. It is NOT executing the subquery more than once, is it?
On Tue, 19 Dec 2006 20:02:35 +0000, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here's the query and explain analyze using the result of the sub-query
> > substituted:
> >
> > Q
different plan?
On Tue, 19 Dec 2006 20:02:35 +0000, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > Here's the query and explain analyze using the result of the sub-query
> > substituted:
> >
> > QUERY
> > explain analyze select min(nlogid) as
I created a 10GB partition for pg_xlog and ran out of disk space today
during a long running update. My checkpoint_segments is set to 12, but
there are 622 files in pg_xlog. What size should the pg_xlog partition
be?
Postmaster is currently not starting up (critical for my organization)
and re
It seems like it can vary considerably depending on
how intensive your current transactions are. Is there a way to
determine a maximum?
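For reference, the 8.x documentation gives a steady-state ceiling for pg_xlog;
worked through with this thread's numbers (WAL segments are 16MB each):

   normal maximum  = (2 * checkpoint_segments + 1) * 16MB
                   = (2 * 12 + 1) * 16MB = 400MB
   observed        = 622 files * 16MB = ~9.7GB  (the whole 10GB partition)

Growth far past that ceiling is consistent with WAL being generated faster
than checkpoints could complete and recycle segments during the long update.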
On Fri, 22 Dec 2006 11:06:46 -0500, "Jeremy Haile" <[EMAIL PROTECTED]>
said:
> I created a 10GB partition for pg_xlog and ran out of disk space
3000ED
12/18/2006 08:52 PM archive_status
28 File(s)  469,762,048 bytes
3 Dir(s) 10,206,756,864 bytes free
On Fri, 22 Dec 2006 17:02:43 +0000, "Simon Riggs"
<[EMAIL PROTECTED]> said:
> On Fri, 2006-12-22 at 11:52 -0500, Jeremy Haile wrote:
(roughly 80% of the rows). The transaction ran for a long
time and is, I assume, what caused the pg_xlog to fill up.
On Fri, 22 Dec 2006 17:36:39 +0000, "Simon Riggs"
<[EMAIL PROTECTED]> said:
> On Fri, 2006-12-22 at 12:30 -0500, Jeremy Haile wrote:
> > The archive_status
> > Once you free some space on the data partition and restart, you should
> > be good to go --- there will be no loss of committed transactions, since
> > all the operations are in pg_xlog. Might take a little while to replay
> > all that log though :-(
>
> Amazing that all works. What I did no
More specifically, you should set the noatime,data=writeback options in
fstab on ext3 partitions for best performance. Correct?
> it doesn't really belong here, but ext3 has three data journaling modes:
> journal (data and metadata journaled)
> ordered (metadata journaled, data written before metadata; the default)
> writeback (metadata journaled, no ordering enforced between data and metadata)
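A minimal /etc/fstab sketch of that (device and mount point are placeholders):

    /dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0  2

data=writeback is usually argued to be safe for PostgreSQL because the WAL
provides its own ordering guarantees, but ordered is the conservative choice.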
I'm curious what parameters you guys typically *always* adjust on new
PostgreSQL installs.
I am working with a database that contains several large tables (10-20
million rows) and many smaller tables (hundreds of rows). My system has 2 GB
of RAM currently, although I will be upping it to 4GB soon.
:19 +0000, "Richard Huxton"
said:
> Jeremy Haile wrote:
> > I'm curious what parameters you guys typically *always* adjust on new
> > PostgreSQL installs.
>
> > The parameters that I almost always change when installing a new system
> > is shared_buffers
a variety of factors
- but I'd love some more advice on what good rule-of-thumb starting
points are for experimentation and how to evaluate whether the values
are set correctly. (in the case of temp_buffers and work_mem especially)
On Tue, 02 Jan 2007 18:49:54 +0000, "Richard Huxton"
said:
> So, on a 4 Gig machine you could divide 1G (25%) by the total possible
> connections, then again by the average number of sorts you'd expect per
> query / connection to get an idea.
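Worked through with illustrative numbers (the connection and sort counts below
are assumptions, not figures from this thread):

    1GB / 100 max_connections / 2 sorts per query = ~5MB per sort

    -- i.e. in postgresql.conf:  work_mem = 5MB
    -- or raised per session for a known-heavy query:
    SET work_mem = '64MB';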
Thanks for the advice. I'll experiment with higher work_mem settings,
as I am regularly doing sorts on large data sets.
I am sure that this has been discussed before, but I can't seem to find
any recent posts. (I am running PostgreSQL 8.2)
I have always run PostgreSQL on Linux in the past, but the company I am
currently working for uses Windows on all of their servers. I don't
have the luxury right now of running
Thanks for the recommendations. I wasn't familiar with those packages!
On Thu, 4 Jan 2007 00:46:32 +0100, "Dimitri Fontaine" <[EMAIL PROTECTED]>
said:
> On Thursday 4 January 2007 00:18, Magnus Hagander wrote:
> > But to get a good answer on if the difference is
> > significant enough to matter,
s of connections is
not a huge issue. However - it does have very large tables and
regularly queries and inserts into these tables. I insert several
million rows into 3 tables every day - and also delete about the same
amount.
On Thu, 04 Jan 2007 00:18:23 +0100, "Magnus Hagander"
I'm using 8.2. I don't know when I'll get a chance to run my own
benchmarks. (I don't currently have access to a Windows and Linux
server with similar hardware/configuration) But when/if I get a chance
to run them, I will post the results here.
Thanks for the feedback.
Jeremy
I am developing an application that has very predictable database
operations:
-inserts several thousand rows into 3 tables every 5 minutes. (tables
contain around 10 million rows each)
-truncates and rebuilds aggregate tables of this data every 5 minutes.
(several thousand rows each)
-regu
back on?
On Tue, 09 Jan 2007 19:02:25 +0100, "Florian Weimer" <[EMAIL PROTECTED]>
said:
> * Jeremy Haile:
>
> > I'd like any performance advice, but my main concern is the amount of
> > time vacuum/analyze runs and its possible impact on the overall DB
I have a query made by joining two subqueries where the outer query
performing the join takes significantly longer to run than the two
subqueries.
The first subquery runs in 600ms. The second subquery runs in 700ms.
But the outer query takes 240 seconds to run! Both of the two
subqueries onl
'2007-01-09 09:30:00'::timestamp without time zone))
Total runtime: 675.638 ms
On Wed, 10 Jan 2007 12:15:44 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EM
based on an unrelated join condition.
If I ever get it to happen again, I'll be more careful and repost if it
is a real issue. Thanks for pointing me in the right direction!
On Wed, 10 Jan 2007 13:38:15 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile&quo
like that. And of
course, if PostgreSQL doesn't cache query plans - this idea is bogus =)
On Wed, 10 Jan 2007 13:38:24 -0500, "Jeremy Haile" <[EMAIL PROTECTED]>
said:
> I'm pretty sure it didn't analyze in between - autovac is turned off
> and I ran the test
than once) Anyways - I'll let you know if something
similar happens again.
Thanks,
Jeremy Haile
On Wed, 10 Jan 2007 14:22:35 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > Another random idea - does Pos
I really wish that PostgreSQL supported a "nice" partitioning syntax
like MySQL has.
Here is an example:
CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE)
PARTITION BY RANGE( YEAR(purchased) ) (
PARTITION p0 VALUES LESS THAN (1990),
PARTITION p1 VALUES LESS THAN (1995),
PARTITIO
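(The MySQL example is cut off in the archive.) For contrast, a minimal sketch
of the 8.x-era PostgreSQL equivalent, using inheritance plus CHECK constraints
(partition names mirror the MySQL example):

CREATE TABLE tr (id INT, name VARCHAR(50), purchased DATE);

-- one child table per range; the CHECK constraint defines the partition
CREATE TABLE tr_p0 (CHECK (purchased < DATE '1990-01-01')) INHERITS (tr);
CREATE TABLE tr_p1 (CHECK (purchased >= DATE '1990-01-01'
                       AND purchased < DATE '1995-01-01')) INHERITS (tr);

-- INSERTs must be routed by hand with a trigger or rule, and the planner
-- only skips non-matching partitions when constraint exclusion is enabled:
SET constraint_exclusion = on;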
Hey Jim -
Thanks for the feedback. The server has dual Xeons with HyperThreading
enabled - so perhaps I should try disabling it. How much performance
boost have you seen by disabling it? Of course, the bottleneck in my
case is more on the I/O or RAM side, not the CPU side.
Jeremy Haile
On
friendly syntax in the future, similar
to MySQL's partitioning support. Having first-class citizen support of
partitions would also allow some nice administrative GUIs and views to
be built for managing them.
Jeremy Haile
On Wed, 10 Jan 2007 15:09:31 -0600, "Jim C. Nasby" <[EMAIL PROTEC
do that usually to lower the scale factors? Is it
ever a good approach to lower the scale factor to zero and just set the
thresholds to a pure number of rows? (when setting it for a specific
table)
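In current PostgreSQL releases the per-table form looks like the sketch below
(in the 8.2 era of this thread it was done via the pg_autovacuum catalog
instead); the table name is hypothetical:

ALTER TABLE big_fact_table SET (
    autovacuum_analyze_scale_factor = 0.0,  -- ignore table size entirely
    autovacuum_analyze_threshold    = 5000  -- analyze after ~5000 row changes
);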
Thanks,
Jeremy Haile
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
On Thu, 11 Jan 2007 13:59:20 +01
Running PostgreSQL 8.2.1 on Win32. The query planner is choosing a seq
scan over index scan even though index scan is faster (as shown by
disabling seqscan). Table is recently analyzed and row count estimates
seem to be in the ballpark.
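For the record, a sketch of how such a comparison is typically made (the query
here is hypothetical; enable_seqscan is a testing knob, not a production fix):

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col = 42;
RESET enable_seqscan;

-- if the index plan is genuinely faster, lowering random_page_cost
-- (default 4.0) can let the planner choose it unaided:
SET random_page_cost = 2.0;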
Another tidbit - I haven't done a "vacuum full" ever, although
worth trying to defragment the drive on a regular basis
in Windows?
Jeremy Haile
On Tue, 16 Jan 2007 16:39:07 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > Running PostgreSQL 8.2.1 on Win32. The query planner i
"comment";0.0052;29;7885;0.0219167
"archived";0;1;2;0.84623
"response_code";0.9942;4;3;0.905409
"transaction_source";0;4;2;0.983851
"location_dim_id";0;4;86;0.985384
"success";0;4;2;0.981072
Just curious - what does that tell us?
Jeremy Haile
> I still keep wondering if this table is bloated with dead tuples. Even
> if you vacuum often, if there's a connection with an idle transaction,
> the tuples can't be reclaimed and the table would continue to grow.
I used to vacuum once an hour, although I've switched it to autovacuum
now. It de
much better with a partitioned table setup.
Also, since I usually delete old data one day at a time, I could simply
drop the old day's partition. This would make vacuuming much less of an
issue.
But I won't be making any changes immediately, so I'll continue to run
tests give
> How much memory does the box have?
2GB
> Yes, it takes up space
Well, I upped max_fsm_pages to 200 because vacuums were failing
with it set to 150. However, I'm now autovacuuming, which might be
keeping my fsm lower. I didn't realize that setting it too high had
negative effects, so
> That's about 32% dead rows. Might be worth scheduling a vacuum full,
> but it's not like I was afraid it might be. It looks to me like you
> could probably use a faster I/O subsystem in that machine though.
I'll try to schedule a full vacuum tonight. As far as I/O - it's using
SAN over fiber.
> It would be nice if the database could
> learn to estimate these values, as newer versions of Oracle do.
That would be really nice since it would take some of the guess work out
of it.
> Yes, cluster would rebuild the table for you. I wouldn't do anything too
> intrusive, run with the random
I'm going to reindex the table tonight.
Jeremy Haile
Interesting - I haven't seen that tool before. I'll have to check it
out when I get a chance. Thanks!
On Wed, 17 Jan 2007 20:32:37 +0100, "Tomas Vondra" <[EMAIL PROTECTED]> said:
> > That's about 32% dead rows. Might be worth scheduling a vacuum full,
> > but it's not like I was afraid it migh
> I once had a query which would operate on a recordlist and
> see whether there were any gaps larger than 1 between consecutive
> primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
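Not the original poster's query, but one common shape for it, assuming a
table t with an integer primary key id:

-- each id whose successor row is missing starts a gap
SELECT t.id + 1 AS gap_start
FROM t
LEFT JOIN t AS nxt ON nxt.id = t.id + 1
WHERE nxt.id IS NULL
  AND t.id < (SELECT max(id) FROM t);

-- on 8.4+, lead(id) OVER (ORDER BY id) expresses the same idea more directly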
tables have been updated with tens of thousands of inserts
and the table has still not been analyzed (according to
pg_stat_user_tables). Does a scale factor of 0 cause the table to never
be analyzed? What am I doing wrong?
I'm using PG 8.2.1.
Thanks,
Jeremy
> Unless it's just a bug, my only guess is that autovacuum may be getting
> busy at times (vacuuming large tables for example) and hasn't had a
> chance to even look at that table for a while, and by the time it gets
> to it, there have been tens of thousands of inserts. Does that sound
> pla
(I wasn't watching it *all* the time.)
So - as far as I could tell it wasn't running.
On Thu, 18 Jan 2007 16:30:17 -0500, "Tom Lane" <[EMAIL PROTECTED]> said:
> "Jeremy Haile" <[EMAIL PROTECTED]> writes:
> > No tables have been vacuumed or analyzed t
Tilmann Singer wrote:
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
  (SELECT * FROM large_table lt
   WHERE lt.user_id = 12345
   ORDER BY created_at DESC LIMIT 10)
  UNION
  (SELECT ...)   -- second branch truncated in the archive
) AS q1          -- note: the alias belongs on the outer derived table,
                 -- not on the individual UNION branches
statistics?
Thanks,
Jeremy
this inlining does seem to be
happening...
At this stage I have a work around by putting the query into a plpgsql function
and using dynamic SQL. But it is still frustrating that the planner seems to be
working in a far from optimal fashion.
This is a question for the PostGIS guys and a quick test could tell me
anyway! My memory is that the GIST r-tree index is slow for points at the
moment, and that a good implementation of a kd-tree index over GIST is required
for better speed.
Regards,
Jeremy Palmer
Geodetic Surveyor
National Geo
distribution of response times, rather than "cached" vs.
not?
That (a) avoids the issue of discovering what was a cache hit, (b) deals
neatly with multilevel caching, and (c) feeds directly into cost estimation.
Cheers,
Jeremy
doesn't routinely account for all that info
per-process. I'd expect IBM to have equivalent
facilities.
--
Jeremy
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
One thought I have is that maybe the
idx_crs_coordinate_revision_expired_created index could be used instead of
idx_crs_coordinate_revision_expired.
Does anyone have
5.872..15.872 rows=43258 loops=1)
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
Total runtime: 14359.747 ms
http://explain.depesz.com/s/qpL says that the bitmap heap scan is bad. Not sure
what to do about it.
Thanks,
Jeremy
-Original Message-
Here's the new plan with work_mem = 50MB:
http://explain.depesz.com/s/xwv
And here is another plan with work_mem = 500MB:
http://explain.depesz.com/s/VmO
Thanks,
Jeremy
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Monday, 17 January 2011 5:57 p.m.
To: Jeremy
It fits a Data Warehousing type application.
Apart from work_mem, my other parameters are pretty close to these numbers. I
had the work_mem down a little because I noticed some clients were getting out
of memory errors with large queries which involved lots of sorting.
Thanks
Jeremy
Thanks, that seems to make the query 10-15% faster :)
Cheers
jeremy
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, 18 January 2011 9:24 a.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan
Index Cond: ((_revision_created > 16) AND
(_revision_created <= 40))
Total runtime: 985.671 ms
Thanks heaps,
Jeremy
Thanks heaps for the advice. I will do some benchmarks to see how long it takes
to cluster all of the database tables.
Cheers,
Jeremy
-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Tuesday, 25 January 2011 1:02 p.m.
To: Jeremy Palmer; Tom Lane
Cc
loaded size has become "large".
--
Jeremy
ing estimate of such things.
--
Jeremy
In normal circumstances, does locking a table in ACCESS EXCLUSIVE mode improve
insert, update and delete performance on that table?
Is MVCC disabled, or does it somehow have less work to do?
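For reference, a sketch of the pattern being asked about (table name
hypothetical):

BEGIN;
-- blocks all other access to the table until COMMIT
LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE;
UPDATE accounts SET balance = 0 WHERE frozen;
COMMIT;

The lock only serializes access; each changed row still gets a new MVCC
version and is WAL-logged as usual.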
Cheers
Jeremy
We are a small company looking to put together the most cost effective
solution for our production database environment. Currently in
production Postgres 8.1 is running on this machine:
Dell 2850
2 x 3.0 Ghz Xeon 800Mhz FSB 2MB Cache
4 GB DDR2 400 Mhz
2 x 73 GB 10K SCSI RAID 1 (for xlog and OS)
4
wise would be
appreciated!
Thanks for all of the responses!
On Wed, 15 Feb 2006 14:53:28 -0500, "Ron" <[EMAIL PROTECTED]> said:
> At 11:21 AM 2/15/2006, Jeremy Haile wrote:
> >We are a small company looking to put together the most cost effective
> >solution for our pr
I am running a query that joins against several large tables (~5 million
rows each). The query takes an extremely long time to run, and the
explain output is a bit beyond my level of understanding. It is an
auto-generated query, so the aliases are fairly ugly. I can clean them
up (rename them)
Ingres R3's built-in clustering
support with a SAN, but am interested to know other people's experiences
before we start toying with this possibility. Any experience with the
Ingres support from Computer Associates? Good/bad?
Jeremy
high value for cpu_tuple_cost (e.g.
0.5) but this doesn't seem like a wise thing to do.
Your thoughts
appreciated in advance!
-
Jeremy
7+ years
experience in Oracle performance-tuning
relatively new to postgresql
Sorry, I should have written that we do VACUUM VERBOSE ANALYZE every
night.
- Jeremy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Monday, April 12, 2004 12:09 PM
To: [EMAIL PROTECTED]
Cc: Postgresql Performance
Subject: Re: [PERFORM
> "Jeremy Dunn" <[EMAIL PROTECTED]> writes:
> > The question: why does the planner consider a sequential scan to be
> > better for these top 10 values?
>
> At some point a seqscan *will* be better. In the limit, if
> the key being sought is common enoug
ease the two params mentioned yesterday
(effective_cache_size & random_page_cost).
Thanks again for the help!
- Jeremy
t a limit.
This is good to know, if it's true. Can anyone confirm?
- Jeremy
', value = pk of your original table. Then index the
keyword table on the 'keyword' field, and do your searches from there.
This should improve performance substantially, even on very large result sets,
because the keyword table rows are very small and thus a lot of them fit
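A sketch of that layout (all names hypothetical):

CREATE TABLE doc_keywords (
    keyword text    NOT NULL,
    doc_id  integer NOT NULL REFERENCES documents (id)
);
CREATE INDEX doc_keywords_keyword_idx ON doc_keywords (keyword);

-- searches scan the narrow keyword table, then join back for the full rows
SELECT d.*
FROM documents d
JOIN doc_keywords k ON k.doc_id = d.id
WHERE k.keyword = 'postgresql';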
pace in each block allocated to that table makes sense.
Conversely, if you know that your data is never going to get updated
(e.g. a data warehousing application), you might specify to pack the
blocks as full as possible. This makes for the most efficient data
retrieval performance.
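In PostgreSQL that packing knob is the table's fillfactor; a minimal sketch
(table name hypothetical):

-- 100 packs pages completely, appropriate for insert-only warehouse tables;
-- heavily-updated tables benefit from headroom instead, e.g. fillfactor = 70
ALTER TABLE warehouse_facts SET (fillfactor = 100);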
-
t? Since it appears that users of your site must
login, why not just assign a sessionID to them at login time, and keep
it in the URL for the duration of the session? Then it would be easy to
track where they've been.
- Jeremy
Paul Lambert wrote:
"-> Merge Join
(cost=35.56..19732.91 rows=12 width=48) (actual time=0.841..2309.828
rows=206748 loops=1)"
I'm no expert, but in the interests of learning: why is the
rows estimate so far out for this join?
Bill Moran wrote:
This is a FAQ, it comes up on an almost weekly basis.
I don't think so. "where".
- select count(*) from gene_prediction_view where gene_ref = 523
Cheers,
Jeremy
it though.
A thread maintaining a pool of assigned and cleared pg_clog pages, ahead
of the immediate need? Possibly another job for an existing daemon
thread.
- Jeremy
your operations (and then run analyze)?
Cheers,
Jeremy
be 50 rows returned then the estimates from the
planner are way out.
If that doesn't help, we'll need version info, and (if you can afford
the time) an "explain analyze"
Cheers,
Jeremy
Scott Carey wrote:
Well, what does a revolution like this require of Postgres? That is the
question.
[...]
#1 Per-Tablespace optimizer tuning parameters.
... automatically measured?
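For what it's worth, PostgreSQL later grew exactly this: per-tablespace
planner costs (9.0+). A sketch, with a hypothetical tablespace name:

ALTER TABLESPACE fast_ssd SET (
    seq_page_cost    = 1.0,
    random_page_cost = 1.1   -- near-sequential random cost on low-seek storage
);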
Cheers,
Jeremy
I am confused about what the OS is reporting for memory usage on CentOS 5.3
Linux. Looking at the resident size of all postgres processes, the system
should be using around 30GB of physical RAM. I know that it states that it is
using a lot of
But the kernel can take back any of the cache memory if it wants to. Therefore
it is free memory.
This still does not explain why the top command is reporting ~9GB of resident
memory, yet the top command does not suggest that any physical memory is being
used.
On 8/14/09 2:43 PM, "Reid Thomps
[mailto:sc...@richrelevance.com]
Sent: Friday, August 14, 2009 3:38 PM
To: Jeremy Carroll; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Memory reporting on CentOS Linux
On 8/14/09 11:00 AM, "Jeremy Carroll"
wrote:
> I am confused about what the OS is reporting for memory usag
olumn "-/+ buffers/cache:". That shows 46Gb Free RAM.
I cannot be the only person that has asked this question.
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Saturday, August 15, 2009 10:25 AM
To: Jeremy Carroll
Cc: Scott Carey; pgsql-performance@postgresql.
I believe this is exactly what is happening. I see that the TOP output lists a
large amount of VIRT & RES size being used, but the kernel does not report this
memory as being reserved and instead lists it as free memory or cached.
If this is indeed the case, how does one determine if a PostgreSQ
I'm attempting to implement full-text search and am torn between two techniques:
1) Create multiple GIN indexes on columns I'm going to search against and UNION
the results
or
2) Create one concatenated column GIN index consisting of the columns that will
be searched.
Is there any performance c
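The question is cut off in the archive; for reference, a sketch of the two
options (table, column, and index names hypothetical):

-- Option 1: one GIN index per column, UNION at query time
CREATE INDEX docs_title_fts ON docs USING gin (to_tsvector('english', title));
CREATE INDEX docs_body_fts  ON docs USING gin (to_tsvector('english', body));

SELECT id FROM docs
 WHERE to_tsvector('english', title) @@ plainto_tsquery('english', 'foo')
UNION
SELECT id FROM docs
 WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'foo');

-- Option 2: one GIN index over the concatenation
CREATE INDEX docs_all_fts ON docs USING gin
    (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, '')));

-- the WHERE expression must match the index expression exactly to be indexable
SELECT id FROM docs
 WHERE to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
       @@ plainto_tsquery('english', 'foo');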
are, in the SQL/Codd worldview?
Or, is there more to it?
I appreciate the "Simple Matter Of Programming" problem.
Thanks,
Jeremy
on queries to guide the automatic
creation of indices? Or to set up a partitioning scheme on a previously
monolithic table?
- Jeremy
On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:
All,
I'm trying to understand the free memory usage and why it falls below
17G sometimes and what could be causing it. Any pointers would be
appreciated.
r...@prod1 # prtconf
System Configuration: Sun Microsystems sun4u
Memory size: 32768 M
On 12/02/2009 11:31 PM, Ashish Kumar Singh wrote:
While importing this dump into the database I have noticed that initially
query response time is very slow, but it improves with time.
Any suggestions to improve performance after the dump is imported into the
database will be highly appreciated!
Ana
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
- Jeremy
2 rows=1548520 width=338)
(actual time=0.022..2195.527 rows=1551923 loops=1)
Filter: (event_timestamp > (now() - '1 year'::interval))
Total runtime: 6407.377 ms
Needing to use an external (on-disk) sort method, when taking
only 90MB, looks odd.
- Jeremy
On 01/11/2010 02:53 AM, Robert Haas wrote:
On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris wrote:
Needing to use an external (on-disk) sort method, when taking
only 90MB, looks odd.
[...]
Well, you'd need to have work_mem> 90 MB for that not to happen, and
very few people can affor
n you have below, you have 3 gigs worth of indexes. Do you
have that much data (in terms of rows)?
-Original Message-
From: Reece Hart [mailto:[EMAIL PROTECTED]
Sent: Wed 7/23/2003 1:07 PM
To: Guthrie, Jeremy
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; SF PostgreSQL
Subject:
d = 100)
I'm running PostgreSQL 9.0.2 on Ubuntu 10.04
Cheers
Jeremy
still in the development and tuning
process, so I will do some analysis of the index stats to see if they are
indeed redundant.
Cheers,
Jeremy
From: Bob Lunney [bob_lun...@yahoo.com]
Sent: Friday, 1 April 2011 3:54 a.m.
To: pgsql-performance@postgresql.org;
t autoanalyze hasn't been taught to
gather those. The manual command on the parent table does gather them,
though.
Is stats-gathering significantly more expensive than an FTS? Could an FTS
update stats as a matter of course (or perhaps only if enough changes in the
table)?
--
Jeremy
realise you've had helpful answers by now, but that reads
as 16 hours of cpu time to me; mostly user-mode but with 6 minutes
of system-mode. 98% cpu usage for the 16 hours elapsed.
--
Jeremy