Re: [PERFORM] Serious issues with CPU usage
Hope you don't find it too distracting; I decided to answer the emails in one go.

On Saturday 06 September 2003 03:05, Tom Lane wrote:
> indexscans.  If you've also got sequential-scan queries, and you're
> doing many zillion updates between vacuums, the answer is to vacuum
> more often.  A decent rule of thumb is to vacuum whenever you've updated
> more than about 10% of the rows in a table since your last vacuum.

Basically I do this:
1) select the ~700 IDs I have to poll
2) poll them
3) update those 700 rows in that table (~2700 rows total)

And I do this cycle once per minute, so yes, I've got a zillion updates. 700 of 2700 is roughly 25%, so I'd have to vacuum once per minute? The manual actually suggests vacuuming after big changes, but I didn't think it was that bad.

On Saturday 06 September 2003 12:10, Hannu Krosing wrote:
> Could it be that FSM is too small for your vacuum interval ?
>
> Also, you could try running REINDEX (instead of or in addition to plain
> VACUUM) and see if this is an index issue.

VACUUM ANALYZE helped to lessen the load. Not as much as VACUUM FULL, but it still brought the load down to a reasonable level.

> 1. What types of queries do you run, and how often ?

First, the cycle posted above; second, every 5 minutes ~40 SELECTs that include that table. I left the once-per-minute poller offline this weekend, and the CPU usage didn't creep up.

> 2. How is your database tuned (postgresql.conf settings) ?

shared_buffers = 13000
max_fsm_relations = 10
max_fsm_pages = 100
max_locks_per_transaction = 256
wal_buffers = 64
sort_mem = 32768
vacuum_mem = 16384
fsync = false
effective_cache_size = 6

Using these settings I was able to bring CPU usage down to a more reasonable level: http://andri.estpak.ee/cpu1.png

This is much better than the first graph (see http://andri.estpak.ee/cpu0.png ), but you can still see CPU usage creeping up. VACUUM FULL was done at 03:00 and 09:00. The small drop at ~12:45 is thanks to VACUUM ANALYZE.
If this is the best you can get with postgres right now, then I'll just have to increase the frequency of VACUUMing, but that feels like a hackish solution :(

> 3. How much memory does your machine have ?

1 gigabyte.

-- 
andri

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message
can get through to the mailing list cleanly
Re: [PERFORM] Serious issues with CPU usage
On 8 Sep 2003 at 13:50, Andri Saar wrote:
> If this is the best you can get with postgres right now, then I'll just have
> to increase the frequency of VACUUMing, but that feels like a hackish
> solution :(

Use the autovacuum daemon. There is one in the postgresql contrib module. It was introduced during 7.4 development, and it works with 7.3.x as well. The current 7.4 CVS head has some problems with the stats collector, but it should be fine soon. Check it out..

Bye
Shridhar

--
Punishment becomes ineffective after a certain point.  Men become insensitive.
		-- Eneg, "Patterns of Force", stardate 2534.7
Re: [PERFORM] Serious issues with CPU usage
On Mon, 8 Sep 2003 13:50:23 +0300, Andri Saar <[EMAIL PROTECTED]> wrote:
>Basically I do this:
>1) select about ~700 ID's I have to poll
>2) poll them
>3) update those 700 rows in that "table" I used (~2700 rows total).
>
>And I do this cycle once per minute, so yes, I've got a zillion updates. 700
>of 2700 is roughly 25%, so I'd have to vacuum once per minute?

With such a small table VACUUM should be a matter of less than one second:

fred=# vacuum verbose t;
INFO:  --Relation public.t--
INFO:  Index t_pkey: Pages 65; Tuples 16384: Deleted 4096.
        CPU 0.01s/0.10u sec elapsed 0.21 sec.
INFO:  Removed 4096 tuples in 154 pages.
        CPU 0.04s/0.02u sec elapsed 0.07 sec.
INFO:  Pages 192: Changed 192, Empty 0; Tup 16384: Vac 4096, Keep 0, UnUsed 0.
        Total CPU 0.08s/0.16u sec elapsed 0.36 sec.
VACUUM
Time: 415.00 ms

And this is on a 400 MHz machine under cygwin, so don't worry if you have a real computer.

Servus
Manfred
Re: [PERFORM] Serious issues with CPU usage
Andri Saar <[EMAIL PROTECTED]> writes:
> If this is the best you can get with postgres right now, then I'll just have
> to increase the frequency of VACUUMing, but that feels like a hackish
> solution :(

Not at all.  The overhead represented by VACUUM would have to be paid somewhere, somehow, in any database.  Postgres allows you to control exactly when it gets paid.

It looks to me like throwing a plain VACUUM into your poller cycle (or possibly VACUUM ANALYZE, depending on how fast the table's stats change) would solve your problems nicely.

Note that once you have that policy in place, you will want to do one VACUUM FULL, and possibly a REINDEX, to get the table's physical size back down to something commensurate with 2700 useful rows.  I shudder to think of where it had gotten to before.  Routine VACUUMing should hold it to a reasonable size after that.

			regards, tom lane
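A sketch of Tom's suggestion as it would look in the poster's once-per-minute cycle (the table name `poll_targets` is a placeholder, not from the original post):

```sql
-- One-time cleanup, to shrink the already-bloated table and its indexes:
VACUUM FULL poll_targets;
REINDEX TABLE poll_targets;

-- Then, appended to the end of every poller cycle,
-- right after the ~700-row UPDATE:
VACUUM ANALYZE poll_targets;   -- reclaims that minute's dead row versions
```

With only ~2700 live rows the extra VACUUM should take well under a second, as Manfred's timing elsewhere in the thread shows.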
Re: [PERFORM] Serious issues with CPU usage
On Monday 08 September 2003 17:04, Tom Lane wrote:
> It looks to me like throwing a plain VACUUM into your poller cycle
> (or possibly VACUUM ANALYZE depending on how fast the table's stats
> change) would solve your problems nicely.

I compiled the pg_autovacuum daemon from the 7.4beta sources as Shridhar Daithankar recommended, and it seems to work fine. At first glance I thought VACUUM was a thing you do maybe once per week during routine administration tasks like making a full backup, but I was wrong.

Thanks to all for your help; we can consider this problem solved.

Note to future generations: the default postgres configuration settings are very conservative, and don't be afraid to VACUUM very often.

andri
Re: [PERFORM] Serious issues with CPU usage
On 8 Sep 2003 at 17:31, Andri Saar wrote:
> Note to future generations: default postgres configuration settings are very
> conservative and don't be afraid to VACUUM very often.

You should have seen the earlier postgresql defaults; by that standard they were arcane. 7.4 at least attempts to determine shared_buffers while doing initdb. That results in much better default performance.

Bye
Shridhar

--
Ritchie's Rule:
	(1) Everything has some value -- if you use the right currency.
	(2) Paint splashes last longer than the paint job.
	(3) Search and ye shall find -- but make sure it was lost.
Re: [PERFORM] slow plan for min/max
On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:

Asking why max(id) is so much slower than select id order by id desc limit 1, Pailloncy said:
> I ask for the same thing.
> That's better !

This is a frequently asked question about something that isn't likely to change any time soon.

Basically, Postgresql uses an MVCC locking system that makes massively parallel operation possible, but costs in certain areas, and one of those areas is aggregate performance over large sets. MVCC makes it very hard to optimize all but the simplest of aggregates, and even those optimizations which are possible would wind up being quite ugly at the parser level.

You might want to search the archives of the last couple of years for this subject, as it's come up quite often.
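The rewrite in question, for a hypothetical items table with a primary key on id:

```sql
-- The aggregate form must visit every visible row:
SELECT max(id) FROM items;

-- The equivalent rewrite walks the primary-key index backwards and
-- stops after the first visible row:
SELECT id FROM items ORDER BY id DESC LIMIT 1;
```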
Re: [PERFORM] slow plan for min/max
On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
> Basically, Postgresql uses an MVCC locking system that makes massively
> parallel operation possible, but costs in certain areas, and one of those
> areas is aggregate performance over large sets. MVCC makes it very hard
> to optimize all but the simplest of aggregates, and even those
> optimizations which are possible would wind up being quite ugly at the
> parser level.

As was pointed out in a thread a couple days ago, MIN()/MAX() optimization has absolutely nothing to do with MVCC. MVCC does, however, make optimizing COUNT() more difficult.

-Neil
Re: [PERFORM] slow plan for min/max
[EMAIL PROTECTED] ("scott.marlowe") writes:
> On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:
> Asking a question about why max(id) is so much slower than select id
> order by id desc limit 1, Pailloncy said:
>> I ask for the same thing. That's better !
>
> This is a Frequently asked question about something that isn't
> likely to change any time soon.
>
> Basically, Postgresql uses an MVCC locking system that makes
> massively parallel operation possible, but costs in certain areas,
> and one of those areas is aggregate performance over large sets.
> MVCC makes it very hard to optimize all but the simplest of
> aggregates, and even those optimizations which are possible would
> wind up being quite ugly at the parser level.

MVCC makes it difficult to optimize aggregates resembling COUNT(*) or SUM(*), at least vis-a-vis having this available for a whole table (e.g. you have to be doing 'SELECT COUNT(*), SUM(SOMEFIELD) FROM THIS_TABLE' with NO "WHERE" clause).

But there is nothing about MVCC that makes it particularly difficult to handle the transformation of:

  select max(field) from some_table
   where another_field < still_another_field;

(which isn't particularly efficient) into:

  select field from some_table
   where another_field < still_another_field
   order by field desc limit 1;

The problems observed are thus:

1. If the query asks for other data, it might be necessary to scan the table to get the other data, making the optimization irrelevant;

2. If there's a good index to key on, the transformed version might be a bunch quicker, but it is nontrivial to determine that a priori;

3. It would be a fairly hairy optimization to throw into the query optimizer, so people are reluctant to try to do so.

Note that MVCC has _nothing_ to do with any of those three problems.

The MVCC-related point is that there is reluctance to create some special case that will be troublesome to maintain instead of having some comprehensive handling of _all_ aggregates. It seems a better idea to "fix them all" rather than to kludge things up by fixing one after another.

-- 
let name="cbbrowne" and tld="cbbrowne.com" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/lisp.html
Signs of a Klingon Programmer - 10. "A TRUE Klingon Warrior does not
comment his code!"
Re: [PERFORM] slow plan for min/max
On Mon, 8 Sep 2003, Neil Conway wrote:
> As was pointed out in a thread a couple days ago, MIN/MAX() optimization
> has absolutely nothing to do with MVCC. It does, however, make
> optimizing COUNT() more difficult.

Not exactly. While max(id) is easily optimized by query replacement, more complex aggregates will still have performance issues that would not be present in a row-locking database. I.e. max((field1/field2)*field3) is still going to cost more to process, isn't it?
Re: [PERFORM] slow plan for min/max
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> On Mon, 8 Sep 2003, Neil Conway wrote:
>> As was pointed out in a thread a couple days ago, MIN/MAX() optimization
>> has absolutely nothing to do with MVCC. It does, however, make
>> optimizing COUNT() more difficult.

> Not exactly. While max(id) is easily optimized by query replacement,
> more complex aggregates will still have performance issues that would not
> be present in a row locking database. i.e. max((field1/field2)*field3) is
> still going to cost more to process, isn't it?

Er, what makes you think that would be cheap in any database?

Postgres would actually have an advantage given its support for expressional indexes (nee functional indexes). If we had an optimizer transform to convert MAX() into an index scan, I would expect it to be able to match up max((field1/field2)*field3) with an index on ((field1/field2)*field3).

			regards, tom lane
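A sketch of the matching Tom describes (expressional-index syntax; the field names come from the example upthread, the table and index names are made up):

```sql
-- An index on the expression itself:
CREATE INDEX some_table_expr_idx ON some_table (((field1/field2)*field3));

-- A MAX()-to-indexscan transform could then answer
--   SELECT max((field1/field2)*field3) FROM some_table;
-- with the equivalent backwards index scan:
SELECT (field1/field2)*field3 AS val
  FROM some_table
 ORDER BY (field1/field2)*field3 DESC
 LIMIT 1;
```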
Re: [PERFORM] slow plan for min/max
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Basically, Postgresql uses an MVCC locking system that makes massively

As discussed, uh, a few days ago, this particular problem is not caused by MVCC but by postgres having a general-purpose aggregate system and not having special code for handling min/max. Aggregates normally require access to every record they're operating on, not just the first or last in some particular order.

You'll note the LIMIT 1/DISTINCT ON work-around works fine with MVCC...

-- 
greg
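The two work-arounds Greg names, sketched against a hypothetical quotes table (names illustrative):

```sql
-- Global maximum via ORDER BY ... LIMIT 1 (can use an index on price):
SELECT price FROM quotes ORDER BY price DESC LIMIT 1;

-- Per-group maxima via DISTINCT ON: one row per ticker, keeping the
-- highest-priced row in each group.
SELECT DISTINCT ON (ticker) ticker, price, quoted_at
  FROM quotes
 ORDER BY ticker, price DESC;
```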
Re: [PERFORM] slow plan for min/max
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] ("scott.marlowe") belched out...:
> On Mon, 8 Sep 2003, Neil Conway wrote:
>> As was pointed out in a thread a couple days ago, MIN/MAX() optimization
>> has absolutely nothing to do with MVCC. It does, however, make
>> optimizing COUNT() more difficult.
>
> Not exactly. While max(id) is easily optimized by query replacement,
> more complex aggregates will still have performance issues that would not
> be present in a row locking database. i.e. max((field1/field2)*field3) is
> still going to cost more to process, isn't it?

That sort of MAX() would be difficult to optimize in almost any case, and would mandate doing a scan across the relevant portion of the table...

... Unless you had a functional index on (field1/field2)*field3, in which case it might well be that this would cost Still Less.

I still can't fathom what this has to do with MVCC; you have yet to actually connect it with that...

-- 
let name="cbbrowne" and tld="ntlug.org" in String.concat "@" [name;tld];;
http://www3.sympatico.ca/cbbrowne/lsf.html
"Cars move huge weights at high speeds by controlling violent explosions
many times a second.  ...car analogies are always fatal..."
Re: [PERFORM] slow plan for min/max
Scott,

> Not exactly. While max(id) is easily optimized by query replacement,
> more complex aggregates will still have performance issues that would not
> be present in a row locking database. i.e. max((field1/field2)*field3) is
> still going to cost more to process, isn't it?

Sorry, no. The issue has nothing to do with MVCC. It has everything to do with the fact that PostgreSQL allows you to create your own aggregates using functions in any of 11 languages. This forces the planner to treat aggregates as a "black box" which does not allow index utilization, because the planner simply doesn't know what the aggregate is doing internally.

To put it another way, the planner sees SUM() or CONCAT() -- which require table scans, as they must include all values -- as identical to MAX() and MIN(). Escaping this would require programming a special exception for MAX() and MIN() into the planner and parser.

This has been discussed numerous times on HACKERS; the problem is, making special exceptions for MAX() and MIN() would then make it very difficult to implement MAX() or MIN() for new data types, as well as requiring a lot of debugging in numerous places. So far, nobody has been frustrated enough to spend 3 months tackling the problem.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco
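The "black box" point is visible in how aggregates are declared. A user-defined minimum (sketch below; the function and aggregate names are illustrative) looks identical to the built-in MIN() from the planner's point of view:

```sql
-- The planner sees only a state-transition function and a state type;
-- nothing tells it this aggregate returns the smallest input and could
-- therefore be answered from an index.
CREATE FUNCTION int4_smaller(int4, int4) RETURNS int4 AS
    'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END'
    LANGUAGE sql;

CREATE AGGREGATE my_min (
    basetype = int4,
    sfunc    = int4_smaller,
    stype    = int4
);
```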
Re: [PERFORM] slow plan for min/max
>This is a Frequently asked question about something that isn't likely to
>change any time soon.

You're right, it is in the FAQ, but pretty well buried. It is entirely non-obvious to most people that min() and max() don't/can't use indices. Something so counterintuitive should be explicitly and prominently advertised, especially since the "order by X limit 1" workaround is so simple.

Actually, referring down to later parts of this thread, why can't this optimisation be performed internally for built-in types? I understand the issue with aggregates over user-defined types, but surely optimising max() for int4, text, etc. is safe and easy?

Of course I may be so far out of my depth as to be drowning, in which case please put me out of my misery.

M
Re: [PERFORM] slow plan for min/max
> Actually, referring down to later parts of this thread, why can't this
> optimisation be performed internally for built-in types? I understand the
> issue with aggregates over user-defined types, but surely optimising max()
> for int4, text, etc is safe and easy?

Sorry, I missed the bit about user-defined functions. So I should have said built-in functions operating over built-in types. Which does sound more complicated, but anyone redefining max() is surely not in a position to seek sympathy if they lose performance?
Re: [PERFORM] slow plan for min/max
"Matt Clark" <[EMAIL PROTECTED]> writes:
> Actually, referring down to later parts of this thread, why can't this
> optimisation be performed internally for built-in types? I understand the
> issue with aggregates over user-defined types, but surely optimising max()
> for int4, text, etc is safe and easy?

I can't see that the datatype involved has anything to do with it. None of the issues that come up in making the planner do this are datatype-specific. You could possibly avoid adding some columns to pg_aggregate if you instead hard-wired the equivalent knowledge (for builtin types only) into some code somewhere, but a patch that approached it that way would be rejected as unmaintainable.

			regards, tom lane
Re: [PERFORM] slow plan for min/max
> "Matt Clark" <[EMAIL PROTECTED]> writes:
>> Actually, referring down to later parts of this thread, why can't this
>> optimisation be performed internally for built-in types? I understand the
>> issue with aggregates over user-defined types, but surely optimising max()
>> for int4, text, etc is safe and easy?
>
> I can't see that the datatype involved has anything to do with it.
> None of the issues that come up in making the planner do this are
> datatype-specific.  You could possibly avoid adding some columns
> to pg_aggregate if you instead hard-wired the equivalent knowledge
> (for builtin types only) into some code somewhere, but a patch that
> approached it that way would be rejected as unmaintainable.

I don't pretend to have any useful knowledge of the internals of this, so much of what I write may seem like noise to you guys.

The naive question is 'I have an index on X, so finding max(X) should be trivial, so why can't the planner exploit that triviality?'. AFAICS the short sophisticated answer is that it just isn't trivial in the general case. Upon rereading the docs on aggregates I see that it really isn't trivial at all. Not even knowing things like 'this index uses the same function as this aggregate' gets you very far, because of the very general nature of the implementation of aggs.

So it should be flagged very prominently in the docs that max() and min() are almost always not what 90% of people want to use 90% of the time, because indexes do the same job much better for anything other than tiny tables.

Know what we (OK, I) need? An explicitly non-aggregate max() and min(), implemented differently, so they can be optimised. Let's call them idx_max() and idx_min(), which completely bypass the standard aggregate code. Because let's face it, in most cases where you regularly want a max or a min you have an index defined, and you want the DB to use it.

And I would volunteer to do it, I would, but you really don't want my C in your project ;-) I do volunteer to do some doc tweaking though - who do I talk to?

M
Re: [PERFORM] slow plan for min/max
"Matt Clark" <[EMAIL PROTECTED]> writes:
> Know what we (OK, I) need? An explicitly non-aggregate max() and min(),
> implemented differently, so they can be optimised.

Not per se. The way I've been visualizing this is that we add to pg_aggregate a column named, say, aggsortop, with the definition:

	zero: no index optimization possible
	not zero: OID of a comparison operator ('<' or '>')

A nonzero entry means that the aggregate's value is the same as the first item of the aggregate's input when sorted by the given operator. (So MIN uses the '<' operator for its datatype and MAX uses '>'.) Of course we have to add a clause to CREATE AGGREGATE to allow this to be set for max/min aggregates of user-defined types. But that's just a small matter of programming.

This gives us all the type-specific info we need; the aggsortop can be matched against the opclasses of indexes to figure out whether a particular index is relevant to a particular max or min call.

The hard part comes in teaching the planner to use this information intelligently. Exactly which queries is it even *possible* to use the transformation for? Which queries is it really a win for? (Obviously it's not if there's no matching index, but even if there is, the presence of WHERE or GROUP BY clauses has got to affect the answer.) How do you structure the resulting query plan, if it's at all complex (think multiple aggregate calls...)?

I'm not clear on the answers to any of those questions, so I'm not volunteering to try to code it up ...

			regards, tom lane
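If Tom's aggsortop column existed, the CREATE AGGREGATE clause he alludes to might look roughly like this (hypothetical syntax; no such clause exists as of this thread):

```sql
CREATE AGGREGATE my_max (
    basetype = int4,
    sfunc    = int4larger,
    stype    = int4,
    sortop   = >     -- aggsortop: result equals the first input sorted by '>'
);

-- which would license the planner to rewrite
--   SELECT my_max(x) FROM t;
-- as the index-scan equivalent
--   SELECT x FROM t ORDER BY x DESC LIMIT 1;
```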
[PERFORM] Quick question
Tom,

Back in the 7.0 days,

	WHERE EXISTS (SELECT * FROM a WHERE condition)

was significantly slower on broad tables than

	WHERE EXISTS (SELECT small_col FROM a WHERE condition)

Is this still true, or something that's been fixed in the last 3 versions?

Joe Celko is making fun of me because Oracle doesn't have this performance issue.

-- 
Josh Berkus
 Aglio Database Solutions
 San Francisco
Re: [PERFORM] Quick question
Josh Berkus <[EMAIL PROTECTED]> writes:
> Back in the 7.0 days,
>	WHERE EXISTS (SELECT * FROM a WHERE condition)
> was significantly slower on broad tables than
>	WHERE EXISTS (SELECT small_col FROM a WHERE condition)
> Is this still true, or something that's been fixed in the last 3 versions?

It's still true that all the sub-select's output columns will be evaluated. Given that this happens for at most one row, I'm not sure how significant the hit really is. But it's annoying, seeing that the outer EXISTS doesn't care what the column values are.

> Joe Celko is making fun of me because Oracle doesn't have this performance
> issue.

Perhaps Joe can tell us exactly which part of SQL92 says it's okay not to evaluate side-effect-producing functions in the targetlist of an EXISTS subselect.

I would like to make the system change the targetlist to just "SELECT 1" in an EXISTS subquery. But I'm slightly concerned about changing the semantics of existing queries. If someone can produce proof that this is allowed (or even better, required) by the SQL spec, it'd be easier...

			regards, tom lane
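By hand, the change Tom would like the system to make automatically looks like this (table names are illustrative):

```sql
-- Today: every column of the subselect's target list is evaluated,
-- even though EXISTS only asks whether a row exists:
SELECT o.id FROM orders o
 WHERE EXISTS (SELECT * FROM line_items li WHERE li.order_id = o.id);

-- Writing a constant target list avoids evaluating the columns:
SELECT o.id FROM orders o
 WHERE EXISTS (SELECT 1 FROM line_items li WHERE li.order_id = o.id);
```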
[PERFORM] Slow query?
Hi All,

Is it usual for the following query to take 22 secs with the machine I have? Any other reason? Hope I have provided all the details needed.

Thanks,
Waruna

Tables:

/* Table structure for table "tvDiary" */
CREATE TABLE "tvDiary" (
  "member" int4 NOT NULL,
  "timeSlot" int2 NOT NULL references "timeSlot"("code"),
  "channel" varchar(4) NOT NULL references "tvChannel"("code"),
  "date" date NOT NULL,
  CONSTRAINT "tvDiary_pkey" PRIMARY KEY ("date", "member", "timeSlot")
);
Indexed on "date"

/* Table structure for table "mDiary" */
CREATE TABLE "mDiary" (
  "member" int4 NOT NULL,
  "area" char(1) NOT NULL,
  "district" int2 references "district"("code"),
  "date" date NOT NULL,
  CONSTRAINT "mDiary_pkey" PRIMARY KEY ("date", "member")
);
Indexed on "date"

# Records
tvDiary : 7 300 000
mDiary  : 850 000

machine: Celeron 1.0GHz, RAM 390MB, 40 GB IDE HDD
RedHat Linux 9
kernel.shmmni = 4096
kernel.shmall = 33554432
kernel.shmmax = 134217728

postgres 7.3.4
shared_buffers = 8192
sort_mem = 65536

Query:

SELECT COUNT(td.member) AS count, td.date AS date, td."timeSlot" AS "timeSlot",
       td.channel AS channel, tg.district AS district, tg.area AS area
  FROM "tvDiary" td,
       (SELECT DISTINCT(md.member) AS member, md.area AS area, md.district AS district
          FROM "mDiary" md
         WHERE (md.date BETWEEN '20020301' AND '20020330')
       ) AS tg
 WHERE (td.date BETWEEN '20020301' AND '20020330')
   AND (td.member = tg.member)
 GROUP BY td.date, td."timeSlot", td.channel, tg.district, tg.area;

QUERY PLAN
---
 Aggregate  (cost=91790.44..100942.65 rows=52298 width=28) (actual time=18396.42..21764.44 rows=57478 loops=1)
   ->  Group  (cost=91790.44..99635.19 rows=522983 width=28) (actual time=18396.34..21158.23 rows=281733 loops=1)
         ->  Sort  (cost=91790.44..93097.90 rows=522983 width=28) (actual time=18396.30..18588.91 rows=281733 loops=1)
               Sort Key: td.date, td."timeSlot", td.channel, tg.district, tg.area
               ->  Merge Join  (cost=34290.10..42116.42 rows=522983 width=28) (actual time=8159.30..10513.62 rows=281733 loops=1)
                     Merge Cond: ("outer".member = "inner".member)
                     ->  Sort  (cost=29121.48..29755.35 rows=253551 width=17) (actual time=6752.36..6933.38 rows=282552 loops=1)
                           Sort Key: td.member
                           ->  Index Scan using d_tvdiary_key on "tvDiary" td  (cost=0.00..6362.82 rows=253551 width=17) (actual time=95.80..4766.25 rows=282587 loops=1)
                                 Index Cond: ((date >= '2002-03-01'::date) AND (date <= '2002-03-30'::date))
                     ->  Sort  (cost=5168.63..5179.26 rows=4251 width=11) (actual time=1406.88..1590.72 rows=281955 loops=1)
                           Sort Key: tg.member
                           ->  Subquery Scan tg  (cost=4487.31..4912.42 rows=4251 width=11) (actual time=1228.55..1397.20 rows=2348 loops=1)
                                 ->  Unique  (cost=4487.31..4912.42 rows=4251 width=11) (actual time=1228.52..1390.12 rows=2348 loops=1)
                                       ->  Sort  (cost=4487.31..4593.59 rows=42511 width=11) (actual time=1228.51..1257.87 rows=46206 loops=1)
                                             Sort Key: member, area, district
                                             ->  Index Scan using d_mdiary_key on "mDiary" md  (cost=0.00..1219.17 rows=42511 width=11) (actual time=60.20..750.67 rows=46206 loops=1)
                                                   Index Cond: ((date >= '2002-03-01'::date) AND (date <= '2002-03-30'::date))
 Total runtime: 21992.24 msec
(19 rows)
[PERFORM] Explain Doc
Hi List,

Could anyone point me to documentation that explains the "explain" output and how to analyze it?

Best regards,

Rhaoni Chiu Pereira
Sistêmica Computadores

Visit us on the Web: http://sistemica.info
Fone/Fax : +55 51 3328 1122
Re: [PERFORM] Explain Doc
On Mon, 08-sep-2003 at 16:29, Rhaoni Chiu Pereira wrote:
> Could anyone tell me a documentation that explains the " explain " result
> and how to analyze it ?

http://archives.postgresql.org/pgsql-performance/2003-09/msg0.php

Regards,

-- 
Alberto Caso Palomino
Adaptia Soluciones Integrales
http://www.adaptia.net
[EMAIL PROTECTED]