Brian Cox writes:
> Tom Lane [...@sss.pgh.pa.us] wrote:
>> OK, so what's the entry for column ts_id?
> Is this what you requested? Brian
Yup. So according to those stats, all ts_id values fall in the range
61 .. 6000250068. It's no wonder it's not
expecting to find anyt
Tom Lane [...@sss.pgh.pa.us] wrote:
OK, so what's the entry for column ts_id?
Is this what you requested? Brian
cemdb=# select * from pg_stats where
tablename='ts_stats_transetgroup_user_daily' and attname = 'ts_id';
schemaname |tablename | attname | null_frac |
avg_
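For reference, the planner's range estimate comes from the histogram stored in pg_stats; a narrower query like the sketch below (standard pg_stats columns, same table and column as above) shows just the fields that drive that estimate:
-- only the stats fields relevant to the rowcount estimate
select n_distinct, null_frac, correlation, histogram_bounds
from pg_stats
where tablename = 'ts_stats_transetgroup_user_daily' and attname = 'ts_id';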
Brian Cox writes:
> Tom Lane [...@sss.pgh.pa.us] wrote:
>> ... Do you have ANALYZE stats for
>> ts_stats_transetgroup_user_daily at all (look in pg_stats)?
> postgres 8.3.5. Yes, here's a count(*) from pg_stats:
> 186
OK, so what's the entry for column ts_id?
regard
Tom Lane [...@sss.pgh.pa.us] wrote:
This seems like kind of a stupid plan anyway (which PG version was this
exactly?) but certainly the big issue is the catastrophically bad
rowcount estimate for the indexscan. Do you have ANALYZE stats for
ts_stats_transetgroup_user_daily at all (look in pg_sta
Brian Cox writes:
> Tom Lane [...@sss.pgh.pa.us] wrote:
>> Um, are you sure that is the query that PID 7397 is running? It doesn't
>> match your previous pg_stat_activity printout, nor do I see anything
>> about partitioning by PKs.
> Umm, indeed. I had to construct the query by hand and left ou
Tom Lane [...@sss.pgh.pa.us] wrote:
Um, are you sure that is the query that PID 7397 is running? It doesn't
match your previous pg_stat_activity printout, nor do I see anything
about partitioning by PKs.
Umm, indeed. I had to construct the query by hand and left out the
partition part. Here's
Brian Cox writes:
> Here's the explain and a current strace and lsof. The strace shows even
> less I/O activity.
> cemdb=# explain select * from ts_stats_transetgroup_user_daily a where
> a.ts_id in (select b.ts_id from ts_stats_transetgroup_user_daily
> b,ts_stats_transet_user_interval c, ts_
Tom Lane [...@sss.pgh.pa.us] wrote:
That is a pretty odd trace for a Postgres backend; apparently it's
repeatedly acquiring and releasing a meg or two worth of memory, which
is not very normal within a single query. Can you tell us more about
the query it's running? An EXPLAIN plan would be par
Bryce Ewing writes:
> So it seems to me that once the index is in memory everything is fine
> with the world, but the loading of the index into memory is horrendous.
So it would seem. What's the disk hardware on this machine?
It's possible that part of the problem is table bloat, leading to th
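A quick way to get a feel for bloat is to compare the planner's page and tuple counts with the on-disk size; this is only a sketch, and the relation name below is a stand-in for the actual table behind the slow index:
-- rough bloat check: far more pages than live tuples would suggest bloat
select relname, relpages, reltuples,
       pg_size_pretty(pg_relation_size(oid)) as on_disk
from pg_class
where relname = 'your_table_here';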
Brian Cox writes:
> [r...@rdl64xeoserv01 log]# strace -p 7397
> Process 7397 attached - interrupt to quit
> munmap(0x95393000, 1052672) = 0
> munmap(0x95494000, 528384) = 0
> munmap(0x95515000, 266240) = 0
> brk(0x8603000) = 0x8603000
Whit Armstrong writes:
> I have a column which only has six states or values.
> Is there a size advantage to using an enum for this data type?
> Currently I have it defined as a character(1).
Nope. enums are always 4 bytes. char(1) is going to take 2 bytes
(assuming those six values are simple
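The widths are easy to confirm with pg_column_size(); the enum type, table, and values below are made up purely for illustration:
create type sixstate as enum ('a','b','c','d','e','f');  -- hypothetical six states
create table width_test (e sixstate, c char(1));
insert into width_test values ('a', 'a');
select pg_column_size(e) as enum_bytes,   -- 4
       pg_column_size(c) as char1_bytes   -- 2 (1-byte varlena header + 1 data byte)
from width_test;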
I have a column which only has six states or values.
Is there a size advantage to using an enum for this data type?
Currently I have it defined as a character(1).
This table has about 600 million rows, so it could wind up making a
difference in total size.
Thanks,
Whit
The nested loops (which are due to the joins) don't seem to be part of
the problem at all. The main time taken (actual time, that is)
is in this part:
Index Scan using event_20090526_domain_idx on event_20090526 e
(cost=0.00..10694.13 rows=3606 width=1276) (actual
time=50.233..143
There are 4 threads (4 postgres processes) loading all rows from a table
with 50,018 rows. The table has an int8 PK that is incremented by 1 for
each new row and the PK is used by the threads to partition the rows so
that each loads distinct rows. As you can see below, these 4 SELECTs
have been
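A sketch of the kind of partitioned SELECT being described; the table and column names are placeholders, and each thread gets a disjoint PK range:
-- thread N of 4 loads only its slice of the key space (names are hypothetical)
select * from the_table
where pk_id > 12504 and pk_id <= 25009;   -- e.g. the second quarter of ~50,018 rows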
Yes, I analyze after each replication.
Mark Steben│Database Administrator│
@utoRevenue® "Join the Revenue-tion"
95 Ashley Ave. West Springfield, MA., 01089
413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)
@utoRevenue is a registered trademark and a division of Dominion Enterprises
> We have two machines. Both running Linux Redhat, both running postgres 8.2.5.
> Both have nearly identical 125 GB databases. In fact we use PITR Recovery to
> replicate from one to the other.
I have to ask the obvious question. Do you regularly analyze the machine
you replicate to?
Dave
2009/6/17 Mark Steben :
> A few details – I can always provide more
Could you send:
1. Exact text of query.
2. EXPLAIN ANALYZE output on each machine.
3. VACUUM VERBOSE output on each machine, or at least the last 10 lines.
...Robert
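For anyone collecting the same information, items 2 and 3 of the list above boil down to running the following on each machine (with the real query pasted in):
explain analyze <query>;   -- compare estimated vs. actual row counts on both machines
vacuum verbose;            -- the FSM summary is in the last few lines of output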
Hi, sorry about the blank post yesterday - let's try again
We have two machines. Both running Linux Redhat, both running postgres 8.2.5.
Both have nearly identical 125 GB databases. In fact we use PITR Recovery to
replicate from one to the other. The machine we replicate to runs a query
w
Alberto Dalmaso wrote:
> what does it mean using join_collapse_limit = 3
http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER
-Kevin
Sorry, I missed this response.
I'm entirely new to PostgreSQL and have yet to figure out how to use
EXPLAIN ANALYZE on a function. I think I realize where the problem is,
though (the loop); I simply do not know how to fix it ;).
GLPK and CBC, thanks, I'll look into those. You're right, the very
nat
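EXPLAIN ANALYZE can't look inside a PL/pgSQL function directly; one workaround, sketched here on the assumption of PostgreSQL 8.4+ with the contrib auto_explain module, is to have the server log the plans of the statements the function runs:
load 'auto_explain';
set auto_explain.log_min_duration = 0;        -- log a plan for every statement
set auto_explain.log_analyze = on;            -- include actual times and row counts
set auto_explain.log_nested_statements = on;  -- include statements run inside functions
select my_function(42);                       -- hypothetical call; plans show up in the server log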
What I sent is the quick execution; with other parameters this
query simply doesn't come to an end.
It is the little query that, when I change the settings (using the defaults
with all the query analyzer options on), becomes really quick, while with these
settings (with some analyzer options switched off) it becomes very
Alberto Dalmaso writes:
> P.S.: I'm trying with all enable_* set to on and pumping from_collapse_limit
> and join_collapse_limit to higher values; I've put them to 30.
> The result is that the query, after an hour of work, goes out of memory
> (SQL State 53200)...
Hmm, is that happening during plannin
I promised to provide more details of the query (or the function as it is).
Here goes.
Scenario:
A chemotherapy regimen requires chair time and nursing time. A patient might
sit in the chair for three hours but the nurse only has to be with them for the
first hour. Therefore, nurses can ma
Thanks for the replies everyone. I'll try to answer them all in this one email.
I will send another email immediately after this with additional details about
the query.
> - Frequently the biggest performance gains can be reached by
> a (painful) redesign. Can you change the table structure in
Alberto Dalmaso wrote:
> P.S.: I'm trying with all enable_* set to on and pumping from_collapse_limit
> and join_collapse_limit to higher values; I've put them to 30.
Tom suggested that you set those numbers higher than the number of
tables joined in the query. I don't think 30 will do that.
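Concretely, Tom's suggestion amounts to something like the following (40 is just an example; the point is to exceed the number of relations in the query's FROM/JOIN list, at the cost of longer planning):
set from_collapse_limit = 40;
set join_collapse_limit = 40;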
Alberto Dalmaso wrote:
> Ok, here are the last rows for the vacuum analyze verbose
>
> INFO: free space map contains 154679 pages in 39 relations
> DETAIL: A total of 126176 page slots are in use (including
> overhead).
> 126176 page slots are required to track all free space.
> Current limits
Scott Marlowe writes:
> Without looking at the explain just yet, it seems to me that you are
> constraining the order of joins to insist that the left joins be done
> first, then the regular joins second, because of your mix of explicit
> and implicit join syntax. The query planner is constrained
On Tue, Jun 16, 2009 at 2:35 PM, Hartman, Matthew wrote:
> Good afternoon.
>
> I have developed an application to efficiently schedule chemotherapy
> patients at our hospital. The application takes into account several
> resource constraints (available chairs, available nurses, nurse coverage
> ass
Tom Lane wrote:
Matthew Wakeling writes:
I'm guessing my next step is to install a version of libc with debugging
symbols?
Yeah, if you want to find out what's happening in libc, that's what you
need.
Getting callgraph information from oprofile would also help. Although it
won't directly t
Yes, I have to do that because the data in the table needs to be
pivoted, so the table is joined many times with a different filter on the
column that describes the meaning of the numeric_value column I'm going
to show.
That could be very inefficient, not least because that table contains
something like 2
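One common way to avoid joining the same table once per "meaning" is a single pass with conditional aggregation; the table, key, and meaning values below are invented, since the real schema isn't shown:
-- one scan instead of N self-joins (all names hypothetical)
select key_id,
       max(case when meaning = 'height' then numeric_value end) as height,
       max(case when meaning = 'weight' then numeric_value end) as weight,
       max(case when meaning = 'age'    then numeric_value end) as age
from measurements
group by key_id;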
On Wed, Jun 17, 2009 at 8:33 AM, Albe Laurenz wrote:
>
> I don't understand your data model well enough to understand
> the query, so I can only give you general hints (which you probably
> already know):
He is effectively joining the same table 4 times in a for loop to get
the result; this is veeery in
Alan Li wrote:
Hi,
It seems that a COPY of 8M rows into a table on 8.4rc1 takes >30% longer
than it does on 8.3.7 on Solaris.
Here are the steps I've taken to reproduce this problem on two different
solaris boxes (Solaris 10 11/06 s10x_u3wos_10 X86 and Solaris 10 8/07
s10x_u4wos_12b X86). I'
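For anyone repeating the comparison, the same load can be timed identically on both servers; the file path and table name here are placeholders:
\timing
copy bench_table from '/tmp/8m_rows.copy';   -- identical input file on 8.3.7 and 8.4rc1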
Matthew Hartman wrote:
> To determine the available slots, the algorithm finds the earliest slot
> that has an available chair and a count of the required concurrent
> intervals afterwards. So a 60 minute regimen requires 12 concurrent
> rows. This is accomplished by joining the table on itself. A
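A rough sketch of that self-join (every name is invented, and it assumes 5-minute slot rows, so a 60-minute regimen needs 12 consecutive free rows for the same chair):
-- earliest start slot with 12 free 5-minute intervals in a row for one chair
select s.chair_id, s.slot_time
from chair_slots s
join chair_slots later
  on later.chair_id = s.chair_id
 and later.slot_time >= s.slot_time
 and later.slot_time <  s.slot_time + interval '60 minutes'
where s.is_free and later.is_free
group by s.chair_id, s.slot_time
having count(*) = 12
order by s.slot_time
limit 1;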
Alberto Dalmaso wrote:
[...]
> in the explanation I'll see that the db use nasted loop.
[...]
Sorry for the remark off topic, but I *love* the term
"nasted loop". It should not go to oblivion unnoticed.
Yours,
Laurenz Albe
Ok, here are the last rows for the vacuum analyze verbose
INFO: free space map contains 154679 pages in 39 relations
DETAIL: A total of 126176 page slots are in use (including overhead).
126176 page slots are required to track all free space.
Current limits are: 16 page slots, 5000 relation
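On 8.2 and 8.3 (these settings are gone in 8.4) the figures in that DETAIL line can be checked against the configured limits:
show max_fsm_pages;       -- should be at least the "page slots required" figure above
show max_fsm_relations;   -- should be at least the number of relations being tracked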