>I'm curious to see if Aaron can test his structure on 9.3 with the
>original data and WHERE clause and see if the planner still goes for
>the
>terrible plan. If it does, that would seem like an obvious planner
>tweak
>to me.
I will try to spin up a test 9.3 db and
>The PK of the master table and the PK of the detail table cannot be
>the same thing, or they would not have a master-detail relationship.
>One side has to be an FK, not a PK.
>
Of course this is correct. I was trying to make the point that there should be
unique indices (of whatever flavor PG
Will get what you asked for ASAP. Thanks for your time.
--
Aaron
On June 25, 2014 5:55:29 PM EDT, Shaun Thomas wrote:
>On 06/25/2014 04:40 PM, Aaron Weber wrote:
>
>> In the meantime, I guess I wasn't clear about some other particulars
>> The query's where clause
Thus, there should be only the explicit matches to the IN clause returned, and
if PostgreSQL isn't using the unique index on that column, I would be very
shocked (to the point I would suggest there is a bug somewhere).
An IN with 50 int values took 23sec to return (by way of example).
Thanks
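The unique-index expectation above is easy to sanity-check; a minimal sketch, with table and column names invented for illustration (not the poster's actual schema):

```sql
-- Hypothetical sketch: confirm the planner uses a unique index for an IN list.
-- "detail" and "master_id" are invented names.
CREATE TABLE detail (
    master_id integer NOT NULL UNIQUE,
    payload   text
);

-- With fresh statistics, this should show an Index Scan (or Bitmap Index
-- Scan) on the unique index rather than a Seq Scan:
ANALYZE detail;
EXPLAIN ANALYZE
SELECT * FROM detail WHERE master_id IN (1, 2, 3, 4, 5);
```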
On Thu, Oct 7, 2010 at 2:47 PM, Greg Smith wrote:
> Aaron Turner wrote:
>>
>> Are newer PG versions more memory efficient?
>>
>
> Moving from PostgreSQL 8.1 to 8.3 or later should make everything you do
> happen 2X to 3X faster, before even taking into account
On Thu, Oct 7, 2010 at 12:02 PM, Stephen Frost wrote:
> * Aaron Turner (synfina...@gmail.com) wrote:
>> Basically, each connection is taking about 100MB resident
>
> Errr.. Given that your shared buffers are around 100M, I think you're
> confusing what you see in top wi
of PG pointing at the same files, one
read-only and one read-write with different memory profiles, so I
assume my only real option is throw more RAM at it. I don't have $$$
for another array/server for a master/slave right now. Or perhaps
tweaking my .conf file? Are newer PG versions
Thanks for the response, Kevin. Answers interspersed below.
On 8/17/10 10:18 AM, "Kevin Grittner" wrote:
> "Aaron Burnett" wrote:
>
>> Postgres Version 8.25
>
> Do you mean 8.2.5? (There is no PostgreSQL version 8.25.)
>
Yeah, missed a
From: Mark Rostron [mailto:mrost...@ql2.com]
Sent: Mon 8/16/2010 9:51 PM
To: Aaron Burnett; pgsql-performance@postgresql.org
Subject: RE: Very poor performance
This is weird - is there a particular combination of memberid/answered in
answerselectindex that has a very high rowcount?
First
is a slave to a slony replicated
master/slave setup.
Again, apologies if the formatting got munged, the attached text file has the
same info.
Thanking you in advance for any help and suggestions.
Aaron
explain analyze select distinct(id) from member where id in (select memberid
from answersele
On Tue, Jun 16, 2009 at 6:36 PM, Robert Haas wrote:
> On Tue, Jun 16, 2009 at 9:23 PM, Aaron Turner wrote:
>> DELETE FROM muapp.pcap_store AS x
>> USING muapp.pcap_store AS a
>> LEFT JOIN muapp.pcap_store_log b ON a.pcap_storeid =
>> b.pcap_s
On Tue, Jun 16, 2009 at 5:30 PM, Robert Haas wrote:
> On Tue, Jun 16, 2009 at 7:39 PM, Aaron Turner wrote:
>> On Tue, Jun 16, 2009 at 2:37 PM, Alvaro
>> Herrera wrote:
>>> Aaron Turner escribió:
>>>> I'm trying to figure out how to optimize th
On Tue, Jun 16, 2009 at 2:37 PM, Alvaro
Herrera wrote:
> Aaron Turner escribió:
>> I'm trying to figure out how to optimize this query (yes, I ran
>> vacuum/analyze):
>>
>> musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid
>
E CASCADE
As you see, the sequence scan on pcap_store is killing me, even though
there appears to be a perfectly good index. Is there a better way to
construct this query?
Thanks,
Aaron
--
Aaron Turner
http://synfin.net/
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Win
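From the fragments above, the DELETE appears to target pcap_store rows with no matching pcap_store_log entry. One common rewrite that avoids the self-join is NOT EXISTS; this is a guess at the intent, not the thread's actual fix:

```sql
-- Hedged sketch only: assumes the goal is deleting pcap_store rows that
-- have no pcap_store_log entry (inferred from the truncated query above).
DELETE FROM muapp.pcap_store AS a
WHERE NOT EXISTS (
    SELECT 1
    FROM muapp.pcap_store_log AS b
    WHERE b.pcap_storeid = a.pcap_storeid
);
```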
On Thu, Mar 5, 2009 at 12:58 PM, Joshua D. Drake wrote:
> What happens if you do this:
>
> SET cpu_tuple_cost TO '0.5';
> SET cpu_index_tuple_cost TO '0.5';
> EXPLAIN ANALYZE 8.3 query
>
> Next try this:
>
> SET cpu_tuple_cost TO '0.5';
> SET cpu_index_tuple_cost TO '0.5';
> SET seq_page_cost
On Thu, Mar 5, 2009 at 12:58 PM, Joshua D. Drake wrote:
> What happens if you do this:
>
> SET cpu_tuple_cost TO '0.5';
> SET cpu_index_tuple_cost TO '0.5';
> EXPLAIN ANALYZE 8.3 query
>
Right now, I'm getting very good results with the above. I'm still running
additional tests but I'll keep
Matching query plans with numerics changed to integers.
I sent the wrong query plans earlier
8.3.3: 1195 ms
8.2.12: 611 ms
2nd part of table descriptions
     Table "public.company"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
Query and first part of the table descriptions
Query:
explain analyze
select distinct on
(t8.id) t8.id,
t8.payment_id,
t8.amount_id,
t8.active,
t8.or
On Tue, Mar 3, 2009 at 5:34 PM, Tom Lane wrote:
> Comparisons on
> numerics aren't terribly fast though (in either release). I wonder
> whether you could change the key columns to int or bigint.
I changed the affected columns from numeric to integers and I was unable to
get any performance gain.
On Tue, Mar 3, 2009 at 12:38 PM, wrote:
> if you haven't done a vacuum analyse on either installation then postgres'
> idea of what sort of data is in the database is unpredictable, and as a
> result it's not surprising that the two systems guess differently about what
> sort of plan is going to
On Mon, Mar 2, 2009 at 10:23 PM, Tom Lane wrote:
> Are you sure you are comparing apples to apples here? Same configure
> options for the builds, same parameter values in postgresql.conf, both
> databases ANALYZEd, etc? And are they running on the same hardware?
>
Thank you for looking at this
Hi,
We are currently running postgres 8.2 and are evaluating the upgrade to 8.3.
Some of our tests are indicating that postgresql 8.3 is actually degrading
the
performance of some of our queries by a factor of 10 or more. The queries
in
question are selects that are heavy on joins (~10 tables) w
a problem, but if the sender is stalling because it has a
small window, waiting for an ack to be received that could cause a
large slow down.
Do the ACKs include any data? If so, it's indicative of the PG
networking protocol overhead- probably not much you can do about that.
Without looking
Having your DB box 5 hops away is going to add a lot of latency
and any packet loss is going to kill TCP throughput- especially if you
increase window sizes. I'd recommend something like "mtr" to map the
network traffic (make sure you run it both ways in case you have an
asymmetric routing
55 PM
To: Parks, Aaron B.
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Intermitent slow queries
Among other possibilities, there's a known problem with slow memory
leaks in various JVM's under circumstances similar to those you are
describing.
The behavior you are desc
ay, May 02, 2007 2:18 PM
To: Parks, Aaron B.
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Intermitent slow queries
On 2-May-07, at 11:24 AM, Parks, Aaron B. wrote:
My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram
running RHEL4 is acting kind of odd and I
My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram
running RHEL4 is acting kind of odd and I thought I would see if anybody
has any hints.
I have Java program using postgresql-8.1-409.jdbc3.jar to connect over
the network. In general it works very well. I have run batch upda
though
the value of t28.s (as determined by the inner index scan where t28.o
= 'spec') could (should?) theoretically be known to it. If it did, then
I imagine it would realize that a nested loop join starting with t1.s
= t28.s (which is very selective) would be much cheaper than doing
t1.s = t28.s to the outer node of the left join.
but so far, I cannot. So, my questions are:
1) Is my 'desired' query plan logic correct
2) Can the executor execute a plan such as my 'desired' plan
3) If (1) and (2) are 'yes', then how may I get the planner to
most of the settings in the postgresql.conf actually
dropped performance significantly. Looks like I'm starving the disk
cache.
4) I'm going to assume going to a bytea helped some (width is 54 vs
66) but nothing really measurable
Thanks everyone for your help!
--
Aaron Turner
http://synfin.net/
On 2/12/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> Aaron Turner <[EMAIL PROTECTED]> writes:
> > Well before I go about re-architecting things, it would be good to
> > have a strong understanding of just what is going on. Obviously, the
> > unique index on the char
the transaction due to drop index.
Yep. In my case it's not a huge problem right now, but I know it will
become a serious one sooner or later.
Thanks a lot Marc. Lots of useful info.
--
Aaron Turner
http://synfin.net/
---(end of broadcast)---
TIP 6: explain analyze is your friend
On 2/11/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Fri, Feb 10, 2006 at 09:24:39AM -0800, Aaron Turner wrote:
> > On 2/10/06, Matthew T. O'Connor wrote:
> > > Aaron Turner wrote:
> >
> > Basically, I need some way to optimize PG so that I don
On 2/10/06, Matthew T. O'Connor wrote:
> Aaron Turner wrote:
> > So I'm trying to figure out how to optimize my PG install (8.0.3) to
> > get better performance without dropping one of my indexes.
>
> What about something like this:
>
> begin;
> drop s
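The truncated suggestion above looks like the usual drop-and-rebuild-inside-a-transaction pattern; a sketch with invented names, not Matthew's exact commands:

```sql
-- All object names are invented. DROP INDEX takes an exclusive lock on the
-- table until COMMIT, so concurrent writers block for the whole load.
BEGIN;
DROP INDEX big_table_col_key;
COPY big_table FROM '/tmp/bulk_load.csv' CSV;
CREATE UNIQUE INDEX big_table_col_key ON big_table (col);
COMMIT;
```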
On 2/10/06, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On 2/10/06, Aaron Turner <[EMAIL PROTECTED]> wrote:
> > So I'm trying to figure out how to optimize my PG install (8.0.3) to
> > get better performance without dropping one of my indexes.
> > B
0rpm) for WAL
other then throwing more spindles at the problem, any suggestions?
Thanks,
Aaron
--
Aaron Turner
http://synfin.net/
that
the one CPU that was pegged at near 100%, leading me to believe it
was CPU bound. Odd thing is that the same operation on a 2GHz Pentium
IV box (Linux) on the same data took about a day. Truss showed that
a great majority of that time was in userland.
-Aaron
difference over the set difference of the
operation match.
/Aaron
On Thu, 21 Oct 2004 17:34:17 +0300, Victor Ciurus <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm writing this because I've reached the limit of my imagination and
> patience! So here is it...
>
> 2
The intuitive thing would be to put pg into a file system.
/Aaron
On Thu, 21 Oct 2004 12:44:10 +0200, Leeuw van der, Tim
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I guess the difference is in 'severe hacking inside PG' vs. 'some unknown amount of
> hacking that doe
If anyone is going to take the train all the way, please e-mail me
offline. There is a train station relatively close to the event (NY to
Philly then the R5 to Malvern), but it's not within walking distance, so
we'll figure out some way to pick people up from there.
Thanks,
to
meet someone.
/Aaron
On Tue, 19 Oct 2004 14:43:29 -0400, Max Baker <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 13, 2004 at 12:21:27PM -0400, Aaron Mulder wrote:
> > All,
> > My company (Chariot Solutions) is sponsoring a day of free
> > PostgreSQL training by
).
Also - don't forget that just rebuilding a database cleanly can dramatically
improve performance.
The only dbms I know that indexes views is MS SQL Server 2000, where it is a
limited form of materialized queries. pg doesn't do MQs, but check out
functional indices.
/Aaron
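A functional index, as suggested, can stand in for an indexed view in many lookup cases; a minimal sketch, where the "email" column on the member table is invented for illustration:

```sql
-- "email" is an invented column, used only to show the pattern.
CREATE INDEX member_lower_email_idx ON member (lower(email));

-- The query must repeat the indexed expression exactly for the planner
-- to consider the functional index:
EXPLAIN SELECT id FROM member WHERE lower(email) = 'aaron@example.com';
```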
such as image processing that has major architectural
risk including an order of magnitude more semaphores, but can reduce
some extra block copies)
mmap doesn't look that promising.
/Aaron
- Original Message -
From: "Kevin Brown" <[EMAIL PROTECTED]>
To: <[EMAIL PR
On Wed, 13 Oct 2004 09:23:47 -0700, Bryan Encina
<[EMAIL PROTECTED]> wrote:
>
> Wow, that's good stuff, too bad there's no one doing stuff like that in the
> Los Angeles area.
>
> -b
That makes two of us. Hanging out with Tom, Bruce, and others at OSCON
2002 was one of the most informative and fun
ning
Registration is required, and space is limited. The location is
Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30. For
more information or to register, see
http://chariotsolutions.com/postgresql.jsp
Thanks,
Aaron
Makes sense. See DB2 8.2 info on their new implementation of cross column
statistics. If this is common and you're willing to change code, you can
fake that by adding a functional index on some hash function of both columns,
and search for both columns and the hash.
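The hash trick described above might be sketched like this (all names invented; the explicit equality tests on a and b guard against hash collisions):

```sql
-- Index a hash of both columns so the planner gets one selective key.
CREATE INDEX t_ab_hash_idx ON t (md5(a::text || '|' || b::text));

-- Search on the hash *and* the real columns: the hash narrows the scan,
-- the column predicates filter out any collisions.
SELECT *
FROM t
WHERE md5(a::text || '|' || b::text) = md5('foo' || '|' || 'bar')
  AND a = 'foo' AND b = 'bar';
```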
- Original Message -
Fr
- Original Message -
From: "Gabriele Bartolini" <[EMAIL PROTECTED]>
To: "Aaron Werman" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, October 07, 2004 1:07 PM
Subject: Re: [PERFORM] Data warehousing requirements
> At 13.30 07/10/2004
w volume if
you can pay the write performance penalty. To size hardware you need to
estimate load in terms of transaction type (I usually make bucket categories
of small, medium, and large effort needs) and transaction rate. Then try to
estimate how much CPU and I/O they'll use.
/Aaron
rough
> tight integration with the front-end application.
This looks like you're building an object store to support a custom app that
periodically or on demand pulls rdbms data mart data. The description of the
use seems either static, suggesting summary tables or dynamic, suggesting
that
Some quick notes:
- Using a side effect of a function to update the
database feels bad to me
- how long does the SELECT into varQueryRecord
md5(upc.keyp
function take / what does its explain look
like?
- There are a lot of non-indexed columns on that
delta master table, such as key
or a big table loaded monthly.)
- Original Message -
From: "Jeff" <[EMAIL PROTECTED]>
To: "Mitch Pirtle" <[EMAIL PROTECTED]>
Cc: "Aaron Werman" <[EMAIL PROTECTED]>; "Scott Kirkwood"
<[EMAIL PROTECTED]>; "Neil Conway" <[EMAI
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Aaron Werman" <[EMAIL PROTECTED]>
Cc: "Iain" <[EMAIL PROTECTED]>; "Jim C. Nasby" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Tuesday, September 28, 20
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: "Jim C. Nasby" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, September 27, 2004 11:17 PM
Subject: Re: [PERFORM] Caching of Queries
> "Iain" <[EMAIL PROTECTED]> writes:
> > I can only te
stored *result* is returned. The point is that a lot of
systems do lots of static queries, such as a pick list on a web page - but
if the data changes the prior result is returned. This (plus a stable jdbc
driver) was the reason MySQL did well in the eWeek database comparison.
/Aaron
From: "Harald Lau (Sector-X)" <[EMAIL PROTECTED]>
...
> From: "Mischa Sandberg" <[EMAIL PROTECTED]>
>
> > If your company is currently happy with MySQL, there probably are
> > other (nontechnical) reasons to stick with it. I'm impressed that
> > you'd consider reconsidering PG.
>
> I'd like to seco
tions
it isn't appreciated: the users are used to the old system quirks, and the
improvements only leave them uncomfortable since the system "acts
differently". (I've broken the rule on occasion for standardization
conversions.)
My expectation is that pg will not get a fair shake
You got it.. 7.3 (should have mentioned that). We're planning to
upgrade to 8.0 anyway in the future, so it's good to know. Thanks!
-Aaron
On Wed, 18 Aug 2004 17:39:21 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
> Aaron Birkland <[EMAIL PROTECTED]> writes:
Hi,
I noticed an interesting phenomenon when loading (COPY) some tables
from a file. For some reason, one load was taking longer than I
assumed it would. As it turns out, one of the columns was an array
containing elements that were of a user defined type. Using strace
(on linux) and truss (on
Title: Message
The words for the keyword can be made up of a
sentence, i.e. 10 or more keywords to one entry.
Also, in case I didn't answer before, we are using
TSearch2 and all tables have been fully analyzed and indexed.
Any other suggestions?
How long do searches take when 10 000 rows are
returned?
tions into trouble during
stress: bulk loads, etc. Most DBAs end up on RAID 10 for its predictability
and performance.
/Aaron
- Original Message -
From: "Alan Stange" <[EMAIL PROTECTED]>
To: "Joseph Shraibman" <[EMAIL PROTECTED]>
Cc: "J. Andrew R
Your second server has queuing (load averages are highish), only 2 processes
running, and almost all cycles are idle. You need to track down your
bottleneck. Have you looked at iostat/vmstat? I think it would be useful to
post these, ideally both before and after full vacuum analyze.
/Aaron
By definition, it is equivalent to:
SELECT t1.id, t2.url FROM referral_temp t2 LEFT /*OUTER*/ JOIN d_referral t1
ON t2.url = t1.referral_raw_url
union all
SELECT null, url FROM referral_temp WHERE url is null
ORDER BY 1;
/Aaron
- Original Message -
From: "Joe Conway"
we're talking about 30G and all modern
FSs support >2G files.
My tendency would be to stay on ext3, since it is
the default RH FS. I would review site preference and the SAN recommended FS and
see if they add any compelling points.
/Aaron
- Original Message -
From
so useful in showing stress in your system only when it is
being stressed.
Only partly in jest,
/Aaron
BTW - I am truly curious about what happens to your system if you use
separate RAID 0+1 for your logs, disk sorts, and at least the most active
tables. This should reduce I/O load by an order of
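On 8.0 and later, the separate-arrays idea above can be expressed with tablespaces; the path and names here are invented:

```sql
-- Put the hottest table on its own array (path and names hypothetical).
CREATE TABLESPACE fast_array LOCATION '/mnt/raid10_a/pgdata';
ALTER TABLE busy_table SET TABLESPACE fast_array;

-- From 8.3 on, disk sorts can be pointed at their own spindles too:
SET temp_tablespaces = 'fast_array';
```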
fully recoverable processes, such as
loading an empty server from a file.
/Aaron
- Original Message -
From: "Richard Huxton" <[EMAIL PROTECTED]>
To: "huang yaqin" <[EMAIL PROTECTED]>; "Tom Lane" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
- Original Message -
From: "Josh Berkus" <[EMAIL PROTECTED]>
To: "Aaron Werman" <[EMAIL PROTECTED]>; "Qing Zhao" <[EMAIL PROTECTED]>;
"Tom Lane" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, April 06, 2
without buying hardware.
Conceptually, an RDBMS converts slow random I/O into in memory processing
and sequential logging writes. If successful, it should reduce the I/O
overhead.
/Aaron
onal
validation outside the dbms (and the apps behave therefore like object
databases and surrogate keys are network database pointers) and in data
warehousing (where downstream data cannot be corrected anyway).
/Aaron
- Original Message -
From: "Leeuw van der, Tim" <[EMAIL PROTE
Almost any cross dbms migration shows a drop in performance. The engine
effectively trains developers and administrators in what works and what
doesn't. The initial migration thus compares a tuned to an untuned version.
/Aaron
- Original Message -
From: "Josh Berkus" <
leading columns?
/Aaron
- Original Message -
From: "Josh Berkus" <[EMAIL PROTECTED]>
To: "Palle Girgensohn" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, April 01, 2004 7:35 PM
Subject: Re: [PERFORM] single index on more than two columns a ba
Are you talking about
http://www.potentialtech.com/wmoran/postgresql.php#conclusion
- Original Message -
From: "Subbiah, Stalin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "Matt Clark" <[EMAIL PROTECTED]>; "Subbiah, Stalin"
<[EMAIL PROTECTED]>; "'Andrew Sullivan'" <[EMAIL PROTECTED]>;
<[
Quick observations:
1. We have an explanation for what's going on, based on the message being
exactly 666 lines long :-)
2. I'm clueless on the output, but perhaps Tom can see something. A quick
glance shows that the strace seemed to run 27 seconds, during which it did:
 count | call
-------+------
I'll try to put together a
sample of a slow mass join update.
/Aaron
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Rosser Schwarz" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, March 15, 2004 7:08 PM
Subject: Re: [PERF
run and no index
overhead. Then alter/rename, add indexes and whatever else hangs off the
table (or if you're lazy do an insert/select into the original target
table). I often see 2 orders of magnitude improvement doing this, and no
need to vacuum.
/Aaron
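The load-then-swap pattern described above, sketched with invented names:

```sql
-- Build the replacement with no indexes so the bulk load runs fast,
-- then index and swap names. All names are invented.
CREATE TABLE target_new AS SELECT * FROM staging_source;
CREATE INDEX target_new_key_idx ON target_new (key);

BEGIN;
ALTER TABLE target RENAME TO target_old;
ALTER TABLE target_new RENAME TO target;
COMMIT;
```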
- Original Message -
From:
MS's are the prime beneficiaries of the drive speed
improvements - since logging, backups, and copies are about the only things
(ignoring bad perl scripts and find commands) that generate loads of 50+
MB/sec.
/Aaron
I'd look at adding more disks first. Depending on what
type of query
load you get, that box sounds like it will be very
much I/O bound
Given a 13G database on a 12G system, with a low
growth rate, it is likely that there is almost no I/O
for most activities. The exception is checkpointing.