On Thu, Feb 25, 2010 at 6:29 PM, Yu-Ju Hong wrote:
> Thanks for the reply.
>
> On Thu, Feb 25, 2010 at 5:48 PM, Greg Smith wrote:
>>
>> Yu-Ju Hong wrote:
>>>
>>> 2. Moreover, the disk utilization was high and the "await" time from
>>> iostat is around 500 ms. Could disk I/O limit the overall throughput?
On Sun, Feb 28, 2010 at 3:19 PM, Josh Berkus wrote:
> All,
>
> I'm seeing in a production database two problems with query rowcount
> estimation:
>
> (1) Estimates for the number of rows in an outer join do not take into
> account any constraint exclusion (CE) in operation.
>
> (2) Row estimates d
On Fri, Feb 26, 2010 at 6:24 PM, Tory M Blue wrote:
> Ya my boxes are pretty well stacked, but a question. How does one get
> the memory usage of a query? You say to look at explain analyze, but
> this gives timing and costs; is one of the numbers memory, or do I
> have to take values and do s
On Wed, Mar 3, 2010 at 4:45 PM, Josh Berkus wrote:
>> I feel like I've seen these way-too-high row estimates in some other
>> postings to -performance, but I'm not sure if it was the same issue.
>> You don't by chance have a RTC? I don't think it's likely fixed in 9.0
>> but it would be interesting
On Mon, Mar 1, 2010 at 2:29 PM, Anj Adu wrote:
> When I use intervals in my query e.g col1 between current_timestamp -
> interval '10 days' and current_timestamp...the optimizer checks ALL
> partitions whereas if I use col1 between 2 hardcoded dates..only
> the applicable partitions are scanned.
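For illustration (a sketch with a hypothetical table partitioned on col1; not from the original message), constraint exclusion can only prune when the comparison values are constants at plan time:

  -- bounds are plan-time constants, so only the matching partitions are scanned
  SELECT * FROM events WHERE col1 BETWEEN '2010-02-20' AND '2010-03-02';

  -- bounds are expressions, so the planner cannot exclude any partition
  SELECT * FROM events
  WHERE col1 BETWEEN current_timestamp - interval '10 days' AND current_timestamp;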
On Sun, Mar 7, 2010 at 12:57 PM, Josh Berkus wrote:
>>> Yeah, I can generate one pretty easily; the behavior is readily
>>> observable and repeatable. Will get on it RSN, but as you said, we're
>>> not doing anything about it for 9.0.
>
> Well, I can generate a test case, but on examination it tu
On Wed, Mar 10, 2010 at 6:04 AM, Yeb Havinga wrote:
> sverha...@wps-nl.com wrote:
>>
>> > Thanks - I'm sorry that I was not more specific earlier, but what would
>> > be *really* helpful is the output of explain analyze, since that also
>> > shows actual time, # rows and # loops of the inner nestloop
On Wed, Mar 10, 2010 at 5:37 PM, Tom Lane wrote:
> Robert Haas writes:
>> It does seem like once the materialize step is done we could notice
>> that the tuplestore is empty and, given that uses no outer variables
>> or parameters and therefore will never be re-executed,
On Sun, Mar 14, 2010 at 4:01 AM, David Newall
wrote:
> an expected 40 - 45GB of compressed output. CPU load is 100% on the core
> executing pg_dump, and negligible on all others cores. The system is
> read-mostly, and largely idle. The exact invocation was:
>
> nohup time pg_dump -f databas
On Mon, Mar 15, 2010 at 11:58 AM, Matthew Wakeling wrote:
> On Thu, 25 Feb 2010, Bruce Momjian wrote:
>>
>> Was there every any conclusion on this issue?
>
> Not really. Comments inline:
>
>> Matthew Wakeling wrote:
>>>
>>> Revisiting the thread a month back or so, I'm still investigating
>>> perf
On Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick wrote:
> I'm running 8.4.2 and have noticed a similar heavy preference for
> sequential scans and hash joins over index scans and nested loops. Our
> database can basically fit in cache 100%, so this may not be
> applicable to your situation, but t
On Wed, Mar 24, 2010 at 8:59 PM, Eger, Patrick wrote:
> Ok, the wording is a bit unclear in the documentation as to whether it is the
> cost for an entire *page* of tuples, or actual tuples. So something like the
> following might give better results for a fully-cached DB?
>
> seq_page_cost = 1.
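For a fully cached database, the advice that usually comes up (illustrative values only, to be verified against your own workload) is to make the sequential and random page costs equal and small, for example:

  SET seq_page_cost = 0.1;
  SET random_page_cost = 0.1;  -- equal to seq_page_cost, since cached pages cost the same either way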
On Mon, Mar 22, 2010 at 6:19 PM, Carlo Stonebanks
wrote:
> Thanks for the insight. How much more of a server's resources will be
> consumed by an ANALYZE with default_statistics_target = 100?
I don't think it will be much of a problem, especially since
autovacuum will do only the tables that need
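If you would rather measure than guess, a simple sketch (hypothetical table name; run it with timing enabled in psql) is to compare an ANALYZE at each target:

  SET default_statistics_target = 10;   -- the pre-8.4 default
  ANALYZE some_large_table;
  SET default_statistics_target = 100;  -- the 8.4 default
  ANALYZE some_large_table;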
On Thu, Mar 25, 2010 at 3:57 PM, Faheem Mitha wrote:
>
> Hi everyone,
>
> I've been trying to reduce both memory usage and runtime for a query.
> Comments/suggestions gratefully received. Details are at
>
> http://bulldog.duhs.duke.edu/~faheem/snppy/opt.pdf
>
> See particularly Section 1 - Background.
On Mon, Mar 29, 2010 at 2:31 PM, Faheem Mitha wrote:
>> It's not really too clear to me from reading this what specific
>> questions you're trying to answer.
>
> Quote from opt.{tex/pdf}, Section 1:
>
> "If I have to I can use Section~\ref{ped_hybrid} and
> Section~\ref{tped_hybrid}, but I am left
On Mon, Mar 29, 2010 at 2:00 AM, Tadipathri Raghu wrote:
> I have noticed one more thing here: if you turn off fsync and try
> to run the transaction, then it breaks the current filenode and generates
> another filenode. Is it true that whenever you turn fsync off or on the
> filen
On Fri, Mar 26, 2010 at 7:57 PM, Richard Yen wrote:
> Note that it is constantly paging in, but never paging out. This would
> indicate that it's constantly reading from swap, but never writing out to it.
> Why would postgres do this? (postgres is pretty much the only thing running
> on this
On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha wrote:
> Sure, but define sane setting, please. I guess part of the point is that I'm
> trying to keep memory low, and it seems this is not part of the planner's
> priorities. That is, it does not take memory usage into consideration when
> choosing a
On Wed, Mar 31, 2010 at 6:10 AM, Faheem Mitha wrote:
>
> [If Kevin Grittner reads this, please fix your email address. I am getting
> bounces from your email address.]
>
> On Tue, 30 Mar 2010, Robert Haas wrote:
>
>> On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha
>
On Wed, Mar 31, 2010 at 4:37 PM, Scott Carey wrote:
> On Mar 27, 2010, at 6:35 AM, Andy Colson wrote:
>>
>> Dont "VACUUM FULL", its not helping you, and is being removed in newer
>> versions.
>>
>
> Off topic: How is that going to work? CLUSTER doesn't work on tables
> without an index. I wou
On Thu, Apr 1, 2010 at 2:15 PM, Faheem Mitha wrote:
> I had set the foreign keys in question (on the geno table) to be primary
> keys. This is because this setup is basically a glorified spreadsheet, and I
> don't want more than one cell corresponding to a particular tuple of
> idlink.id and anno.
On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen
wrote:
> About a year ago we setup a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
> we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to
On Sun, Apr 4, 2010 at 4:52 PM, Robert Haas wrote:
> On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen
> wrote:
>> About a year ago we setup a machine with sixteen 15k disk spindles on
>> Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
>>
On Thu, Apr 1, 2010 at 9:47 AM, raghavendra t wrote:
> and deletes. We also have the weekly maintenance VACUUM, but reindex still
> takes a lot of time.
If you only VACUUM once a week, *everything* is going to take a lot of time.
...Robert
On Fri, Apr 2, 2010 at 2:19 PM, Joel Jacobson wrote:
> Is this a bug? I'm using version 8.4.1.
It's not really a bug, but it's definitely not a feature either.
> Limit (cost=0.00..43.46 rows=1 width=4) (actual time=1023.213..1023.214
> rows=1 loops=1)
> -> Index Scan using transactions_pke
On Tue, Apr 6, 2010 at 6:30 PM, Joel Jacobson wrote:
> Actually, swapping the order of the conditions did in fact make some
> difference, strange.
>
> I ran the query a couple of times for each variation to see if the
> difference in speed was just a coincidence or a pattern. Looks like the
> spe
On Tue, Apr 6, 2010 at 10:32 PM, Craig Ringer
wrote:
> On 7/04/2010 12:24 AM, Brian Cox wrote:
>>
>> On 04/06/2010 01:18 AM, Craig Ringer [cr...@postnewspapers.com.au] wrote:
>>>
>>> I'm wondering if the issue is with strace rather than Pg. That is to
>>> say, that strace is trying to print:
>>
>>
On Tue, Apr 6, 2010 at 5:37 PM, Samuel Gendler
wrote:
> In part, I'm surprised that the index scan takes as long as it does,
> since I'd think an index would be able to return the set of keys
> relatively quickly. But that's a secondary issue.
We don't actually have a facility built into the ind
On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
>> max_fsm_pages = 1600
>> max_fsm_relations = 625000
>> synchronous_commit = off
>
> You are playing with fire here. You should never turn this off unless
> you do not care if your data becomes irrecoverably corrupted.
That is not correct.
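With synchronous_commit = off, a crash can lose the most recently committed transactions, but it does not corrupt the database; fsync = off is the setting that risks corruption. The setting can also be scoped to individual transactions, for example (hypothetical table):

  BEGIN;
  SET LOCAL synchronous_commit = off;          -- applies only to this transaction
  INSERT INTO audit_log (msg) VALUES ('example non-critical write');
  COMMIT;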
On Wed, Apr 7, 2010 at 10:50 PM, David Rees wrote:
> On Wed, Apr 7, 2010 at 7:06 PM, Craig James
> wrote:
>> On 4/7/10 5:47 PM, Robert Haas wrote:
>>> On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
>>>>> synchronous_commit = off
>>>>
>&g
2010/4/8 Merlin Moncure :
> previous to 8.2, to get good performance on zabbix you need to
> aggressively vacuum the heavily updated tables yourself.
Generally if you DON'T vacuum aggressively enough, then vacuums will
take a really long and painful amount of time, perhaps accounting for
the "hang
On Tue, Apr 6, 2010 at 8:42 PM, norn wrote:
> I have some mysterious slowdowns with ORDER BY and LIMIT. When LIMIT
> gets greater than some value (greater than 3 in my case), the query
> takes 4-5 secs instead of 0.25ms. All of the necessary indexes are in
> place. I have no idea what to do, so an
On Sat, Apr 10, 2010 at 4:47 PM, Ben Chobot wrote:
> My understanding is that this generally happens because the plan should be
> different for the different values, but the first time the function is run it
> caches the plan for one of the values and will never use the appropriate plan
> for t
On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
>
> I've got a J2EE app that loads data into the DB, it's got logic behind it so
> i
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> that thought occurred to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.
The window to run ANALYZE usefully is pretty short. If you run it
before the load is complete, y
On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka wrote:
> The reason this is not done is that the mechanism used for fetching a piece
> of the results at a time can change the query plan used if using a
> PreparedStatement. There are three ways to plan a PreparedStatement:
>
> a) Using the exact param
I think it's really tough to say how this is going to perform. I'd
recommend constructing a couple of simplified test cases and
benchmarking the heck out of it. One of the problems with temporary
tables is that every time you create a temporary table, it creates a
(temporary) record in pg_class;
On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
> I have a DB with small and large tables that can go up to 15G.
> For performance benefits, it appears that analyze has much less cost
> than vacuum, but the same benefits?
Err, no. ANALYZE gathers statistics for the query planner; VACUUM
clears out dead row versions so the space can be reused.
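A quick sketch of the distinction (hypothetical table name):

  ANALYZE big_table;          -- samples rows and refreshes planner statistics; relatively cheap
  VACUUM big_table;           -- scans for dead row versions so their space can be reused; costlier
  VACUUM ANALYZE big_table;   -- does both in one pass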
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
> I don't think this is just an issue with statistics, because the same
> problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but I think what you should be fo
On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
wrote:
> 2010/4/23 Robert Haas :
>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
>>> I don't think this is just an issue with statistics, because the same
>>> problem arises when I try executing a query l
On Fri, Apr 23, 2010 at 3:22 PM, Cédric Villemain
wrote:
> 2010/4/23 Robert Haas :
>> On Fri, Apr 23, 2010 at 9:09 AM, Cédric Villemain
>> wrote:
>>> 2010/4/23 Robert Haas :
>>>> On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov
>>>> wrote
On Fri, Apr 23, 2010 at 4:42 PM, Eliot Gable
wrote:
> And, from these tests, it would be significant overhead.
Yeah, I've been very disappointed by the size of the function-call
overhead on many occasions. It might be worth putting some effort
into seeing if there's anything that can be done abo
On Fri, Apr 23, 2010 at 6:53 PM, Tom Lane wrote:
> Robert Haas writes:
>> Hmm. We currently have a heuristic that we don't record a value as an
>> MCV unless it's more frequent than the average frequency. When the
>> number of MCVs is substantially smaller than t
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain
wrote:
> In the first query, the planner doesn't use the information of the 2,3,4.
> It just does: I'll bet I'll have 2 rows in t1 (I think it should
> say 3, but it doesn't).
> So it divides the estimated number of rows in the t2 table by 5
> (di
On Thu, Apr 22, 2010 at 4:42 PM, Rick wrote:
> On Apr 22, 2:55 pm, robertmh...@gmail.com (Robert Haas) wrote:
>> On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
>> > I have a DB with small and large tables that can go up to 15G.
>> > For performance benefits, it appears
On Mon, Apr 26, 2010 at 7:24 PM, Anj Adu wrote:
> I have a 16G box, and tmpfs is configured to use 8G.
>
> Is a lot of memory being wasted that could be used for Postgres? (I am
> not seeing any performance issues, but I am not clear how Linux uses
> the tmpfs and how Postgres would be af
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov wrote:
> Even if that is done, it does not solve the original issue. If I
> understood you correctly, there is currently no decent way of speeding up the query
>
> select *
> from t2
> join t1 on t1.t = t2.t
> where t1.id = X;
>
> except propagating
On Fri, Apr 30, 2010 at 6:50 PM, Josh Berkus wrote:
> Which is the opposite of my experience; currently we have several
> clients who have issues which required more-frequent analyzes on
> specific tables.
That's all fine, but probably not too relevant to the original
complaint - the OP backed off
On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe wrote:
> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus wrote:
>> Which is the opposite of my experience; currently we have several
>> clients who have issues which required more-frequent analyzes on
>> specific tables. Before 8.4, vacuuming more fre
On Sat, May 1, 2010 at 1:11 PM, Greg Smith wrote:
> Robert Haas wrote:
>>
>> I don't have a stake in the ground on what the right settings are, but
>> I think it's fair to say that if you vacuum OR analyze much less
>> frequently than what we recommend my de
er side and then, within that, the ordering of
the inner side, presuming (not quite sure how to phrase this) that the
outer side is "unique enough" with respect to the ordering. I'm not
too sure whether there's anything useful we can do with this
information in a reasonable
aps dd).
One thing I've noticed (to my chagrin) is that if pg_restore is given
a set of options that are incompatible with parallel restore, it just
does a single-threaded restore. The options you've specified look
right to me, but, again, examining exactly what is going on during the
restor
ing at that plan might give you
a better idea what is really happening.
(Note that you might need to use PREPARE and EXPLAIN EXECUTE to get
the same plan the function is generating internally, rather than just
EXPLAIN.)
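For example (hypothetical table and parameter):

  PREPARE q(integer) AS
    SELECT * FROM orders WHERE customer_id = $1;
  EXPLAIN EXECUTE q(42);   -- shows the parameterized plan, as a PL/pgSQL function would see it
  DEALLOCATE q;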
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
would probably help, a lot.
The points already made about LIMIT are also right on target.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
about
> 10k matches (rows) it eats up all memory and I get "out of memory"
> error.
I'm not sure exactly what's happening in your particular case, but
there is some known suckage in this area.
http://archives.postgresql.org/pgsql-hackers/2010-05/msg00230.php
http://arc
cpu_xxx_cost parameters when you have a fully-cached
> database, but what I've seen people recommending for that condition
> is to set them both to the same value in the vicinity of 0.1 or 0.01
> or so. If it's only mostly cached you might try intermediate settings.
I have had t
space, right? Twice the columns = twice the
memory.
What I'd be curious to know is how accurate the memory estimates are -
figure out what the lowest value of work_mem needed to get a
particular plan is and then compare that to the amount of memory used
when you execute the query...
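One way to run that experiment (hypothetical query; the point is to watch where the plan or sort method changes):

  SET work_mem = '1MB';
  EXPLAIN ANALYZE SELECT * FROM orders ORDER BY total_amount;
  -- the Sort node reports e.g. "external merge  Disk: ..." here;
  -- raise work_mem until it switches to an in-memory quicksort
  SET work_mem = '64MB';
  EXPLAIN ANALYZE SELECT * FROM orders ORDER BY total_amount;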
--
Robert Haas
tgresql.org/pgadmin-support/
If you're looking for documentation of the explain format in general,
you might read the PostgreSQL documentation for explain.
http://www.postgresql.org/docs/current/static/sql-explain.html
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Tue, Mar 2, 2010 at 12:23 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane wrote:
>>> Partially. There are stats now but autovacuum is not bright about
>>> when to update them.
>
>> Is that something you're plan
if that's accurate or
not...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
slowed things down by
50% between 8.4.3 and 8.4.4, there'd be an awful lot of people
screaming about it...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
ated to return 204,276 rows and actually
returned zero... it might work better to rewrite this part of the
query as a join, if you can:
node_id=(select node_id from dev4_act_node where node='10.90.100.2')
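A sketch of that rewrite (the outer table name is made up, since it is not shown here):

  SELECT f.*
  FROM some_fact_table f
  JOIN dev4_act_node n ON n.node_id = f.node_id
  WHERE n.node = '10.90.100.2';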
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
y performs much better on 8.1.9 on a similar sized
> table (although random_page_cost is 4 on 8.1.9 and 2 on 8.4.0).
Well that could certainly matter...
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
a". It would be called "timestamp with time zone".
Can we see the index and table definitions of the relevant tables
(attached as a text file) and the size of each one (use select
pg_relation_size('name'))?
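For the sizes, something along these lines (substitute the real table and index names):

  SELECT pg_size_pretty(pg_relation_size('some_table'));
  SELECT pg_size_pretty(pg_relation_size('some_table_pkey'));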
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Fri, Jun 4, 2010 at 12:40 AM, Yogesh Naik
wrote:
> I am performing a DB insertion and update for 3000+ records and while doing
> so I get CPU utilization
> of 100%, with 67% of the CPU used by postgres
That sounds normal to me. What would you expect to happen?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
and you haven't provided them... I think it's going
to be hard to troubleshoot this without seeing those definitions (and
also the sizes, which I asked for in my previous email).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
t carefully.
The default value for wal_buffers is 64kB. I can't imagine why you'd
need to increase that by four orders of magnitude. I'm not sure
whether it will cause you a problem or not, but you're allocating
quite a lot of shared memory that way that you might not rea
e
this to some degree by compiling PG with a larger block size. Have
you tried this to see whether/how much/for what kind of workloads it
helps?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
l. And even to the extent
that you can identify the stuff that's performance-related, it's far
from obvious which things are most important. Any thoughts on that?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
uld give you the ability to do this (by making all your tables
unlogged).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
> 8.4 and above need that anymore. I think 8.3 does that too, but I'm
> not 100% sure.
8.4 (and 9.0) do still need to do vacuums to freeze tuples before
transaction ID wraparound occurs. This is not to be confused with
VACUUM FULL, which is something else altogether.
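Those anti-wraparound (freeze) vacuums are driven by transaction-ID age, which can be watched with a query like this (illustrative, not from the original thread):

  SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY age(relfrozenxid) DESC
  LIMIT 10;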
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
On Wed, Jun 23, 2010 at 2:20 PM, Scott Marlowe wrote:
> On Wed, Jun 23, 2010 at 1:58 PM, Robert Haas wrote:
>> On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe
>> wrote:
>>>> The largest consequence I can see at the moment is that when I get a
>>>> ful
Consider a non-crash-safe wal_level that eliminates WAL activity
>
> *
> http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php
I don't think we need a system-wide setting for that. I believe that
the unlogged tables I'm working on wil
>>>
>>> - Having ACID on the users table is of course a must ;
>>> - for the sessions table you can drop the "D"
>>
>> You're trying to solve a different use-case than the one I am.
>>
>> Your use-case will be solved by global temporary
relname
>> --
>> rownum_temp
>> (1 row)
>
> What's the full row? I'd just add a "WHERE relkind = 'r'" to the above
> query anyway.
Yeah - also, it would probably be good to call pg_relation_size on
pg_class.oid
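Putting those suggestions together, a sketch of the size query looks like:

  SELECT relname, pg_size_pretty(pg_relation_size(oid))
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_relation_size(oid) DESC;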
On Thu, Jun 24, 2010 at 10:55 AM, Anj Adu wrote:
> What would you recommend to do a quick test for this? (i.e., WAL on an
> internal disk vs. WAL on the 12-disk raid array)?
Maybe just pgbench?
http://archives.postgresql.org/pgsql-performance/2010-06/msg00223.php
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
>> The patch also documents that synchronous_commit = false has
>> potential committed transaction loss from a database crash (as well as
>> an OS crash).
Is this actually true?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
On Tue, Jun 29, 2010 at 9:32 AM, Bruce Momjian wrote:
> Robert Haas wrote:
>> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
>> >> The patch also documents that synchronous_commit = false has
>> >> potential committed transaction loss from a database cr
column might help a lot - then it
could consider index-scanning for the matching rows and sorting them
afterwards.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
010-06/msg00144.php
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
amount of
tuning of your existing schema is going to produce that kind of result
on PostgreSQL. Using the full-text search stuff, or a gin index of
some kind, might get you closer, but it's hard to beat a
special-purpose engine that implements exactly the right algorithm for
your use case.
--
heckpoint spike.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
results on the first call and stash
them in the FuncCallContext (funcctx->user_fctx); and then on
subsequent calls just return one row per call.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
-recursive term, the
optimizer would need to prove that this doesn't change the final
results. I think that's possible here because it so happens that your
recursive term only generates results that have the same name, date,
and tid as some existing result, but with a slightly differ
e in a position to sponsor a developer there are a
number of companies that will be happy to work with you.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Wed, Jul 7, 2010 at 9:31 AM, Michal Fapso wrote:
> thank you for your help. I tried to cluster the table on
> hyps_wordid_index and the query execution time dropped from 4.43 to
> 0.19 seconds, which is not that far from Lucene's performance of 0.10
> seconds.
Dang. Nice!
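For reference, the command being described is along these lines (the table name is inferred from the index name, so treat it as an assumption; the USING form is 8.3+ syntax):

  CLUSTER hyps USING hyps_wordid_index;
  ANALYZE hyps;   -- refresh statistics after the rows have been physically reordered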
ry became quick
> (~200ms). Now one classic SATA disk computer is faster than our little monster
> server!
Have you tried running ANALYZE on the production server?
You might also want to try ALTER TABLE ... SET STATISTICS to a large
value on some of the join columns involved in the qu
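For example (hypothetical table and column; the new target takes effect at the next ANALYZE):

  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
  ANALYZE orders;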
When it's complete, WAL segments completely written
before the start of the checkpoint should be recyclable. Unless I'm
confused, which apparently I am.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
will maybe set enable_nestloop to off, but I don't think it's a
> good solution, other query have a chance to get slower.
Yeah, that usually works out poorly.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
arallel query execution. I'm glad there
are good ones available, but the fact that they're absolutely
necessary for good performance in some environments is not a feature.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
ociated with having a large number of
backends in the system. That's not a connection pooler either, but it
might reduce the need for one.
Still another approach is admission control based on transactions,
backends, queries, memory usage, I/O, or what have you.
None of these things are mutually ex
...) command
...which would set those variables just for that one command. But
honestly I'm not sure how much it'll help with query planner problems.
Disabling nestloops altogether, even for one particular query, is
often going to be a sledgehammer where you need a scalpel. But
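The closest thing that exists today is SET LOCAL inside a transaction, which scopes the change to that transaction rather than a single command (sketch with a hypothetical query):

  BEGIN;
  SET LOCAL enable_nestloop = off;
  SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
  COMMIT;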
em is harder for us because a backend can't switch identities
once it's been assigned to a database. I haven't heard an adequate
explanation of why that couldn't be changed, though.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
wrote:
> So rather than asking "should core have a connection pool" perhaps
> what's needed is to ask "what can an in-core pool do that an external
> pool cannot do?"
Avoid sending every connection through an extra
On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing wrote:
> On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:
>> On Mon, Jul 12, 2010 at 6:58 AM, Craig Ringer
>> wrote:
>> > So rather than asking "should core have a connection pool" perhaps
>> > what
ng a new process. The best case scenario is
that you can keep some of them around, in which case, great.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Fri, Jul 23, 2010 at 11:58 AM, Hannu Krosing wrote:
> On Thu, 2010-07-22 at 20:57 -0400, Robert Haas wrote:
>> On Thu, Jul 22, 2010 at 3:15 PM, Hannu Krosing wrote:
>> > On Thu, 2010-07-22 at 14:36 -0400, Robert Haas wrote:
>> >> On Mon, Jul 12, 2010 at 6:58
pired()
function is not too speedy.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
On Sat, Jul 24, 2010 at 2:23 AM, Craig Ringer
wrote:
> On 24/07/10 01:28, Robert Haas wrote:
>
>> Well, if we could change the backends so that they could fully
>> reinitialize themselves (disconnect from a database to which they are
>> bound, etc.), I don't see w
On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund wrote:
>>>> The problem is harder for us because a backend can't switch identities
>>>> once it's been assigned to a database. I