Ondrej Ivanič writes:
> The question is why the parameter f is not exposed as a GUC?
What would that accomplish that default_statistics_target doesn't?
(Other than being much harder to explain...)
regards, tom lane
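(For reference, a minimal sketch of the knob Tom mentions; the value 500 and the database name are purely illustrative.)

  SET default_statistics_target = 500;                        -- current session only
  ALTER DATABASE mydb SET default_statistics_target = 500;    -- new sessions in this database
  ANALYZE;                                                    -- rebuild statistics with the larger sample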
Thanks guys.
Ah, PL/Java - of course. I would miss writing the code right in the SQL
script, but that would have been true of C as well.
None of these procedures really qualify as stored procs that move data;
rather they are scalar functions used for fuzzy string comparisons based on
our own dom
On 12/27/2011 05:54 PM, Merlin Moncure wrote:
On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule wrote:
Hello
2011/12/27 Carlo Stonebanks:
We are currently using pltclu as our PL of choice AFTER plpgSql.
I'd like to know if anyone can comment on the performance costs of the
various PL language
On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule wrote:
> Hello
>
> 2011/12/27 Carlo Stonebanks :
>> We are currently using pltclu as our PL of choice AFTER plpgSql.
>>
>> I'd like to know if anyone can comment on the performance costs of the
>> various PL languages BESIDES C. For example, does pltc
Hi,
On 28 December 2011 05:12, Tom Lane wrote:
> Possibly raising the stats target on emsg_messages would help.
In the function std_typanalyze() there is this comment:
/*
* The following choice of minrows is based on the paper
* "Random sampling for histog
Hello
2011/12/27 Carlo Stonebanks :
> We are currently using pltclu as our PL of choice AFTER plpgSql.
>
> I'd like to know if anyone can comment on the performance costs of the
> various PL languages BESIDES C. For example, does pltclu instantiate faster
> than pltcl (presumably because it uses a
We are currently using pltclu as our PL of choice AFTER plpgSql.
I'd like to know if anyone can comment on the performance costs of the
various PL languages BESIDES C. For example, does pltclu instantiate faster
than pltcl (presumably because it uses a shared interpreter?) Is Perl more
lightweight
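(A rough way to answer this kind of question empirically: a hedged micro-benchmark sketch, assuming the relevant languages are already installed with CREATE LANGUAGE; the trivial function bodies are placeholders.)

  CREATE OR REPLACE FUNCTION add_one_plpgsql(i int) RETURNS int AS $$
  BEGIN
      RETURN i + 1;
  END;
  $$ LANGUAGE plpgsql;

  CREATE OR REPLACE FUNCTION add_one_pltcl(i int) RETURNS int AS $$
      return [expr $1 + 1]
  $$ LANGUAGE pltcl;

  -- With \timing enabled in psql, call each many times and compare:
  SELECT sum(add_one_plpgsql(g)) FROM generate_series(1, 100000) AS g;
  SELECT sum(add_one_pltcl(g))   FROM generate_series(1, 100000) AS g;

The same pattern extends to pltclu, plperl and friends, and calling each function once on a fresh connection also exposes per-session interpreter startup cost.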
On 12/27/2011 11:00 AM, Scott Marlowe wrote:
On Tue, Dec 27, 2011 at 8:33 AM, Claudio Freire wrote:
On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky wrote:
work_mem = 128MB (tried 257MB, didn't change anything)
This is probably your problem.
Without an EXPLAIN output, I cannot be sure, bu
On Tue, Dec 27, 2011 at 9:14 AM, Andrew Dunstan wrote:
> It depends on the workload. Your 16M setting would make many of my clients'
> systems slow to an absolute crawl for some queries, and they don't run into
> swap issues, because we've made educated guesses about usage patterns.
Exactly. I'v
Jim Crate writes:
> My question is why does it do a seq scan when it flattens this
> subquery into a JOIN?
Because it thinks there will be 3783 rows out of the msg scan, which if
true would make your desired nestloop join a serious loser. You need to
see about getting that estimate to be off by
PostgreSQL 9.0.2
Mac OS X Server 10.6.8
Autovacuum is on, and I have a script that runs vacuum analyze verbose every
night along with the backup.
I have a situation where I'm experiencing a seq scan on a table with almost 3M
rows when my condition is based on a subquery. A google search turned
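(The usual first step toward the misestimate Tom points at is to compare estimated and actual row counts in the plan; a minimal sketch with a stand-in query, since the original query isn't shown here.)

  EXPLAIN ANALYZE
  SELECT count(*) FROM generate_series(1, 100000) AS g WHERE g % 10 = 0;
  -- On each plan node, "rows=..." is the planner's estimate and
  -- "actual ... rows=..." is reality; a large mismatch on the subquery side
  -- (e.g. 3783 estimated vs. a handful actually returned) is what pushes the
  -- planner away from the desired nestloop and onto the seq scan.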
On Sat, Dec 24, 2011 at 12:22 PM, Michael Smolsky wrote:
> shared_buffers = 2GB (tried 8GB, didn't change anything)
> work_mem = 128MB (tried 257MB, didn't change anything)
As someone mentioned, lower is better here. 128M is quite high.
> effective_cache_size = 12GB (tried 2GB didn't change any
On Tue, Dec 27, 2011 at 1:00 PM, Scott Marlowe wrote:
> He can lower it for just that query but honestly, even on a machine
> with much more memory I'd never set it as high as he has it. On a
> busy machine with 128G RAM the max I ever had it set to was 16M, and
> that was high enough I kept a cl
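(A hedged sketch of adjusting work_mem "for just that query" rather than server-wide: SET LOCAL confines the override to one transaction; the size and the stand-in query are illustrative.)

  BEGIN;
  SET LOCAL work_mem = '16MB';       -- reverts automatically at COMMIT/ROLLBACK
  EXPLAIN ANALYZE
  SELECT g FROM generate_series(1, 100000) AS g ORDER BY g;   -- stand-in for the real query
  COMMIT;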
On Tue, Dec 27, 2011 at 8:33 AM, Claudio Freire wrote:
> On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky wrote:
>> work_mem = 128MB (tried 257MB, didn't change anything)
>
> This is probably your problem.
>
> Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the
> total amount o
On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky wrote:
> work_mem = 128MB (tried 257MB, didn't change anything)
This is probably your problem.
Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the
total amount of memory a query can use, it's the amount of memory it
can use for
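(To see the per-operation behavior being described here - each sort or hash node in a plan may use up to work_mem on its own - EXPLAIN ANALYZE reports whether a sort stayed in memory or spilled to disk; the sizes below are illustrative.)

  SET work_mem = '1MB';
  EXPLAIN ANALYZE SELECT g FROM generate_series(1, 500000) AS g ORDER BY g;
  -- look for "Sort Method: external merge  Disk: ..." (spilled)

  SET work_mem = '64MB';
  EXPLAIN ANALYZE SELECT g FROM generate_series(1, 500000) AS g ORDER BY g;
  -- now typically "Sort Method: quicksort  Memory: ..." (fit in work_mem)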
I am not using prepared statements for now :)
I just said that, probably, if I do use them, I will get rid of that extra
time since the plan will already be 'decided' in advance
Hello
2011/12/27 MirrorX :
> There are some performance issues on a server, and by searching in the logs I
> noticed that the phases of parse and bind take considerably more time than
> execute for most of the queries. I guess that the right thing to do in this
> case is to use functions or prepare
Thanks a lot for your answer :)
2) default_statistics_target is set to the default (100) and there are no special
per-column statistics
3) I will do that very soon
4) In absolute units I can see the same query having stats similar to these:
parse -> 600 ms
bind -> 300 ms
execute -> 50 ms
The query mention
Hello.
1. Planning time > execute time can happen normally for some
fast-executing queries, so it is not bad per se.
2. What are your statistics settings? They influence planning time. I
mean default_statistics_target and per-column SET STATISTICS.
3. Upgrade to 8.4.10; it's a quick upgrade (
The version of Postgres is 8.4.7 :)
There are some performance issues on a server, and by searching in the logs I
noticed that the phases of parse and bind take considerably more time than
execute for most of the queries. I guess that the right thing to do in this
case is to use functions or prepare
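(A minimal sketch of the prepared-statement route: the statement is parsed and planned when PREPARE runs, and later EXECUTE calls reuse that work; names and the stand-in query are illustrative. Prepared statements are per-session, so the saving applies only while the connection is reused.)

  PREPARE pick_rows (int) AS
      SELECT g FROM generate_series(1, 1000) AS g WHERE g < $1;

  EXECUTE pick_rows(10);     -- no repeated parse/plan cost
  EXECUTE pick_rows(500);

  DEALLOCATE pick_rows;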
Hello
2011/12/23 Igor Schtein :
> I'd like to find some measurements/figures of query preparation and planning
> time to justify the use of prepared statements and stored procedures.
> I know that complex queries have larger preparation time. Though, is it
> possible to explicitly measure the time
Hello,
I'm running a fairly complex query on my postgres-8.4.9 Ubuntu box. The box
has an 8-core CPU, 18 GB of RAM and no virtualization layer. The query takes many
hours to run. The query essentially involves a join of two large tables on a
common string column, but it also includes joins with oth
I'd like to find some measurements/figures of query preparation and
planning time to justify the use of prepared statements and stored
procedures.
I know that complex queries have larger preparation time. Though, is it
possible to explicitly measure the time the optimizer spends parsing and
plannin
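(One rough, client-side way to approximate this, assuming psql as the client: plain EXPLAIN parses and plans but does not execute, so with \timing its elapsed time is roughly the parse + plan cost; the query is a stand-in. Server-side, log_min_duration_statement = 0 with the extended query protocol logs parse, bind and execute durations separately, which is presumably how the timings earlier in this digest were obtained.)

  \timing on
  EXPLAIN SELECT count(*) FROM generate_series(1, 100000) AS g;          -- ~ parse + plan
  EXPLAIN ANALYZE SELECT count(*) FROM generate_series(1, 100000) AS g;  -- parse + plan + execute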