many WAL files are created. The amount of WAL between checkpoints
can vary. I don't have a good understanding of the interplay between
checkpoints and WAL.
I'd be grateful for any thoughts on how to improve this, and better control
the amount of WAL kept in pg_xlog.
Thank you,
Stefan
On 02.03.2017 02:06, Tom Lane wrote:
Stefan Andreatta writes:
The same anti-join using the text fields, however estimates just 1
resulting row, while there are still of course 9,999 of them:
=# explain analyze
select tmp_san_1.id
from tmp_san_1
left join tmp_san_2 on
r for
the join.
Thanks for any help,
Stefan
So, if I'm understanding you correctly, we're talking solely about
the following clause in the query you gave initially:
WHERE doc.date_last_updated >= date(now() - '171:00:00'::interval)
which initially was
WHERE documenttype = 4
and now is being replaced by a temporary (I'd say derived) column
WHERE
2015-08-31 21:46 GMT+02:00 twoflower wrote:
> I created a new boolean column and filled it for every row in DOCUMENT with
> *(doc.date_last_updated >= date(now() - '171:00:00'::interval))*, reanalyzed
> ...
... and you've put an index on that new boolean column (say "updated")?
CREATE INDEX index
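The truncated CREATE INDEX above might look like the following sketch (table and column names are taken from the quoted query; the index names are invented):

```sql
-- Index the new boolean column directly:
CREATE INDEX document_updated_idx ON document (updated);

-- Alternatively, a plain btree on the timestamp column avoids
-- maintaining a derived column at all: the planner evaluates
-- date(now() - '171:00:00'::interval) once at plan time, so an
-- ordinary index range scan applies:
CREATE INDEX document_last_updated_idx ON document (date_last_updated);
```

Note that `now()` is not immutable, so the interval comparison itself could not be used as a partial-index predicate; indexing the raw column is the usual approach.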
Kind Regards
Stefan
Cell : 072-380-1479
Desk : 087-577-7241
On 2014/09/15 03:25 PM, Kevin Grittner wrote:
> "Van Der Berg, Stefan" wrote:
>
>> I get a similar plan selected on the original query if I set
>> enable_seqscan to off. I much prefer the second result.
>> M
changing
the original query? Is there some column level setting I can set?
(BTW the tables are analyzed, and I currently have no special
settings/attributes set for any of the tables.)
--
Kind Regards
Stefan
Cell : 072-380-1479
Desk : 087-577-7241
To read FirstRand Bank's Disclaimer for t
clue, which parameter I have to adjust, to get a
query time like the example with 'enable_seqscan=off'.
Stefan
pd=> set enable_seqscan=off;
pd=> explain analyze select t.name from product p left join measurements m on
p.productid=m.productid inner join measurementstype t on
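A gentler alternative to `enable_seqscan=off` is to lower the random-I/O cost estimate for the session, which biases the planner toward index scans without forbidding sequential scans outright. The join condition to measurementstype is cut off above, so the one below is a placeholder:

```sql
-- default random_page_cost is 4.0; values near 1.0 suit mostly-cached data
SET random_page_cost = 1.5;

EXPLAIN ANALYZE
SELECT t.name
FROM product p
LEFT JOIN measurements m ON p.productid = m.productid
INNER JOIN measurementstype t ON m.typeid = t.typeid;  -- join column assumed
```

`ALTER DATABASE ... SET random_page_cost = 1.5;` would make the setting persistent without touching postgresql.conf.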
Hi Craig and Shawn
I fully agree with your argumentation.
Who's the elephant in the room who is reluctant to introduce explicit hints?
-S.
2014-04-14 17:35 GMT+02:00 Craig James :
> Shaun Thomas wrote:
>
>>
>>> these issues tend to get solved through optimization fences.
Reorganize a qu
error that I couldn't seem to get past, so I think it might be
invalid:
> ERROR: syntax error at or near "UNION"
> LINE 8: UNION (
> ^
So I landed on the version that I posted above, which seems to select the
same set in all of the cases that I tried.
An
re's no road map to introduce planner hinting (like in
EnterpriseDB or Ora)?
Regards, Stefan
2014-03-20 18:08 GMT+01:00 Tom Lane :
> Stefan Keller writes:
>> I'd like to know from the query planner which query plan alternatives
>> have been generated and rejected. Is thi
tual
time=0.007..0.015 rows=6 loops=2084)
Index Cond: (ancestor_key = collection_data.context_key)
Heap Fetches: 13007
Buffers: shared hit=14929
Total runtime: 76.431 ms
Why can't I get the Postgres 9.2.5 instance to use the optimal plan?
Thanks in advance!
/Stefan
--
-
Stefan Amshey
Hi,
I'd like to know from the query planner which query plan alternatives
have been generated and rejected. Is this possible?
--Stefan
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/
Hi Kevin
Well, you're right :-) But my use cases are un-specific "by design"
since I'm using FTS as a general purpose function.
So I still propose to enhance the planner too as Tom Lane and your
colleague suggest based on repeated similar complaints [1].
Your
has a text attribute which has
a length of more than 8K so it's obviously having to do with
detoasting.
But the thoughts about @@ operators together with this GIN index seem
also to be valid.
I hope this issue is being tracked in preparation for 9.3.
Regards, Stefan
2013/7/19 Marc Mamin
') query
> where query @@
> to_tsvector('pg_catalog.english',content);
... using default values for enable_seqscan and set random_page_cost.
Yours, S.
2013/7/19 Stefan Keller :
> Hi
>
> At 2013/2/8 I wrote:
>> I have problems with the performance of FTS in a query li
Hi Yuri and Radu-Stefan
I wouldn't give up on PostgreSQL too fast!
When looking at your query plan I wonder if one could reformulate the query
to compute the ST_DWithin first (assuming you have an index on the node
geometries!) before it filters the tags.
To investigate that you could formul
=> To me the planner should be updated to recognize immutable
plainto_tsquery() function in the WHERE clause and choose "Bitmap
Index Scan" at the first place.
What do you think?
Yours, Stefan
Lets look
CREATE INDEX nodes_tags_btree_tourist_idx ON nodes USING BTREE ((tags ? 'tourist'));
Do you think this could make a difference?
On Mon, Jul 8, 2013 at 1:27 PM, Richard Huxton wrote:
> On 08/07/13 10:20, Radu-Stefan Zugravu wrote:
>
>> Any improvement is welcomed. The overall performan
created indexes for the tags column? It takes some time to create them
back.
On Mon, Jul 8, 2013 at 11:53 AM, Richard Huxton wrote:
> On 08/07/13 09:31, Radu-Stefan Zugravu wrote:
>
>> Hi,
>> Thank you for your answer.
>> My EXPLAIN ANALYZE output can be found here:
Mon, Jul 8, 2013 at 10:44 AM, Richard Huxton wrote:
> On 07/07/13 08:28, Radu-Stefan Zugravu wrote:
>
>> Each node has a geometry column called geom and a hstore column
>> called tags. I need to extract nodes along a line that have certain
>> keys in the tags column. To do
wing query:
CREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);
After creating the index I searched again for nodes using the same first
query but there is no change in performance.
How can I properly use GIN and GIST to index tags column so I can faster
search for nodes that have a certain key in tags column?
Thank you,
Radu-Stefan
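A GIN index on an hstore column is only considered for WHERE clauses that use the operators its opclass supports (`?`, `?|`, `?&`, `@>`). A sketch using the `nodes` table from this thread (the `id` column is assumed):

```sql
CREATE INDEX nodes_tags_idx ON nodes USING GIN (tags);

-- This form can use the GIN index:
SELECT id FROM nodes WHERE tags ? 'tourism';

-- This form cannot, because -> is not an indexable operator for GIN,
-- which would explain the unchanged timings:
SELECT id FROM nodes WHERE tags -> 'tourism' IS NOT NULL;
```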
start/end date and a region, and table
promo2mission (each 1 to a dozen tuples).
* View all_errors (more than 20'000 tuples, based on table errors
without tuples from table fix)
* Table error_type (7 tuples)
Here's the EXPLAIN ANALYZE log: http://explain.depesz.com/s/tbF
Yours, Stefan
CTE
ne is overbudgeted, 4x the memory of the entire database
~4GB, and uses the PostgreSQL stock settings.
Stefan
Hi Jesper and Pavel
Thx for your hints.
I'm rather reluctant to do tuning with unwanted side effects. We'll see.
I have to setup my system and db again before I can try out your tricks.
Yours, Stefan
2013/2/8 Jesper Krogh :
> On 08/02/13 01:52, Stefan Keller wrote:
>>
>>
"@@" and GIN index) is an open issue (since
years?).
And I found a nice blog here [1] which uses 9.2/9.1 and proposes to
disable sequential table scans (SET enable_seqscan = off;). But this is
not an option for me since other queries still need seqscans.
Can anyone tell me if is on some agenda her
n our databases. And
considering the commonplace conditions leading to it, I would expect
many systems to be affected. But searching the forums and the web I
hardly found any references to it - which amazes me to no end.
Best Regards,
Stefan
On 12/30/2012 07:02 PM, Stefan Andreatta wrote:
On
On 12/29/2012 10:57 PM, Peter Geoghegan wrote:
On 29 December 2012 20:57, Stefan Andreatta wrote:
...
The general advice here is:
1) Increase default_statistics_target for the column.
I tried that, but to get good estimates under these circumstances, I
need to set the statistics_target
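For reference, both knobs mentioned here can be set per column; the table and column names below are hypothetical:

```sql
-- Raise the statistics sample for one column (default 100, max 10000):
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;

-- Since 9.0 the n_distinct estimate can also be overridden outright,
-- sidestepping the sampling problem (negative values mean a fraction
-- of the row count):
ALTER TABLE my_table ALTER COLUMN my_column SET (n_distinct = -0.05);

ANALYZE my_table;
```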
re are more than just this one memory-related value that need to be
changed for optimal performance. E.g. effective_cache_size can have a
direct effect on the use of nested loops. See:
http://www.postgresql.org/docs/9.2/static/runtime-config-query.html
Regards,
Stefan
On 12/29/2012 10:57 PM, Peter Geoghegan wrote:
On 29 December 2012 20:57, Stefan Andreatta wrote:
Now, the 2005 discussion goes into great detail on the advantages and
disadvantages of this algorithm, particularly when using small sample sizes,
and several alternatives are discussed. I do not
implemented.
Thanks for your help!
Stefan
*The Long Story:*
When Postgres collects statistics, it estimates the number of distinct
values for every column (see pg_stats.n_distinct). This is one important
source for the planner to determine the selectivity and hence can have
great influence
apt itself when partitions are attached or removed.
That's probably how Oracle resolves it; Oracle has supported global
indexes probably since version 8(!) [1]
Yours, S.
[1] http://www.oracle-base.com/articles/8i/partitioned-tables-and-indexes.php
2012/10/14 Jeff Janes :
> On Sat, Oct 13, 2012 at
Hi,
Given I have a large table implemented with partitions and need fast
access to a (primary) key value in a scenario where every minute
updates (inserts/updates/deletes) are coming in.
Now since PG does not allow any index (nor constraint) on "master"
table, I have a performance issue (and a po
takes longer than an hour, it delays the next
update.
Any ideas? Partitioning?
Yours, S.
2012/9/3 Ivan Voras :
> On 03/09/2012 13:03, Stefan Keller wrote:
>> Hi,
>>
>> I'm having performance issues with a simple table containing 'Nodes'
>> (points)
on-durable settings [1] I'd like to know what
choices I have to tune it while keeping the database productive:
cluster index? partition table? use tablespaces? reduce physical block size?
Stefan
[1] http://www.postgresql.org/docs/9.1/static/non-durability.html
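The non-durable settings on the cited page boil down to a postgresql.conf fragment like this (a sketch; each line trades crash safety for speed, acceptable only because the database is rebuilt from scratch anyway):

```ini
fsync = off                  # skip waiting for WAL to reach disk
synchronous_commit = off     # report commit before the WAL flush
full_page_writes = off       # no torn-page protection
checkpoint_segments = 32     # fewer, larger checkpoints (pre-9.5 name)
```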
Hi
2012/8/8 Jeff Janes :
> On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller wrote:
>> Hi Craig
>>
>> Clever proposal!
>> I slightly tried to adapt it to the hstore involved.
>> Now I'm having a weird problem that PG says that "relation 'p' doe
Hi Craig
Clever proposal!
I slightly tried to adapt it to the hstore involved.
Now I'm having a weird problem that PG says that "relation 'p' does not exist".
Why does PG recognize table b in the subquery but not table p?
Any ideas?
-- Stefan
SELECT b.way AS building_g
Your proposal lacks the requirement that it's the same building from
where pharmacies and schools are reachable.
But I'll think about it.
Yours, S.
2012/8/7 Tomas Vondra :
> On 7 Srpen 2012, 14:01, Stefan Keller wrote:
>> Hi
>>
>> I have an interesting query to be optimi
Hi
I have an interesting query to be optimized related to this one [1].
The query definition is: Select all buildings that have more than 1
pharmacies and more than 1 schools within a radius of 1000m.
The problem is that I think that this query is inherently O(n^2). In
fact the solution I propos
2012/3/1 Jeff Janes :
> On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller wrote:
>> 2012/2/28 Claudio Freire :
>>>
>>> In the OP, you say "There is enough main memory to hold all table
>>> contents.". I'm assuming, there you refer to your curre
2012/2/29 Stefan Keller :
> 2012/2/29 Jeff Janes :
>>> It's quite possible the vacuum full is thrashing your disk cache due
>>> to maintenance_work_mem. You can overcome this issue with the tar
>>> trick, which is more easily performed as:
>>>
>>
many implementations, that will not work. tar detects the
> output is going to the bit bucket, and so doesn't bother to actually
> read the data.
Right.
But what about the commands cp $PG_DATA/base /dev/null or cat
$PG_DATA/base > /dev/null ?
They seem to do something.
-Ste
2012/2/28 Claudio Freire :
> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller wrote:
>> P.S. And yes, the database is aka 'read-only' and truncated and
>> re-populated from scratch every night. fsync is off so I don't care
>> about ACID. After the inde
lity which I'll try out.
But what I'm finally after is a solution where records don't get
pushed back to disk a.s.a.p. but rather are held in memory as long as
possible, assuming that there is enough memory.
I suspect that currently there is quite some overhead because of that
(besides disk-oriented structures).
-Stefan
n they fit into RAM).
Since I have a "read-only" database there's no WAL and locking needed.
But as soon as we allow writes I realize that the in-memory feature
needs to be coupled with other enhancements like replication (which
somehow would avoid WAL).
Yours, Stefan
2012/2/26 Andy Colson wrote:
> On 02/25/2012 06:16 PM, Stefan Keller wrote:
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>> 2. Are there any hints on how to tell Postgres to read in all table
>> contents into memory?
>>
>> Yours, Stefan
>
Fileystem (tmpfs).
Still, wouldn't it be more flexible if I could dynamically instruct
PostgreSQL to behave like an in-memory database?
Yours, Stefan
tells me that the indexes are used [2].
The problem is that the initial queries are too slow - and there is no
second chance. I do have to trash the buffer every night. There is
enough main memory to hold all table contents.
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are the
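Two common answers, hedged by version: the pg_prewarm extension arrived only in 9.4 (after this thread); on earlier releases a full sequential read at least populates the OS page cache. The table name below is hypothetical:

```sql
-- 9.4+: load a relation into shared buffers explicitly
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('osm_point');

-- Older releases: a crude warm-up via a full scan
SELECT count(*) FROM osm_point;
```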
Hi Stephen
Thanks for your answer and hints.
2011/10/24 Stephen Frost wrote:
> * Stefan Keller (sfkel...@gmail.com) wrote:
>> Adding more memory (say to total of 32 GB) would only postpone the problem.
> Erm, seems like you're jumping to conclusions here...
Sorry. I actual
at could I do to speed up such queries (first time, i.e.
without caching) besides simply adding more memory?
Yours, Stefan
Hi,
Sorry if this is an odd question:
I assume that Postgres indexes don't store records but only pointers
to the data.
This means, that there is always an additional access needed (real table I/O).
Would an index containing data records make sense?
Stefan
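That is essentially what index-only scans provide, added in 9.2 (after this mail was written): when every referenced column is in the index and the page is marked all-visible, the heap access is skipped. A sketch with hypothetical names:

```sql
CREATE INDEX orders_cust_date_idx ON orders (customer_id, order_date);

-- Shown as "Index Only Scan" in EXPLAIN once VACUUM has updated
-- the visibility map:
SELECT customer_id, order_date FROM orders WHERE customer_id = 42;
```

PostgreSQL 11 later added `INCLUDE` columns for carrying non-key payload in an index.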
operations.
Stefan
P.S. Disclaimer (referring to my other thread about Hash): I'm not a
btree opposer :-> I'm just evaluating index alternatives.
collected this to encourage ourselves that enhancing hash
indexes could be worthwhile.
Stefan
2011/9/18 Kevin Grittner :
> Stefan Keller wrote:
>
>> It's hard for me to imagine that btree is superior for all the
>> issues mentioned before.
>
> It would be great i
xploiting all advantages of a separate hash
index, wouldn't it?
Stefan
2011/9/18 Merlin Moncure :
> On Sat, Sep 17, 2011 at 4:48 PM, Jeff Janes wrote:
>> On Tue, Sep 13, 2011 at 5:04 PM, Peter Geoghegan
>> wrote:
>>> On 14 September 2011 00:04, Stefan Keller wrot
roposal
* more... ?
Yours, Stefan
e's doubtless room for more
> improvements, so why are the knives out?
No knives from my side. Sorry for the exaggerated subject title.
I'm also in favor of an enhanced hash index for cases where only "="
tests are processed and where only few inserts/deletes will occur.
Stefan
How much work (in man-days) do you estimate this would mean for
someone who can program but has to learn PG internals first?
Stefan
2011/9/14 Tom Lane :
> Peter Geoghegan writes:
>> On 14 September 2011 00:04, Stefan Keller wrote:
>>> Has this been verified on a rec
ould I open a ticket?
Stefan
2011/9/14 Tom Lane :
> Peter Geoghegan writes:
>> On 14 September 2011 00:04, Stefan Keller wrote:
>>> Has this been verified on a recent release? I can't believe that hash
>>> performs so bad over all these points. Theory tells me other
ypes.html )
Has this been verified on a recent release? I can't believe that hash
performs so badly on all these points. Theory tells me otherwise, and
http://en.wikipedia.org/wiki/Hash_table seems to be a success.
Are there any plans to give hash index another chance (or to bury it
with a reas
ostgresql.org/docs/current/interactive/plpgsql-control-structures.html
)
So the doc isn't totally explicit about this. But whatever: what would
be the function of a subtransaction? To give the possibility to
recover and continue within the surrounding transaction?
Stefan
2011/9/13 Marti Rauds
Shaun,
2011/9/2 Shaun Thomas :
> Ironically, this is actually the topic of my presentation at Postgres Open.
Do you think my problem would now be solved with NVRAM PCI card?
Stefan
-- Forwarded message --
From: Stefan Keller
Date: 2011/9/3
Subject: Re: [PERFORM] Summar
2011/9/3 Jesper Krogh :
> On 2011-09-03 00:04, Stefan Keller wrote:
> It's not that hard to figure out.. take some of your "typical" queries.
> say the one above.. Change the search-term to something "you'd expect
> the user to enter in a minute, but hasn'
2011/9/2 Scott Marlowe :
> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller wrote:
> How big is your DB?
> What kind of reads are most common, random access or sequential?
> How big of a dataset do you pull out at once with a query.
>
> SSDs are usually not a big winner for r
You mean something like "Unlogged Tables" in PostgreSQL 9.1 (=
in-memory database) or simply a large ramdisk?
Yours, Stefan
2011/9/1 Jim Nasby :
> On Aug 30, 2011, at 12:23 PM, Stefan Keller wrote:
>> I'm looking for summaries (or best practices) on SSD usage with Postg
Hi,
I'm looking for summaries (or best practices) on SSD usage with PostgreSQL.
My use case is mainly a "read-only" database.
Are there any around?
Yours, Stefan
tore are
stored in order of (keylength,key) with the key comparison done
bytewise (not locale-dependent). See e.g. function hstoreUniquePairs
in http://doxygen.postgresql.org/ . This ordered property is being
used by some hstore functions but not all - and I'm still wondering
why.
Yours, S
wed by the core geometric data types!
Why names? Why not rather 'operators' or 'functions'?
What does this "reversed from the convention" mean concretely?
Yours, Stefan
in application where users can enter arbitrary
queries.
Yours, Stefan
2011/5/25 Pierre C :
>> You wrote
>>>
>>> Try to create a btree index on "(bench_hstore->bench_id) WHERE
>>> (bench_hstore->bench_id) IS NOT NULL".
>>
>> What do you
:
> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );
So I'm doing something like:
CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);
Stefan
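Pierre C's earlier suggestion of a btree index on one key, combined with the GIN index above, might look like the following (adapting to the `kvps` column from the quoted DDL; index names are invented):

```sql
CREATE TABLE myhstore (id bigint PRIMARY KEY, kvps hstore NOT NULL);

-- Key-existence and containment queries (?, ?|, ?&, @>):
CREATE INDEX myhstore_kvps_gin_idx ON myhstore USING gin (kvps);

-- Equality/range queries on the value of one well-known key:
CREATE INDEX myhstore_bench_id_idx ON myhstore ((kvps -> 'bench_id'))
  WHERE (kvps -> 'bench_id') IS NOT NULL;
```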
2011/5/23 Pierre C :
>
>> Hi Merlin
>>
>> The analyze command gave the following result:
>
ase is coming from OpenStreetMap
(http://wiki.openstreetmap.org/wiki/Database_schema ).
Yours, Stefan
2011/5/17 Jim Nasby :
> On May 16, 2011, at 8:47 AM, Merlin Moncure wrote:
>> On Sat, May 14, 2011 at 5:10 AM, Stefan Keller wrote:
>>> Hi,
>>>
>>> I am condu
IMARY KEY, obj hstore NOT NULL );
-- with GIST index on obj
Does anyone have experience with that?
Yours, Stefan
ng joins, sorts, equality and range (sub-)queries...
=> What are the suggested postgresql.conf and session parameters for
such a "read-only database" to "Whac-A-Mole" (i.e. to consider :->)?
Stefan
2011/4/23 Robert Haas :
> On Apr 18, 2011, at 6:08 PM, Stefan Kelle
on speeding up/optimizing such database server?
Yours, Stefan
ng maintenance operations.
I find it quite strange that people seem to be surprised by Dell now
starting with that as well (I actually find it really surprising they
have not done that before).
Stefan
tream is releasing a security update, I'd like to be able to find
new packages as upstream announces updated sets. Yes, I'm talking about
PostgreSQL here.
This is exactly what Debian has been doing for a while now (at least for PostgreSQL)..
I.e.: Debian Etch aka has 8.1.18 and Debian Lenny has 8.
Stefan Kaltenbrunner wrote:
Devrim GÜNDÜZ wrote:
On Mon, 2009-10-05 at 12:07 +0200, Jean-Michel Pouré wrote:
Go for Debian:
* It is a free community, very active.
Well, we need to state that this is not a unique feature.
* It is guaranteed to be upgradable.
Depends. I had lots of issues
Scott Carey wrote:
On 7/30/09 11:24 AM, "Stefan Kaltenbrunner" wrote:
Kevin Grittner wrote:
Tom Lane wrote:
"Kevin Grittner" writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to u
n) I would recommend to not use it at all.
Stefan
single threaded restore in a pipe: 188min
custom dump to file + parallel restore: 179min
this is without compression; with the defaults, custom dump + parallel
restore is way slower than the simple approach on reasonable hardware.
Stefan
Greg Smith wrote:
On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:
Well the real problem is that pgbench itself does not scale too well
to lots of concurrent connections and/or to high transaction rates so
it seriously skews the result.
Sure, but that's what the multi-threaded pgbench
at are closer to what
pgbench does in the lab)
Stefan
53 1.7055 InputFunctionCall
37050 1.6433 LWLockAcquire
36853 1.6346 BufferGetBlockNumber
36428 1.6157 heap_compute_data_size
33818 1.5000 DetermineTimeZoneOffset
33468 1.4844 DecodeTime
30896 1.3703 tm2timestamp
30888 1.3700 GetCurrentTransactionId
Stefan
Greg Stark wrote:
On Mon, May 11, 2009 at 5:05 PM, Stefan Kaltenbrunner
wrote:
Good to know!!! I imagine that on a PS3 it would be _really_ fast... :-)
well not really - while it is fairly easy to get postgresql running on a PS3
it is not a fast platform. While the main CPU there is a pretty
hence what prompted my question on how the benchmark was operating.
For any kind of workloads that contain frequent connection
establishments one wants to use a connection pooler like pgbouncer(as
said elsewhere in the thread already).
Stefan
Dimitri wrote:
Hi Stefan,
sorry, I did not have time to bring all the details into the toolkit -
but at least I published it instead of telling a "nice story" about it :-)
fair point and appreciated. But it seems important that benchmarking
results can be verified by others as well...
e information on how the toolkit talks to the
database - some of the binaries seem to contain a static copy of libpq
or such?
* how many queries per session is the toolkit actually using - some
earlier comments seem to imply you are doing a connect/disconnect cycle
for every query, is that
t only has 256MB of Ram and a single SATA disk
available(though you could add some USB disks).
Stefan
that though.
Stefan
l variations a few dozen times both in cached and
uncached state and you should see the difference getting leveled out.
Stefan
maintenance_work_mem set that large? - try reducing it to, say, 128MB for
a start and try again.
Stefan
m)
Total runtime: *109*ms
So in both queries there are AND conditions: two in the
first query and one in the second. Unfortunately I am not
an expert at reading the Postgres query plan; basically I am wondering why in
the first query a second index sca
I'll try to get a
bit more money from the management and build a RAID 6 with 12 disks.
Here is a good SATA controller for 4/8/12/16 disks:
http://www.tekram.com/product2/product_detail.asp?pid=51
Stefan
d benefits from
HOT) - it is a fairly neat improvement though ...
Stefan
---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
http://www.postgresql.org/about/donate
Joshua D. Drake wrote:
> Stefan Kaltenbrunner wrote:
>> Joshua D. Drake wrote:
>>> Gregory Stark wrote:
>>>> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>>>>> You're right, but the distinction is a small one. What are the chances
are on the same
UPS they are effectively on the same power bus ...
If the UPS fails (or the generator does not kick in, which happens way
more often than people would believe) they could still fail at the very
same time.
Stefan
How is that proposal different from what got implemented with:
http://archives.postgresql.org/pgsql-committers/2007-05/msg00315.php
Stefan
What are your settings for:
effective_cache_size
and
random_page_cost
Stefan
tgresql.org/docs/faqs.FAQ_Solaris.html):
"Do not use any flags that modify behavior of floating point operations
and errno processing (e.g.,-fast). These flags could raise some
nonstandard PostgreSQL behavior for example in the date/time computing."
Stefan
--
that the configuration file it generates
seems to look like one for PostgreSQL 7.x or something - I think we
should just include the specific parameters to change.
Stefan