has quite high impact on the
surrounding updates. (9.2, moving to 9.3 really soon, and looking forward to
this enhancement.)
Jesper
it if the referenced tuple is modified
> on any indexed column, not just those that are actually used in
> foreign keys. Maybe this case would be sped up if we optimized that.
Even if it is a GIN index that is being modified? Seems like a harsh
limitation to me.
Jesper
ation that the
> probe is meant to fix.
Apparently it is waiting for locks; can't the check be made in a
"non-blocking" way, so that if it ends up waiting for a lock it just
assumes non-visible and moves on to the next one?
This stuff is a 9.2 feature, right? What was the original
ast-query-PG-9-2-td5769363.html
>
> The issues are:
>
> 1) The planner actually queries the relation to find the end points of the
> variable ranges, rather than using potentially out-of-date statistics.
>
In my app I would prefer potentially out-of-date statistics instead.
Jesper
just very
interested in what explain then does, if it is not only the time for the
query plan. When I did try the "PREPARE / EXECUTE" dance as you
described, I didn't see the PREPARE step take time, which seems to be
consistent with the planning time being in the EXECUTE step, according
to the documentation.
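(For reference, a minimal sketch of that dance in psql, with made-up table
and parameter names; with \timing on, the planning cost should then show up
on the first EXECUTE rather than on the PREPARE:

  \timing on
  PREPARE q(int) AS SELECT * FROM some_table WHERE id = $1;  -- returns quickly
  EXECUTE q(42);  -- first execution carries the planning time
)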
--
Jesper
-> Index Only Scan using tablepro_seqid_idx on tablepro
(cost=0.00..2.81 rows=1 width=4)
Index Cond: (tablepro_id = table.id)
(7 rows)
Time: 10458.404 ms
The query gives 4 rows out of 50,000,000, so the query plan is actually
correct and as expected.
Any suggestions?
Jesper
exings shouldn't be necessary. Autovacuum
has built-in logic to sleep in between operations in order to reduce the
IO load on your system for the benefit of concurrent users. The approach
of duplicate indices will pull all the resources it can get, and
concurrent users may suffer while you d
en you
doubled the amount of memory
in a mainly memory-cached database the performance is extremely sensitive to
memory speed.
Jesper
e column and
re-analyze, see what that gives.
I have also been playing with the cost numbers in order to get it to favour
an index scan more often, that is, lowering random_page_cost to be close to
seq_page_cost. Dependent on your system, the amount of memory, etc.,
this can have negative side-
st that instead
then you'll allow PG to gather statistics on the column and make the
query planner act according to that.
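(A sketch of what I mean, with made-up table/column names; bump the
per-column statistics target and re-analyze:

  ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
  ANALYZE some_table;
)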
Jesper
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
y the fast bitmap heap scan instead of the slow seq scan?
The big hammer is: "set enable_seqscan = off", but if you tell us which PG
version you're on there may be something to do. I suggest you start by
bumping the statistics target for the column to 1 and running analyze to
see what that changes.
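(For debugging only, something along these lines shows whether the planner
can produce the bitmap plan at all; the table and predicate are made up:

  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col = 42;
  RESET enable_seqscan;
)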
--
Jesper
everything.
http://www.postgresql.org/docs/9.1/static/sql-altertable.html
Let us know what you get.
Jesper
ving the fts index next to
all the
other data saves a significant amount of time in the application, both in
terms of development and maintenance (simpler, easier and more
manageable).
--
Jesper
st installing that in the
maintenance window would allow the system to self-heal over time.
If the maintenance window allows for more cleanup, then manually
do some deletions. Now the black box is self-healing.
--
Jesper
umn
is not calculated correctly. This is completely parallel with
http://archives.postgresql.org/pgsql-hackers/2011-11/msg01754.php
Try raising the cost for ts_match_vq(tsvector,tsquery); that helps a bit, but
it's hard to get the cost high enough.
Raising the statistics target helps too.
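(Concretely, the ALTER FUNCTION is the same command that worked for me in an
earlier post; the table/column names for the statistics part are made up:

  ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 500;
  ALTER TABLE some_table ALTER COLUMN body_fts SET STATISTICS 1000;
  ANALYZE some_table;
)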
--
Jesper
ich also may benefit the planning:
http://www.postgresql.org/docs/9.1/static/release-9-1.html
Improve GIN index scan cost estimation (Teodor Sigaev)
Jesper
0ms then
I'd start to investigate if query-plans are correct .. and so on..
The above numbers are "raw data" size and not how PG uses them, or?
And you haven't told us anything about the size of your current system.
Jesper
just put more memory in the server?
It'll be a lot cheaper than SSDs.
It is "really expensive" to go over 512GB of memory, and the performance
regression for
just hitting disk in a system where you assume everything is in memory is
really huge. SSD makes the "edge" be a bi
d if you need those the SAN is a good way to go, but
they do come with a huge price tag.
Jesper
patches apply quite cleanly to 9.0 as far
as I remember.
--
Jesper
n every now and then / 15-30 minutes)
would certainly do the trick. Not pretty, but we're solving problems, not
talking about beauty. If you can ensure sufficient memory then it shouldn't
cause any problems. If you graph the timing of the above command you should
even be able to see when
ake sense for a PostgreSQL server, performance wise?
I have one CacheCade setup... not a huge benefit, but it seems
measurable (though really hard to test). Compared to a full
SSD setup I wouldn't consider it at all.
--
Jesper
case which works perfectly well:
explain analyze select max(id) from appqosdata.tcpsessions;
Typically this is due to a "batch load" and failing to run "analyze"
manually afterwards. Is this the case?
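(If so, a manual

  ANALYZE appqosdata.tcpsessions;

right after the load should fix the estimates until autovacuum catches up.)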
--
Jesper
On 2011-05-16 06:41, Jesper Krogh wrote:
On 2011-05-16 03:18, Greg Smith wrote:
You can't do it in real-time. You don't necessarily want that even
if it were possible; too many possibilities for nasty feedback
loops where you always favor using some marginal index that happens
numbers out
of it.
--
Jesper
red among other systems). Examples
are Dell MD1200/1220 or similar.
--
Jesper
what your end looks like.
--
Jesper
>
>
> How do DB folks do this with small maintenance windows? This is for a
> very high traffic website so it's beginning to get embarrassing.
Normally there is no need to issue REINDEX. What's your reason for the need?
Jesper
>
numbers to get exact nowadays).
--
Jesper
, 4 socket even worse.
So more sockets only begin to pay off when you can actually
use the CPUs, or add in even more memory to keep your database
from going to disk due to size.
--
Jesper
topic slightly.. Has anyone opinions/experience with:
http://www.ocztechnology.com/ocz-z-drive-r2-p88-pci-express-ssd.html
They seem to be "like" the FusionIO drives, just quite a lot cheaper;
I wonder what the state of those 512MB is in case of a power loss.
--
Jesper
88ELP and a HP D2700 enclosure.
Works flawlessly; the only bad thing (which actually is pretty grave)
is that the controller mis-numbers the slots in the enclosure, so
you'll have to have the "mapping" drawn on paper next to the
enclosure to replace the correct disk.
--
Jesper
ops. That is the
current trade-off.
That's also my experience.
--
Jesper
Can you elaborate a bit more?
--
Jesper
y significant number
of rotating drives. (a good backup plan with full WAL-log to a second
system as an example).
--
Jesper
;
mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;
Should be written as:
select * from directory where __mod_cons_hash(id_user,4) = 4%4;
Then it should just work.
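(The reason, as far as I understand it: constraint exclusion compares the
query's WHERE clause against the partitions' CHECK constraints more or less
syntactically, so the qual has to contain the same expression. A sketch of
the kind of child-table constraint I assume is in place:

  CREATE TABLE directory_p0 (
    CHECK (__mod_cons_hash(id_user, 4) = 0)
  ) INHERITS (directory);
)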
--
Jesper
On 2010-10-28 15:13, Merlin Moncure wrote:
On Wed, Oct 27, 2010 at 3:47 PM, Jesper Krogh wrote:
On 2010-10-27 20:51, Merlin Moncure wrote:
Yes, I am quite aware of how the o/s page cache works. All else being
equal, a more compact database obviously would be preferred. However
ql-hack...@postgresql.org/msg159726.html
--
Jesper
your statements are true.
--
Jesper
t seem to be in that category..
... All given my total lack of insight into ZFS.
--
Jesper
lesystem are you using? Readahead?
Can you try to check the filesystem fragmentation of the table using filefrag?
--
Jesper
t it coordinated by the PG project itself.
But
I can see that it is really hard to do that kind of stuff. And you would
still
face the challenge of who should end up doing the thing.
Jesper .. dropped Joshua Drake on CC, he might have given all of this some
seconds of thought already.
--
Jes
onvince my boss to chip in... but how do we get the task
up there.. should we find someone to give an estimate first?
--
Jesper
ill a huge task though.
--
Jesper
nd up having no avg tuple size less than 100 bytes.
.. without having complete insight.. a visibility map that could be used in
conjunction with indices would solve that. What the cost would be
of maintaining it is also a factor.
Jesper
and for small systems it would only fill 10% more of diskspace... ..
.. last night I spent an hour looking for where it's done but couldn't
find the
source file where extension of an existing relation takes place.. can
someone give directions?
--
Jesper
gresql.org/action/patch_view?id=350
http://www.sai.msu.su/~megera/wiki/knngist
--
Jesper
"ANALYZE";
--
Jesper
pg_start_backup() before you did your base backup?
--
Jesper
ql.org/pgsql-hackers/2010-06/msg01438.php
(at least as I see it, but I'm fully aware that there is stuff I don't
know of)
I don't think a built-in connection pooler (or similar) would in any
way limit the actions and abilities of an external one?
* Both numbers wildly guessed..
--
Jesper
enefit
for you.
--
Jesper
argest consequence I can see at the moment is that when I get a
full vacuum (for preventing transaction-id wraparound) it would be
run with the same aggressive settings, thus giving a real performance
hit in that situation.
Has anyone tried to do something similar? What is your experience?
Is the idea totally
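(The kind of per-table override I have in mind, as a sketch with a made-up
table name:

  ALTER TABLE queue_table SET (autovacuum_vacuum_cost_delay = 0,
                               autovacuum_vacuum_scale_factor = 0.01);
)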
cost" so if you run it a lot you might
benefit from
a prepared statement.
--
Jesper
On 2010-05-30 20:34, Tom Lane wrote:
Jesper Krogh writes:
testdb=# set seq_page_cost = 0.1;
SET
testdb=# set random_page_cost = 0.1;
SET
Well, hmm, I really doubt that that represents reality either. A page
access is by no means "free" even when the page is alread
e query in fully cached mode (two
runs). The bitmap heap scan
is still hugely favorable in actual runtime (which isn't that much of a
surprise), but it seems strange that the
index scan is still favored in the cost calculations?
I have tried to alter the cost of ts_match_vq but even setting it
Option 2:
App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and
2x 146GB 15k RPM SAS (RAID1) disks
You didn't mention your dataset size, but I think the second option would
be preferable in most situations since it gives more of the OS memory
for disk caching. 12 GB vs 4 GB for
lly scan a table of nearly 2
million rows?! I don't see how that makes sense irrespective of the
query being LIMITed.
If it matters setting enable_seqscan OFF does not impact the results.
No, because you end up with index scans on non-GIN indexes in that
situation.. so turning seqscan off has no effect.
--
Jesper
On 2010-04-09 20:22, Greg Smith wrote:
Jesper Krogh wrote:
I've spent quite some hours googling today. Am I totally wrong if the:
HP MSA-20/30/70 and Sun Oracle J4200's:
https://shop.sun.com/store/product/53a01251-2fce-11dc-9482-080020a9ed93
are of the same type just from "major&
On 2010-04-09 17:27, Greg Smith wrote:
Jesper Krogh wrote:
Can someone shed "simple" light on an extremely simple question.
How do you physically get 48 drives attached to an LSI that claims to
only have 2 internal and 2 external ports?
(the controller claims to support up to
looking for a well-performing controller
with BBWC for the setup. So I was looking for something like the LSI888ELP.
--
Jesper
wn PG using this patch to toy around with criteria
to send the "less frequently used data" to a TOAST table.
http://article.gmane.org/gmane.comp.db.postgresql.devel.general/135158/match=
Google "vertical partition" for more, this is basically what it is.
(I belive this could ben
te-back.
Hi Kevin.
Nice report, but just a few questions.
Sorry if it is obvious.. but what filesystem/OS are you using, and do you
have BBU writeback on the main data catalog also?
Jesper
.
> Hmm.. and I wonder if putting evalue into the criteria to cluster
> the tables too (i.e. cluster on id1,evalue) if you could make it
> so the limit finds the right 100 evalues first for each table
I didn't cluster it, since clustering "locks everything".
--
J
ess:
Does a "bitmap index scan" support ordering and limit ?
Does a "multicolummn gist index" support ordering and limit ?
Have I missed anything that can hugely speed up fetching these (typically
100 rows) from the database.
--
Jesper
Greg Williamson wrote:
> Jesper --
>
> I apologize for top-quoting -- a challenged reader.
>
> This doesn't directly address your question, but I can't help but
> notice that the estimates for rows is _wildly_ off the actual number
> in each and every query. Ho
ndex Scan using funcid_prority_idx2 on job
(cost=0.00..7959152.95 rows=3962674 width=6) (actual time=0.054..0.640
rows=10 loops=1)
Index Cond: (funcid = 3)
Total runtime: 0.687 ms
(4 rows)
So what I see is that "top 10" takes < 1ms, top 50 takes over 500 times
more, and top 100
type
of query.
As far as I can see the only consequence of simply not removing stop-words
at all is a (fairly small) increase in index size. It seems to me that
stop-words were invented when it was hard to get more than 2GB of memory
into a computer, to get the index size reduced to a size that better
could f
cally be stored in
a TOAST table and compressed. Search the manual for toast.
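(Per-column storage can also be steered explicitly; a sketch with made-up
table/column names; EXTENDED means "compress, and move out of line if
needed", and is the default for text columns:

  ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTENDED;
)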
--
Jesper
commonterm60');
id
(0 rows)
Time: 17.006 ms
and x17
Just trying to say that the body of the problem isn't a discussion about
stop-words.
That being said, if you coin the term "stopword" to mean "any term that
exists in all or close to all documents" then the w
rence of x300 in execution time (grows with
document-base size).
This can now be reproduced using:
* http://krogh.cc/~jesper/fts-queryplan.pl and
http://krogh.cc/~jesper/words.txt
It builds up a table with 200,000 documents where "commonterm" exists in
all of them. "nonexistingterm&
> On Mon, Oct 26, 2009 at 4:02 PM, Jesper Krogh wrote:
>> Given that the seq-scan have to visit 50K row to create the result and
>> the bitmap heap scan only have to visit 40K (but search the index) we
>> would expect the seq-scan to be at most 25% more expensive than th
he
actual number it seems to be preferable. That's about query planning;
my main reason for posting was the actual run time.
> By the way, for the 8.4 test I modifed the loader script so it wouldn't
> take quite so painfully long to run second time 'round. I turned
> autoco
Craig Ringer wrote:
> On Tue, 2009-10-27 at 06:08 +0100, Jesper Krogh wrote:
>
>>> You should probably re-generate your random value for each call rather
>>> than store it. Currently, every document with commonterm20 is guaranteed
>>> to also have commonterm40,
Craig Ringer wrote:
> On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:
>
>> Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.
>>
>> The dataset consists of words randomized, but .. all records contains
>> "commonterm", ar
Hi.
I'm currently trying to figure out why the tsearch performance seems to
vary a lot between different queryplans. I have created a sample dataset
that sort of resembles the data I have to work on.
The script that builds the dataset is at:
http://krogh.cc/~jesper/build-test.pl
and
Tom Lane wrote:
> Jesper Krogh writes:
>> Is is possible to manually set the cost for the @@ operator?
>
> You want to set the cost for the underlying function.
alter function ts_match_vq(tsvector,tsquery) cost 500
seems to change my test-queries in a very positive way (e.g. re
Scott Marlowe wrote:
> On Fri, Oct 23, 2009 at 2:32 PM, Jesper Krogh wrote:
>> Tom Lane wrote:
>>> Jesper Krogh writes:
>>>> Tom Lane wrote:
>>>>> ... There's something strange about your tsvector index. Maybe
>>>>> it's re
Tom Lane wrote:
> Jesper Krogh writes:
>> Tom Lane wrote:
>>> ... There's something strange about your tsvector index. Maybe
>>> it's really huge because the documents are huge?
>
>> huge is a relative term, but length(ts_vector(body)) is about 200
Tom Lane wrote:
> Jesper Krogh writes:
>> It seems to me that the row estimates on a ts_vector search is a bit on
>> the low side for terms that is not in th MCV-list in pg_stats:
>
> tsvector has its own selectivity estimator that's not like plain scalar
> equ
rare-term. The above
algorithm doesn't give me the roughly 33 rows, so can anyone briefly
describe the changes to this algorithm when using tsvectors?
Thanks.
--
Jesper
t=0.00..1023249.39 rows=5509293 width=4)
Filter: (ftsbody_body_fts @@ to_tsquery('reallyrare |
reallycommon'::text))
(2 rows)
> 2. A variant to_tsquery_with_sorting() which would take the column-name
> or something and look up the stats to work against.
Does the above not seem like it's the
> On Fri, 2009-10-23 at 07:18 +0200, Jesper Krogh wrote:
>> > In effect, what you want are words that aren't searched (or stored) in
>> > the index, but are included in the tsvector (so the RECHECK still
>> > works). That sounds like it would solve your proble
Jeff Davis wrote:
> On Thu, 2009-10-22 at 18:28 +0200, Jesper Krogh wrote:
>> I somehow would expect the index-search to take advantage of the MCV's
>> informations in the statistics that sort of translate it into a search
>> and post-filtering (as PG's queryplanne
Scan on ftsbody_tfs_idx
(cost=0.00..100.42 rows=1 width=0) (actual time=1508.998..1508.998
rows=1 loops=1)
Index Cond: (ftsbody_body_fts @@ to_tsquery('TERM1
& TERM2 & TERM3 & TERM4 & TERM5'::text))
Total runtime: 1509.109 ms
(9 rows)
Robert Haas wrote:
> On Wed, Oct 21, 2009 at 2:35 PM, Tom Lane wrote:
>> Jesper Krogh writes:
>>> What I seems to miss a way to make sure som "background" application is
>>> the one getting the penalty, so a random user doing a single insert
>>>
anslate into 180,000 documents in
my system?
What I seem to miss is a way to make sure some "background" application is
the one getting the penalty, so a random user doing a single insert
won't get stuck. Is that doable?
It also seems to lock out other inserts while being in this state.
m and imposing an IO-wait load of 1 CPU.
Can I do something to prevent this from happening? Is it "by design"?
--
Jesper
Tom Lane wrote:
> Jesper Krogh writes:
>> "commonterm" matches 37K of the 50K documents (majority), but the query
>> plan is "odd" in my eyes.
>
>> * Why does it mis-guess the cost of a Seq Scan on textbody so much?
>
> The cost looks about
untime: 47.634 ms
(9 rows)
To me it seems like the query planner could do a better job?
On "rare" terms everything seems to work excellently.
N.B.: looks a lot like this:
http://archives.postgresql.org/pgsql-performance/2009-07/msg00190.php
--
Jesper
tter?
>>
>> show us explain from the query(s).
> use select * from pg_stat_activity to find out the state query is in, and
> perhaps which one of the queries it really is.
I'm actively monitoring pg_stat_activity for potential problems but the
thread is spending most of the
pen (with a lot of changes held in it) has an
impact on the general performance. Even without touching the same records.
>> What can I do to make the system handle other queries better?
>
> Really kinda depends on what your transaction is doing.
Inserts, updates, deletes..
--
Je
pu-usage of all queries continues to rise. iowait numbers are
also very low.
What can I do to make the system handle other queries better?
PG: 8.2
--
Jesper
o me
that IOZone only has a win32 client. How did you actually run IOZone on
Linux?
$ apt-cache search iozone
iozone3 - Filesystem and Disk Benchmarking Tool
--
Jesper
art_backup/pg_stop_backup" as specified here:
http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html
=> Making a Base backup.
??
It worked when I tested it, but I may just have been darn lucky.
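(Roughly the sequence I used; the copy step is of course site-specific:

  SELECT pg_start_backup('nightly');
  -- copy the data directory (e.g. with rsync) while the cluster stays live
  SELECT pg_stop_backup();

plus continuous archiving of the WAL files via archive_command.)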
--
Jesper
Craig Ringer wrote:
Jesper Krogh wrote:
Hi.
I have this "message queue" table.. currently with 8m+ records.
Picking the top-priority messages seems to take quite long.. it is just
a matter of searching the index.. (just as explain analyze tells me it
does).
Can anyone dige
rabbed_until <=
1208442668) AND ("coalesce" = 'Efam'::text))
Total runtime: 245.330 ms
(5 rows)
--
Jesper
8,1056859,2386006,2386015,2386023,4265832,4231262,4265743,5302612,1121056,1121
090,1121074,688659,688650}'::integer[]))
-> Index Scan using ecn_ref_idx on number numbers
(cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795
rows=0 loops=389)
Index Cond: (numbers.reference_id = me.id)
Total runtime: 2287.701 ms
(10 rows)
.. subsequent run: 32.367 ms
On an X4600 server with 32GB of RAM and an EqualLogic iSCSI SAN attached.
Jesper
--
Jesper Krogh
veral GB is not a
problem here.
Jesper
Scott Marlowe wrote:
On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Scott Marlowe wrote:
> On Thu, Mar 13, 2008 at 3:09 PM, justin <[EMAIL PROTECTED]> wrote:
>
>> I chose to use ext3 on these partition
>
> You should really consi
you can speed up reloads by increasing your checkpoint segments to a big
> number like 256 and the checkpoint timeout to something like 10 minutes.
> All other normal tuning parameters should be what you plan
> to use for your normal operations, too.
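(In postgresql.conf terms the quoted advice would be something like this,
with the 8.x-era parameter names:

  checkpoint_segments = 256
  checkpoint_timeout = 10min
)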
Thanks.
Jesper
--
Jesper Krogh