Hi,
you could set effective_cache_size to a high value (roughly the free memory on
your server that the OS uses for disk caching).
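For example, the relevant postgresql.conf lines might look roughly like this; the numbers are only placeholders, not a recommendation for your box (8.2 and later accept units like GB, older releases want the value in 8 kB pages):

  # effective_cache_size is only a planner hint, it does not allocate anything;
  # set it to about shared_buffers plus what the OS keeps in its disk cache
  shared_buffers = 2GB
  effective_cache_size = 48GB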
Christiaan Willemsen wrote:
Hi there,
I configured OpenSolaris on our machine. Specs:
2x quad-core 2.6 GHz Xeon
64 GB of memory
16x 15k5 SAS
The filesystem is configured using …
Scott Marlowe wrote:
On Thu, Sep 4, 2008 at 1:39 PM, Ulrich <[EMAIL PROTECTED]> wrote:
I wouldn't set shared_buffers that high
just because things like vacuum and sorts need memory too
Okay, I understand that vacuum uses memory, but I thought sorts are done in
work_mem.
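Sorts are indeed bounded by work_mem, per sort operation and per backend; a minimal sketch of how one could check whether a given sort fits, with invented table and column names:

  SET work_mem = '16MB';   -- unit syntax needs 8.2+; older releases take plain kB
  EXPLAIN ANALYZE
  SELECT * FROM orders ORDER BY created_at DESC LIMIT 500;
  -- on 8.3+ the Sort node reports its Sort Method and whether it stayed in
  -- Memory or spilled to Disk, i.e. whether work_mem was large enough
  RESET work_mem;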
… of one query which will never
return more than 500 rows.
-Ulrich
94 seconds, 30.1 MB/s
That is really really slow (10 times slower than on my other machine).
What would you do now? Increasing shared_buffers to 100MB and setting
effective_cache_size to 0MB? Or increasing effective_cache_size, too?
Thanks for the help.
Regards,
-Ulrich
… I am only running a
very small server with 256 MB of RAM, and the webserver also likes to use
some RAM.
Does Postgres cache the hash table for later use, for example when the
user reloads the website?
Kind regards
Ulrich
Rusty Conover wrote:
This is what I've found with tables ranging in the millions of rows …
rows=12 loops=1)
If I set OFFSET to 3 and LIMIT to 10 it is:
Sort (cost=113.73..113.75 rows=8 width=5) (actual
time=0.321..0.328 rows=13 loops=1)
It looks as if this "rows" value is something like min(max_rows=13,
LIMIT+OFFSET), but I do not completely understand the syntax …
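For what it's worth, the actual row count reported for a node under a Limit is simply the number of rows the executor pulled from it before stopping, which is why it comes out as OFFSET + LIMIT. A small sketch (table and column names invented):

  EXPLAIN ANALYZE
  SELECT name FROM items ORDER BY name LIMIT 10 OFFSET 3;
  -- the Limit node returns 10 rows, but the Sort below it reports
  -- actual rows=13, because 3 + 10 rows were fetched from it before
  -- execution stopped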
… the same result, so I will use #1, and count(*) takes
just 0.478 ms if I use query #1.
Kind Regards,
Ulrich
Tom Lane wrote:
Ulrich <[EMAIL PROTECTED]> writes:
People say that [EXISTS is faster]
People who say that are not reliable authorities, at least as far as
Postgres is concerned.
… replace the "SELECT speed"
with "SELECT count(*)" and remove the LIMIT and OFFSET. Is this good? I
have read that count(*) is slow.
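If all the page really needs to know is whether at least one matching row exists, an EXISTS test can stop at the first hit, whereas count(*) has to read every qualifying row. A sketch with invented table and column names:

  -- existence check: stops at the first matching row
  SELECT EXISTS (SELECT 1 FROM speed_log WHERE user_id = 42);
  -- full count: visits all qualifying rows, which is why count(*) feels slow
  SELECT count(*) FROM speed_log WHERE user_id = 42;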
Kind regards
Ulrich
The database is now 12 GB, but searching with the web interface takes a
maximum of 5 seconds (most searches are faster). The one disadvantage is
the backup (I use pg_dump once a week, which needs about 10 hours). But
for now this is acceptable for me. But I want to look at Slony or port
everything to a Linux machine.
Ulrich
Hello all,
I had an idea for optimizing a query that may work generally.
In case a column is indexed, the following two alterations could be done,
I think:
A)
select ... where column ~ '^Foo' --> Seq Scan
into that:
select ... where column BETWEEN 'Foo' AND 'FooZ' --> Index Scan
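For what it's worth, the planner can already do this kind of rewrite by itself for left-anchored patterns, provided the index's operator class supports it (in a non-C locale that means an index built with text_pattern_ops). A sketch with placeholder names:

  CREATE INDEX items_name_pattern_idx ON items (name text_pattern_ops);
  -- with such an index (or a C-locale database) the anchored pattern can
  -- become an index range scan on its own:
  EXPLAIN SELECT * FROM items WHERE name ~ '^Foo';
  -- the manual rewrite; the usual upper bound is 'Fop' rather than 'FooZ',
  -- so values like 'Foozle' are not missed:
  EXPLAIN SELECT * FROM items WHERE name >= 'Foo' AND name < 'Fop';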
… which is of course faster to me.
/Ulrich
… program which executes advanced query interface
calls to the server.
How would that improve performance?
Ulrich
Are there better ways to do it? Is
there some literature you recommend reading?
TIA
Ulrich
… answer. But all other queries
with less data (at the same time) still have to be fast.
I cannot stop users from doing that kind of reporting. :(
I need speed improvements of orders of magnitude. Will more disks / more memory
do the trick?
Money is of course a limiting factor, but it doesn't have …
Please find postgresql.conf below.
Ulrich
#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------
# - Memory -
shared_buffers = 2 # min 16, at least max_connections*2, 8KB each
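For comparison, a purely illustrative version of that memory section; in releases of that vintage shared_buffers and effective_cache_size are counted in 8 kB pages and work_mem in kB, and none of these numbers is a recommendation:

  # - Memory -
  shared_buffers = 16384            # 8kB pages, i.e. about 128MB; min 16
  work_mem = 16384                  # kB, per sort/hash operation, per backend
  maintenance_work_mem = 131072     # kB, used by VACUUM and CREATE INDEX
  effective_cache_size = 262144     # 8kB pages, about 2GB of expected OS cache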
… some orders of
magnitude. I already thought of a box with the whole database on a RAM
disk. So really, any idea is welcome.
Ulrich
--
Ulrich Wisser / System Developer
RELEVANT TRAFFIC SWEDEN AB, Riddarg 17A, SE-114 57 Sthlm, Sweden
Direct (+46)86789755 || Cell (+46)704467893 || Fax (+46
… em.entrydate >= '2005-1-1 00:00'::date
and em.entrydate <= '2005-5-9 00:00'::date
and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' )
order by em.entrydate
That should give you all rows containing one of the words.
Does it work?
Is it fa…
Hi,
my inserts are done in one transaction, but due to some foreign key
constraints and five indexes sometimes the 100 inserts will take more
than 5 minutes.
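One way to see how much of that time goes into the foreign-key checks as opposed to index maintenance is to run a single representative insert under EXPLAIN ANALYZE (8.1 and later report trigger times); the names below are invented:

  BEGIN;
  EXPLAIN ANALYZE
  INSERT INTO measurements (sensor_id, reading) VALUES (42, 19.95);
  -- the output ends with lines like
  --   Trigger for constraint measurements_sensor_id_fkey: time=... calls=1
  -- which is the per-row cost of the referential-integrity lookup
  ROLLBACK;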
/Ulrich
Hi,
is there anything I can do to speed up inserts? One of my tables gets
about 100 new rows every five minutes. And somehow the inserts tend to
take more and more time.
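One common cause of inserts getting slower over time is bloat and stale statistics on the table and its indexes (and, if it has foreign keys, on the referenced tables); without autovacuum, a regular manual vacuum is worth trying. Table names below are invented:

  VACUUM ANALYZE measurements;   -- the table receiving the inserts
  VACUUM ANALYZE sensors;        -- tables referenced by its foreign keys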
Any suggestions welcome.
TIA
Ulrich
… to see why they are long running.
But is there a tool that could compile a summary out of the log? The log
grows awfully big after a short time.
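To keep the log itself manageable, the server can be told to log only statements that exceed a duration threshold instead of everything; the value is just an example:

  # postgresql.conf
  log_min_duration_statement = 1000   # log statements running 1000 ms or longer
                                      # (-1 disables, 0 logs every statement)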
Thanks
/Ulrich