> Claus Guttesen wrote:
>
> http://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance
>
> Not being particularly passionate about any OS, I've been intrigued by
> the FreeBSD benchmarks. However, management is reluctant to use boxes
> which don
> However, I have certainly seen some inefficiencies with Linux and heavy use
> of shared memory -- and I wouldn't be surprised if these problems don't
> exist on FreeBSD or OpenSolaris.
This came up on the freebsd-performance list a few days ago:
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=13001+0+current/freebsd-performance
> Hi everyone,
> What is the best Linux flavor for a server which runs postgres alone?
> Postgres must handle a large number of databases, around 200+.
> Speed is the vital factor.
> Is it FreeBSD, CentOS, Fedora, Redhat xxx??
As others mention, FreeBSD is somewhat different fr
> I have a big performance problem in my SQL select query:
>
>
> select * from event where user_id in
> (500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,4
> I have a transaction that has been running against the database for around
> 20 hours and still isn't done. But during the last hours it has come to the
> point where it really hurts the performance of "other queries".
>
> Judging from pg_stat_activity output there seem to be no locks interfering, but
> the overall cpu-usage o
>> > max_fsm_pages = 280
>> > max_fsm_relations = 16
>
>> What do the last couple of lines from a 'vacuum analyze verbose'
>> say? I have max_fsm_pages = 400 and max_fsm_relations = 1500.
>
>> You can also try lowering random_page_cost to a value like 1.2,
>> but I doubt this wil
> The execution time has not improved. I am going to increase
> shared_buffers now, keeping work_mem the same.
Have you performed a vacuum analyze?
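It is a single SQL statement; VERBOSE is optional but prints the
free-space-map summary mentioned earlier in the thread:

VACUUM VERBOSE ANALYZE;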
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
What is random_page_cost set to? You could try lowering it to 1.5 if it is set higher.
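A quick way to experiment (SET is session-only, so nothing changes for other
connections; 1.5 is a starting point, not a tuned value):

SHOW random_page_cost;        -- display the current setting
SET random_page_cost = 1.5;   -- affects only this session
-- to make it permanent, put random_page_cost = 1.5 in postgresql.conf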
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
>> > Query:
>> >
>> > select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,
>> > jena_g1t1_stmt A2 Where
>> > A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND
>> >
>> > A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'
>> > AND A0.GraphI
> I am doing a performance comparison between running Jena with MySQL and
> Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I
> have run several queries to both MySQL and Postgres and all of them took a
> similar amount of time to execute, except one. For the following query t
> I am running postgresql 8.3.5 on FreeBSD with Dual core Intel(R)
> Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and Seagate Technology -
> Barracuda 7200.10 SATA 3.0Gb/s (RAID 1).
>
> I made several benchmark tests with pgbench; the TPS rate is about 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
>
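A note for readers: "pgbench -i" only initializes the test tables; the TPS
figure comes from a separate run. A sketch (the client and transaction
counts are arbitrary choices, not from the original post):

$ pgbench -i -s 50 -U pgsql pgbench        # initialize, scale factor 50
$ pgbench -c 10 -t 1000 -U pgsql pgbench   # 10 clients, 1000 transactions each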
> Should I use gjournal on FreeBSD 7? Or just soft updates?
>
> Here is my opinion: I suspect that gjournal would be much slower than
> soft updates. Also gjournal is relatively new code, not very well
> tested. But gjournal is better when the system crashes. Although I have
> heard that sometimes
> Considering a quad core server processor, 4 GB of RAM and SATA 2 disks.
>
> What is the recommended setting for the parameters:
>
> max_connections: 70
Depends on how many clients access the database.
> shared_buffers?
I have mine at 512 MB but I will lower it and see how it affects
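A rough starting sketch for a dedicated 4 GB box of that era (assumptions to
experiment from, not recommendations from this thread; 8.2+ accepts memory
units):

# postgresql.conf
max_connections = 70          # match the real client count
shared_buffers = 512MB        # often 1/8 to 1/4 of RAM on 8.x
effective_cache_size = 3GB    # rough guess at the OS cache available
random_page_cost = 2          # lower if the data is mostly cached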
> I have recently taken over the maintenance of a server farm: 2 webservers,
> one db server. They are quite powerful, 2-processor Xeon w/ 6 GB of RAM.
>
> A couple of days ago we had a serious performance hit and the db server (pg
> v7.4) was overloaded w/ something in a way that the operating system w
> After setting log_statement='all' in postgresql.conf
> and rebooting the OS [FreeBSD or CentOS],
> I can't find where the log file created by log_statement='all' is located...
> FYI, postgresql.conf is located at /var/lib/pgsql/data/postgresql.conf
>
> many thanks..
I added the following to FreeBSD:
/e
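One common setup on FreeBSD routes the server log through syslog (a sketch;
the file paths and the LOCAL0 facility are assumptions):

# postgresql.conf
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
log_statement = 'all'

# /etc/syslog.conf
local0.*    /var/log/postgresql.log

$ touch /var/log/postgresql.log
$ /etc/rc.d/syslogd restart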
>> If you have a good RAID controller with BBU cache, then there's no point
>> splitting the discs into two sets. You're only creating an opportunity to
>> under-utilise the system. I'd get ten identical discs and put them in a
>> single array, probably RAID 10.
>
> OK, thats good to know. Really w
> We have a database with lots of small simultaneous writes and reads
> (millions every day) and are looking at buying a good hardware for this.
>
> What are your suggestions? Here is what we are currently looking at:
>
> Dual Quad Core Intel
> 8 - 12 GB RAM
>
> 10 disks total.
>
> 4 x 146 GB SAS disk i
> I have a table with 9,961,914 rows in it (see the describe of
> bigtab_stats_fact_tmp14 below)
>
> I also have a table with 7,785 rows in it (see the describe of xsegment_dim
> below)
>
> I'm running the join shown below and it takes > 10 hours and eventually runs
> out of disk space on a 1.4TB f
> We want to migrate from postgres 8.1.3 to postgres 8.3.1.
> Can anybody list the installation steps to be followed for migration?
> Do we need to take care of anything special?
Perform a pg_dump, do a restore and validate your SQL queries on a test server.
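A minimal sketch of that path (the database name, user and hosts are
placeholders; running the newer version's pg_dump against the old server is
generally preferred):

$ pg_dump -U postgres -h oldhost mydb > mydb.sql
$ createdb -U postgres -h newhost mydb
$ psql -U postgres -h newhost -d mydb -f mydb.sql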
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
> Can anyone who has started using 8.3.1 list the pros and cons?
I upgraded to 8.3.1 yesterday from 8.3.0. I've used 8.3.0 since it was
released and it's working fine. I upgraded from 7.4 (dump/restore) and
it was working out of the box. We have somewhat simple SQL queries so
there was no nee
> 1) hardware
> 2) rewriting my queries and table structures
> 3) using more predefined queries
> 4) tweak parameters in the db conf files
>
> Of these points:
> 1) there is nothing I can do about right now, but perhaps in the future.
> 2) will be quite hard right now since there is more code than
> I need to install a new server for postgresql 8.3. It will run two
> databases, web server and some background programs. We already have a
> server but it is becoming slow and we would like to have something that
> is faster. It is a cost sensitive application, and I would like to get
> your
> > Without knowing what a "lakhs" record is,
>
> I had the same question... and Wikipedia gave me the answer: it is an
> Indian word meaning 10^5, often used in Indian English.
Thank you (both OP and this post) for enlightening us with this word.
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
> > > why it doesn't use the index for the primary keys in the join conditions?
> >
> > Maybe random_page_cost is set too high? What version are you using?
>
> Postgresql v. 8.2.1
You can try to lower this value. The default (in 8.3) is 4.
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
> The following query takes about 4s to run on a 16GB RAM server. Any ideas
> why it doesn't use the index for the primary keys in the join conditions?
Maybe random_page_cost is set too high? What version are you using?
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
> There are some results here that show PostgreSQL is slower in some cases
> than Monet and MySQL. Of course these results were published immediately
> prior to 8.2 being released, plus run out-of-the-box, so without even
> basic performance tuning.
>
> Would anybody like to repeat these tests with
> I missed the initial post in this thread, but I haven't seen any 15K rpm
> 2.5" drives, so if you compare 10K rpm 2.5" drives with 15K rpm 3.5"
> drives you will see differences (depending on your workload and controller
> cache)
I have some 15K rpm 2.5" SAS drives from HP. Other vendors have th
> Which scheduler is recommended for a box that is dedicated to running
> postgres?
>
> I've asked google and found no answers.
Do you mean the scheduler of the OS itself?
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
> Thanks for the information Claus. Why would reducing the effective cache
> size help the processor usage? It seems that there are plenty of resources
> on the box although I can see that 10MB of sort space could mount up if we
> had 500 connections but at the moment we do not have anything like
> > Does anyone have any ideas what my bottle neck might be and what I can do
> > about it?
>
> Your bottleneck is that you are using a very old version of PostgreSQL. Try
> 8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better in this
> kind of situation.
You won't know until you've
> I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running
> postgres 7.4.3. This has been recompiled on the server for 64 stored
> procedure parameters (I assume this makes postgres 64-bit, but I am not
> sure). When the server comes under load from database connections
> executing
> Apart from the disks, you might also investigate using Opterons instead
> of Xeons. There appears to be a significant difference in performance
> between Opteron and Xeon: Xeons appear to spend more time passing
> around ownership of memory cache lines when a spinlock is contended.
> It's not yet clear
> All of our existing servers are from Dell, but I want to look at some
> other options as well. We are currently looking at rack boxes with 8
> internal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 in
> raid 10 for the base.
>
> Here are our current alternatives:
>
> 1) Dell 2900 (5U
Hi.
We are using an HP DL380 G5 with four 10K rpm SAS disks. The
controller is a built-in ciss-controller with 256 MB battery-backed
cache. It is partitioned as RAID 1+0.
Our queries are mainly selects.
I will get four 72 GB SAS disks at 15K rpm. Reading the archives
suggests RAID 1+0 for optimal
> > Get yourself the ability to benchmark your application. This is
> > invaluable^W a requirement for any kind of performance tuning.
> >
> I'm pretty happy with the performance of the database at this stage.
> Correct me if I'm wrong, but AFAIK a load of 3.5 on a quad is not
> overloading it. It
> I've recently run into problems with my kernel complaining that I ran
> out of memory, thus killing off postgres and bringing my app to a
> grinding halt.
>
> I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux.
> Naturally, I have to set my shmmax to 2GB because the kernel can't
>
iostat -c says about 8% of the time is spent waiting for IO. I'm afraid this
is due to locks between concurrent queries; is there any way to get more
info about this?
I do believe that if you told us what OS you're running, what pg version
you're running, and what type of SQL statements you perform, the list can
At our institute we are currently establishing a small GIS working group.
The data storage for vector data should be the central PostGIS system.
Raster data will be held in the file system.
Mostly the users access the database in read-only mode. From the
client side there is not much write acc
> In #postgresql on freenode, somebody mentioned that ZFS from Solaris
> helps pgsql performance a lot; does anyone have information
> about that?
The filesystem you use will affect the performance of postgres
significantly. I've heard a lot of claims for ZFS; unfortunately many
I am about to order a new server for my Postgres cluster. I will
probably get a Dual Xeon Quad Core instead of my current Dual Xeon.
Which OS would you recommend to optimize Postgres behaviour (i/o
access, multithreading, etc) ?
I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone
Hi all,
take a look at those plans:
Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
random_page_cost = 2
The default in postgresql is somewhat conservative. This setting
tells postgresql how fast your disks are: the lower the value, the
faster they are assumed to be.
Could this setting be cha
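A quick way to check whether lowering the setting actually changes the plan
(the table and values are placeholders; SET affects only the current
session):

SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE id = 42;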
> I'm curious, what problem does the disclaimer cause?
>
> I wrote the following TOS for my personal system:
> https://www.potentialtech.com/cms/node/9
> Excerpt of the relevant part:
> I have no idea if that's legally binding or not, but I've talked to a few
> associates who have some experience
I recall a reference on the list indicating that newer Xeon processors
don't suffer from the context switching problem reported last year.
In searching the archives, I can't find any specific info identifying
which Xeon processors don't have this problem.
Anyone point me to a reference?
We re
Approx. 200 requests a sec. should be a problem unless the queries are heavy.
Thanks Claus, that's good news!
I'm having a reputable vendor build the box and test it for me before
delivering. About the bottom line of your message: did you mean 'should not
be a problem'? I wonder what the main reason for
I am about to pull the trigger on a new machine after analyzing some
discussions I posted here last year. I've been trying to spec out a reliable
and powerful enough machine that I won't have to replace for some time.
Currently I've been using a dual Xeon 3.06ghz with 4GB of ram and utilizin
I have a simple query which takes 32ms on 7.4.14 and 1015ms on 8.2.0.
I guess 7.4.14 creates a better execution plan than 8.2.0 for this query but
I don't know how to get 8.2.0 to select a better one.
EXPLAIN ANALYZE output will be found near the end of the e-mail.
EXPLAIN ANALYZE was run several time
We're planning a new server or two for PostgreSQL and I'm wondering whether
Intel Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now.
When I look through hardware sites Core 2 wins. But I believe those tests
mostly are being done in 32 bits. Does the picture change in 64 bits?
We
I am looking at setting up two general-purpose database servers,
replicated with Slony. Each server I'm looking at has the following
specs:
Dell PowerEdge 2950
- 2 x Dual Core Intel(r) Xeon(r) 5130, 4MB Cache, 2.00GHz, 1333MHZ FSB
- 4GB RAM
- PERC 5/i, x6 Backplane, Integrated Controller Card (25
As I understand it, there is a lot of tuning, using both postgresql.conf and
query analysis, to make the values of postgresql.conf fit my needs, system
and hardware. This is where I need some help. I have looked into
postgresql.conf and seen the tunings. But I'm still not sure what I should
put into
> 4. Are there any other settings in the conf file I could try to tweak?
One more thing :-)
I stumbled over this setting; it made the db (PG 7.4.9) use the
index rather than doing a sequential scan, and it reduced a query
from several minutes to some 20 seconds.
random_page_cost = 2 (or
> Here is my current configuration:
>
> Dual Xeon 3.06Ghz 4GB RAM
> Adaptec 2200S 48MB cache & 4 disks configured in RAID5
> FreeBSD 4.11 w/kernel options:
> options SHMMAXPGS=65536
> options SEMMNI=256
> options SEMMNS=512
> options SEMUME=256
> options SEMM
Hi.
Has anybody tried the new Sun "CoolThreads" servers T1000/T2000?
I'd love to see benchmarks with Solaris 10 and pg 8.1.
regards
Claus
How should I set this configuration? Does it depend on the memory? And is it
necessary to perform a benchmarking test?
I've set it to shared_buffers = 12288 with 8 GB RAM on postgresql 7.4.9,
FreeBSD 6.0. There is no exact size; it depends on the type of workload,
server OS etc. A
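For reference, shared_buffers here is counted in 8 KB pages, so 12288 x 8 KB
= 96 MB of shared buffers on that 8 GB machine.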
> Typical query
>
>
> SELECT n.name
> FROM node n
> WHERE n.name LIKE '56x%'
>   AND n.type = 'H'
>   AND n.usage = 'TEST'
>   AND n.node_id NOT IN
>     (SELECT n.node_id
>      FROM job_log j
>      INNER JOIN node n ON j.node_id = n.node_id
>      WHERE n.name LIKE '56x%'
>        AND n.type = 'H'
>        AND n.usage=
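An aside: this NOT IN pattern often plans badly on older versions. A common
rewrite, assuming node_id is never NULL and the inner filters merely repeat
the outer ones (a sketch, not from the thread):

SELECT n.name
FROM node n
WHERE n.name LIKE '56x%'
  AND n.type = 'H'
  AND n.usage = 'TEST'
  AND NOT EXISTS
    (SELECT 1 FROM job_log j WHERE j.node_id = n.node_id);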
> I forgot to give our non default postgresql.conf parameters:
> shared_buffers = 28800
> sort_mem = 32768
> vacuum_mem = 32768
> max_fsm_pages = 35
> max_fsm_relations = 2000
> checkpoint_segments = 16
> effective_cache_size = 27
> random_page_cost = 2
Isn't sort_mem quite high? Remember
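For scale: sort_mem is in KB and applies per sort operation, so 32768 KB =
32 MB for every concurrent sort; fifty simultaneous sorts could claim around
1.6 GB.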
> at 5TB data, I'd vote that the application is disk I/O bound, and the
> difference in CPU speed at the level of dual opteron vs. dual-core
> opteron is not gonna be noticed.
>
> to maximize disk, try getting a dedicated high-end disk system like
> nStor or NetApp file servers hooked up to fiber c
> Does anyone have recommendations for hardware and/or OS to work with around
> 5TB datasets?
Hardware-wise I'd say dual-core Opterons. One dual-core Opteron
performs better than two single-cores at the same speed. Tyan makes
some boards that have four sockets, thereby giving you 8 CPUs (if you
ne
> We are running some performance tests in which we are attempting to
> insert about 100,000,000 rows in a database at a sustained rate. About
> 50M rows in, our performance drops dramatically.
>
> This test is with data that we believe to be close to what we will
> encounter in production. Howev
> > > Apparently this formula is no longer relevant on FreeBSD systems as
> > > it can cache up to almost all the available RAM. With 4GB of RAM, one
> > > could specify most of the RAM as being available for caching, assuming
> > > that nothing but PostgreSQL runs on the server -- certainly 1/
> > I have a postgresql 7.4.8-server with 4 GB ram.
> > #effective_cache_size = 1000 # typically 8KB each
> >
> > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> > changed it to:
> >
> > effective_cache_size = 27462 # typically 8KB each
>
> Apparently this formula is
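To make the FreeBSD arithmetic explicit (the byte figure below is
back-computed from 27462 and shown only for illustration):

$ sysctl -n vfs.hibufspace     # e.g. 224968704 bytes
$ echo $((224968704 / 8192))   # 224968704 / 8192 = 27462 pages of 8 KB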
I have a postgresql 7.4.8-server with 4 GB ram.
> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000 # min 100, ~50 bytes each
If you do a vacuum verbose (when it's convenient) the last couple of
lines will tell you something like this:
INF
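On 7.4/8.x those closing lines take roughly this form (the numbers below are
invented for illustration):

INFO:  free space map: 52 relations, 4722 pages stored; 5024 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.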
> I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source.
> Over time, I see the memory usage of the box go way way up (it's got
> 8GBs in it and by the end of the day, it'll be all used up) with what
> looks like cached inodes relating to the extreme IO generated by
>
> I was wond
> effective_cache_size = 1000000 # typically 8KB each
I have this setting on postgresql 7.4.8 on FreeBSD with 4 GB RAM:
effective_cache_size = 27462
So even though your machine runs Debian and you have four times as much
RAM as mine, your effective_cache_size is 36 times larger. You could
try low