On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2 * 2 = 4 sorts at up to 2GB each, i.e. 8GB
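The worst-case arithmetic above can be sketched in a psql session (the table and column names below are illustrative, not from the thread):

```sql
-- work_mem is a per-sort/per-hash limit, not a per-query or per-server cap:
-- 2 backends * 2 sorts each * work_mem = 4 * 2GB = 8GB worst case.
SHOW work_mem;                 -- current per-operation limit
SET work_mem = '64MB';         -- settable per session, no restart needed
-- EXPLAIN ANALYZE shows whether a sort fit in work_mem or spilled to disk;
-- look for "Sort Method: external merge  Disk: ..." in the output:
EXPLAIN ANALYZE SELECT * FROM some_table ORDER BY some_column;
```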
On 10/17/2011 04:48 AM, Micka wrote:
Hi,
I've a postgres 9.1 database used for generating map tiles.
The system has 24 GB RAM and 5 processors.
I'm using geoserver to generate the tiles.
My data used 8486 MB => psql -d gis -c "SELECT
pg_size_pretty(pg_database_size('gis'))"
I've carefully
On 11/16/2011 04:53 PM, Tory M Blue wrote:
Linux F12 64bit
Postgres 8.4.4
16 proc / 32GB
8 disk 15KRPM SAS/Raid 5 (I know!)
shared_buffers = 6000MB
#temp_buffers = 8MB
max_prepared_transactions = 0
work_mem = 250MB
On 1/24/2012 2:16 PM, Dave Crooke wrote:
Hi folks
This could be a sheer volume issue, but I thought I would ask the wisdom
of this forum as to next investigative steps.
We use PostgreSQL 8.4.4 which is bundled with our application as a
VMware virtual appliance. The bulk of the app's databa
On Tue, 2012-01-24 at 21:47 +0100, Pavel Stehule wrote:
Hello
2012/1/24 Tony Capobianco:
We are migrating our Oracle warehouse to Postgres 9.
This function responds well:
pg=# select public.getMemberAdminPrevious_sp2(247815829,
1,'test.em...@hotmail.com', 'email', 'test');
getmemberadminpr
On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
On 27.01.2012 15:34, Jayashankar K B wrote:
Hi,
We are having an embedded system with a freescale m68k architecture
based micro-controller, 256MB RAM running a customized version of
Slackware 12 linux.
It's relatively modest hardware.
Fascina
On 1/30/2012 3:27 AM, Saurabh wrote:
Hi all,
I am using a Postgresql database for our project and doing some
performance testing. We need to insert millions of records with indexed
columns. We have 5 columns in the table. When I created an index on the integer
column only, performance was good, but when I created an index
On 2/7/2012 4:18 AM, Ofer Israeli wrote:
Hi all,
We are currently “stuck” with a performance bottleneck in our server
using PG and we are thinking of two potential solutions which I would be
happy to hear your opinion about.
Our system has a couple of tables that hold client generated
informati
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Tuesday, February 07, 2012 4:47 PM
To: Ofer Israeli
Cc: pgsql-performance@postgresql.org; Olga Vingurt; Netta Kabala
Subject: Re: [PERFORM] Inserts or Updates
On 2/7/2012 4:18 AM, Ofer Israeli wrote:
Hi all,
We
Oh, I knew I'd seen index usage stats someplace.
give this a run:
select * from pg_stat_user_indexes where relname = 'SuperBigTable';
http://www.postgresql.org/docs/current/static/monitoring-stats.html
-Andy
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make
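A common follow-up to the pg_stat_user_indexes query above is to look for indexes that are never scanned at all; a sketch (the filter and ordering are a suggestion, not from the thread):

```sql
-- Indexes with idx_scan = 0 since the last stats reset are candidates
-- for removal (check replicas and rarely-run reports before dropping).
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```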
On 2/23/2012 6:34 AM, Thom Brown wrote:
On 22 February 2012 23:50, Alessandro Gagliardi wrote:
I have a database where I virtually never delete and almost never do
updates. (The updates might change in the future but for now it's okay to
assume they never happen.) As such, it seems like it migh
On 2/23/2012 12:05 PM, Shaun Thomas wrote:
On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:
I know there are perils in using ctid but with the LOCK it should be
safe. This transaction took perhaps 30 minutes and removed 100k rows
and once the table was VACUUM'd afterward it freed up close to 20
On 2/23/2012 12:38 PM, Alessandro Gagliardi wrote:
Does analyze increase the efficiency of inserts or just selects? (I
assumed the latter.) Obviously, I will need to analyze sometimes, but
That depends on if you have triggers that are doing selects. But in
general you are correct, analyze wo
On 2/23/2012 2:40 PM, Alessandro Gagliardi wrote:
checkpoint_segments can help insert speed, what do you have that set to?
40. Checking
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server it looks
like setting that as high as 256 would not necessarily be unreasonable.
What do you
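For reference, the setting under discussion lives in postgresql.conf; a hedged fragment for the 8.x/9.1-era servers in this thread (values illustrative, not a recommendation):

```
# More WAL segments between checkpoints: fewer, larger checkpoints
# during bulk inserts, at the cost of more disk for WAL and a longer
# crash recovery. (Replaced by max_wal_size in 9.5+.)
checkpoint_segments = 64
# Spread checkpoint I/O across the interval instead of bursting it:
checkpoint_completion_target = 0.9
```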
On 02/25/2012 06:16 PM, Stefan Keller wrote:
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are there any hints on how to tell Postgres to read in all table
contents into memory?
Yours, Stefan
How about after you load the data, vacuum freeze it, then do something like:
S
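One common way to pull a table's contents through the OS and buffer caches is simply to force a full read; a sketch, assuming illustrative names big_table and id (on 9.4+ the pg_prewarm extension does this more directly):

```sql
-- Sequential scan reads every heap page through the cache:
SELECT count(*) FROM big_table;
-- A predicate on an indexed column may additionally walk that index:
SELECT count(*) FROM big_table WHERE id > 0;
```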
On 02/26/2012 01:11 PM, Stefan Keller wrote:
2012/2/26 Andy Colson wrote:
On 02/25/2012 06:16 PM, Stefan Keller wrote:
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are there any hints on how to tell Postgres to read in all table
contents into memory?
Yours, Stefan
How
On 03/04/2012 03:58 AM, Rory Campbell-Lange wrote:
I'd be grateful for advice on specifying the new server
providing about 230GB of usable storage, 150GB of which is on an LV
providing reconfigurable space for the databases which are served off an
XFS formatted volume.
Do you mean LVM? I've
This is the best list ever! Thanks all! (especially that poetic Dave
Fetter, and that somewhat mean, but helpful, Andy Colson)
Shout outs to my friends Garlynn, Nick and Rush (best band ever!).
Party, my house, next week!
==
(Virtually) Brian Hamlin
GeoCal
OSGeo California Chapter
415-717
On 03/16/2012 05:30 PM, Kevin Grittner wrote:
Brian Hamlin wrote:
On Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:
Andy Colson wrote:
I tried shared_buffers at both 2400M and 18000M, and it took 4.5
hours both times. ... (weak attempts at humor omitted)
Ah, I didn't pick
On 3/21/2012 6:21 AM, Rafael Martinez wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello
We are having some performance problems with an application that uses
prepared statement heavily.
We have found out that it creates-executes-destroys a prepared statement
*per* statement it sends t
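For contrast with the create-execute-destroy pattern described above, reusing one prepared statement across executions looks like this at the SQL level (statement and table names illustrative):

```sql
-- Prepare once per session; the statement is parsed and analyzed once...
PREPARE get_user (int) AS
  SELECT * FROM users WHERE id = $1;
-- ...then execute it many times without re-parsing:
EXECUTE get_user(1);
EXECUTE get_user(2);
-- Deallocate only when the statement is no longer needed:
DEALLOCATE get_user;
```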
On 4/15/2012 7:43 AM, Eyal Wilde wrote:
hi,
thanks a lot to all of you for your help.
(i'm sorry i did not know how to reply to a certain message)
i found that the best number of active connections is indeed 8-10. with
8-10 active connections postgresql did ~170 "account-id"s. this is still
on
On 4/18/2012 2:32 AM, Eyal Wilde wrote:
hi all,
i ran vmstat during the test :
[yb@centos08 ~]$ vmstat 1 15
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 6118620 1605
On 4/30/2012 8:49 AM, Walker, James Les wrote:
I’m trying to benchmark Postgres vs. several other databases on my
workstation. My workstation is running 64 bit Windows 7. It has 12 gb of
RAM and a W3550 @ 3 Ghz. I installed Postgres 9.1 using the windows
installer. The data directory is on a 6Gb/
On 5/1/2012 8:06 AM, Merlin Moncure wrote:
On Tue, May 1, 2012 at 7:51 AM, Walker, James Les wrote:
Exactly, if turning off fsync gives me 100 commits/sec then I know where my
bottleneck is and I can attack it. Keep in mind though that I already turned
off synchronous commit -- *really* dange
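The distinction being drawn above can be made concrete: synchronous_commit is session-settable and a crash risks only the most recent commits, while fsync is server-wide and a crash can corrupt the cluster (a benchmarking-only knob):

```sql
-- Relatively safe: a crash may lose the last few commits,
-- but the database stays consistent.
SET synchronous_commit = off;
-- fsync = off can only be set in postgresql.conf, not per session,
-- and should never be used where the data matters.
```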
On 05/24/2012 12:26 AM, Rajesh Kumar. Mallah wrote:
- "Claudio Freire" wrote:
| From: "Claudio Freire"
| To: "Rajesh Kumar. Mallah"
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:43 AM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait an
On 6/20/2012 1:01 AM, Eyal Wilde wrote:
Hi, all.
this is an obligation from the past:
http://archives.postgresql.org/pgsql-performance/2012-05/msg00017.php
the same test, that did ~230 results, is now doing ~700 results. that
is, BTW even better than mssql.
the ultimate solution for that probl