On Fri, Dec 30, 2016 at 12:06 PM, ajmcello wrote:
> Reducing work_mem shaved about 12 minutes off the query time. Thanks
> for the suggestion. I lowered it to 10MB instead of 100MB.
>
> [SNIP]
>
> >>> [postgresql.conf]
> >>> max_connections = 10
> >>> max_files_per_process = 100
> >>> s
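
For reference, work_mem is allocated per sort or hash operation in each backend, so a large value multiplied across operations can push a box into swap. A minimal sketch for trying a lower value on a single session before touching postgresql.conf (database name and query are placeholders, not from this thread):

psql -d mydb <<'SQL'
SET work_mem = '10MB';                                      -- session-level override only
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;  -- re-run the slow query here
SHOW work_mem;                                              -- confirm the active setting
SQL
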
> From: ajmcello [mailto:ajmcell...@gmail.com]
> Sent: Friday, 30 December 2016 07:05
> To: Charles Clavadetscher
> Subject: Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]
>
> There are no connections except one CLI session when running the query. After that
> finishes, I get con
Hello
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ajmcello
> Sent: Friday, 30 December 2016 05:54
> To: POSTGRES
> Subject: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]
>
>
> I am trying
On Fri, Dec 10, 2010 at 01:55:14AM -0700, Scott Marlowe wrote:
>
> OK, so the way I do this is to locate my data directory. On a stock
> Ubuntu machine that would be /var/lib/postgresql/8.4/main. In that
> dir is a directory called pg_xlog, which is what we're looking for. So, as
> root, I'd do:
>
>
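
A sketch of the commonly used procedure for this (paths matching the surrounding posts, with the server stopped first): move pg_xlog to the other disk and leave a symlink behind.

/etc/init.d/postgresql-8.4 stop
mv /var/lib/postgresql/8.4/main/pg_xlog /opt/pg_xlog          # /opt assumed to be the second disk
chown -R postgres:postgres /opt/pg_xlog
ln -s /opt/pg_xlog /var/lib/postgresql/8.4/main/pg_xlog       # symlink in place of the old directory
/etc/init.d/postgresql-8.4 start
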
On Sun, Dec 12, 2010 at 9:57 PM, Adarsh Sharma wrote:
>
> Sorry Sir, but I simply followed your steps. I think those are sufficient.
> But my server didn't start after these changes. Here are my steps:
>
> cd /hrd2-p/postgres_data
> /etc/init.d/postgresql-8.4 stop
> mkdir -p /opt/pg_xlog
> chown
Scott Marlowe wrote:
Please keep the list cc'd as there are others who might be able to
help or could use this thread for help.
On Fri, Dec 10, 2010 at 2:53 AM, Adarsh Sharma wrote:
Scott Marlowe wrote:
On Fri, Dec 10, 2010 at 12:53 AM, Adarsh Sharma
wrote:
Dear all,
I am r
Please keep the list cc'd as there are others who might be able to
help or could use this thread for help.
On Fri, Dec 10, 2010 at 2:53 AM, Adarsh Sharma wrote:
> Scott Marlowe wrote:
>>
>> On Fri, Dec 10, 2010 at 12:53 AM, Adarsh Sharma
>> wrote:
>>
>>>
>>> Dear all,
>>>
> >>> I have researched a l
On Fri, Dec 10, 2010 at 12:53 AM, Adarsh Sharma
wrote:
> Dear all,
>
> I have researched a lot about performance tuning in PostgreSQL.
>
> I found that we have to change the shared_buffers parameter and the
> effective_cache_size parameter.
> I changed shared_buffers to 2 GB but I am not able to locate
> effective_
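
Both settings live in postgresql.conf: shared_buffers is memory PostgreSQL actually allocates (restart required), while effective_cache_size is only a planner hint about how much the OS cache can offer and allocates nothing. A sketch for checking the values in effect (database name is a placeholder):

grep -E '^(shared_buffers|effective_cache_size)' $PGDATA/postgresql.conf   # assumes $PGDATA is the data directory
psql -d mydb -c "SHOW shared_buffers;"
psql -d mydb -c "SHOW effective_cache_size;"
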
On Thu, Feb 4, 2010 at 11:41 AM, Gauthier, Dave wrote:
> analyze? Does the empirical data the optimizer uses to develop good queries get
> updated with/after a restore?
Nope.
Gauthier, Dave wrote:
> analyze? Does the empirical data the optimizer uses to develop good queries get
> updated with/after a restore?
Not automatically; you have to invoke it manually. (In recent releases,
autovacuum would do it, but 7.1 didn't have it).
--
Alvaro Herrera
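
The manual step being described is just a statistics pass once the restore finishes; a sketch, with database and dump names as placeholders, using VACUUM ANALYZE since it works on old releases as well:

pg_restore -d mydb mydb.dump             # or: psql -d mydb -f mydb.sql for a plain-text dump
psql -d mydb -c "VACUUM ANALYZE;"        # rebuild the planner statistics the restore does not carry over
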
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance Tuning - Any easy things that I can do ?
On Thu, Feb 4, 2010 at 8:54 AM, Wang, Mary Y wrote:
> Hi,
>
> I restored my database. However, I noticed performance is poor as compared
> to before.
> Are
On 02/04/2010 10:32 AM, Scott Marlowe wrote:
> On Thu, Feb 4, 2010 at 8:54 AM, Wang, Mary Y wrote:
>> Hi,
>>
>> I restored my database. However, I noticed performance is poor as compared
>> to before.
>> Are there some easy things that I can do to improve the performance (besides
>> rewriting t
On Thu, Feb 4, 2010 at 8:54 AM, Wang, Mary Y wrote:
> Hi,
>
> I restored my database. However, I noticed performance is poor as compared
> to before.
> Are there some easy things that I can do to improve the performance (besides
> rewriting the code)?
>
> My pgversion is 7.1.3 (I know, I know t
On 4 February 2010 15:54, Wang, Mary Y wrote:
> Hi,
>
> I restored my database. However, I noticed performance is poor as compared
> to before.
> Are there some easy things that I can do to improve the performance (besides
> rewriting the code)?
>
> My pgversion is 7.1.3 (I know, I know that I
On May 23, 2009, at 3:55 AM, Murray Richardson wrote:
Hello postgres community,
I am running postgresql 8.3 on Vista 64 and trying to do some
performance tuning to make better use of my system resources.
Anytime I make any changes to the postgresql.conf file, I cannot
connect to the serv
On Sat, 2007-05-19 at 19:19 -0400, Tom Lane wrote:
> You're comparing fields of distinct types, which not only incurs
> run-time type conversions but can interfere with the ability to
> use some plan types at all. Looking at the table definitions,
> you've got primary keys declared as SERIAL (ie,
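
A small, made-up illustration of the kind of mismatch being described: the key is SERIAL (integer) on one side but stored as a string on the other, so every comparison needs a cast and an index on the string column cannot be used for the join.

psql -d mydb <<'SQL'
CREATE TEMP TABLE parent (id SERIAL PRIMARY KEY, name text);
CREATE TEMP TABLE child  (parent_id varchar(10), payload text);       -- key stored as a string
EXPLAIN SELECT *
  FROM parent p JOIN child c ON p.id = c.parent_id::int;              -- cast forced on every row
SQL
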
I wrote:
> Another thing that might be worth fixing is the rather silly use of '%%%'
> rather than '%' for a no-op LIKE pattern. It looks like the planner's
> LIKE-estimator gets fooled by that and doesn't realize it's a
> match-everything pattern.
Uh, scratch that advice, I fat-fingered my test.
Robert Fitzpatrick <[EMAIL PROTECTED]> writes:
> On Sat, 2007-05-19 at 17:05 -0400, Tom Lane wrote:
>> Show us the table definitions and the EXPLAIN ANALYZE output, please.
There seem to be a couple of problems visible in the EXPLAIN output:
> -> Nested Loop (cost=53060.03..53565.7
On Sat, 2007-05-19 at 17:05 -0400, Tom Lane wrote:
> Robert Fitzpatrick <[EMAIL PROTECTED]> writes:
> > I am running the following query on a linux server with comparable
> > processor and memory as the windows server.
>
> Show us the table definitions and the EXPLAIN ANALYZE output, please.
>
T
Robert Fitzpatrick <[EMAIL PROTECTED]> writes:
> I am running the following query on a linux server with comparable
> processor and memory as the windows server.
Show us the table definitions and the EXPLAIN ANALYZE output, please.
regards, tom lane
--
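
What is being asked for can be gathered like this (database, table, and query are placeholders; substitute the real slow query):

psql -d mydb -c '\d mytable'                                       # table definition, indexes, constraints
psql -d mydb -c 'EXPLAIN ANALYZE SELECT * FROM mytable LIMIT 10;'  # plan plus real row counts and timings
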
The world rejoiced as [EMAIL PROTECTED] ("Darryl W. DeLao Jr.") wrote:
> Running ver 7.3.10 in RHEL 3.0 ES. If I change shared buffers, don't I have
> to change max connections as well?
If you have enough connections, then that seems unnecessary.
The *opposite* would be true; if you change max
On Fri, 2006-02-17 at 14:01, Darryl W. DeLao Jr. wrote:
> Running ver 7.3.10 in RHEL 3.0 ES. If I change shared buffers, don't I
> have to change max connections as well?
You should update to a newer version of PostgreSQL if performance is
important to you. 7.3 was ok, but 8.1 is light years ah
Running ver 7.3.10 in RHEL 3.0 ES. If I change shared buffers, don't I have to change max connections as well?
On 2/17/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Darryl W. DeLao Jr." <[EMAIL PROTECTED]> writes:
> I have max_connections set to 512, with shared buffers set to 1024. If I
> set this an
"Darryl W. DeLao Jr." <[EMAIL PROTECTED]> writes:
> I have max_connections set to 512, with shared buffers set to 1024. If I
> set this any higher, postgres will not start. But, it seems that this
> setting is not enough. Though the server runs fine, certain queries for
> reports are taking anyw
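
On Linux of that era, the usual reason postgres refuses to start after raising shared buffers is the kernel SHMMAX limit on a single shared memory segment. A sketch of how it is commonly checked and raised (the 128MB figure is only an example):

cat /proc/sys/kernel/shmmax                            # current limit in bytes
sysctl -w kernel.shmmax=134217728                      # raise to 128MB for the running kernel
echo 'kernel.shmmax = 134217728' >> /etc/sysctl.conf   # make it persistent across reboots
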
On Tue, Feb 08, 2005 at 05:15:55AM -0800, sid tow wrote:
> Hi,
>
> I have to know why COPY commands work faster than
> INSERT commands. The thing is that I have a lot of
> constraints and triggers. I know INSERT will check all the
> triggers and constraints,
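
A rough sketch of the two approaches being compared (table and file are placeholders): COPY streams all rows through one command and one transaction, while row-by-row INSERTs pay per-statement parse, plan, and (unless wrapped in a transaction) per-row commit, on top of the same trigger and constraint checks.

# one INSERT per row, one implicit transaction each unless wrapped in BEGIN/COMMIT
psql -d mydb -c "INSERT INTO items (id, name) VALUES (1, 'a');"

# bulk load of the same data from a file in a single command
psql -d mydb -c "\copy items (id, name) FROM '/tmp/items.csv' WITH CSV"
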
Does postgres actually do multiple concurrent sorts within a single
backend?
Certainly. Consider for example a merge join with each input being
sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and
related operators require their own sort steps in the current
implementation. It'
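
A made-up query whose plan can contain several independent sort steps at once, each entitled to its own sort_mem:

psql -d mydb <<'SQL'
CREATE TEMP TABLE big_a (id int); CREATE TEMP TABLE big_b (id int); CREATE TEMP TABLE big_c (id int);
EXPLAIN
SELECT a.id FROM big_a a JOIN big_b b ON a.id = b.id   -- a merge join may sort both inputs
UNION                                                  -- UNION adds its own sort/unique step
SELECT c.id FROM big_c c
ORDER BY 1;                                            -- the outer ORDER BY is yet another sort
SQL
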
Tom Lane <[EMAIL PROTECTED]> writes:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
> > Does postgres actually do multiple concurrent sorts within a single
> > backend?
>
> Certainly. Consider for example a merge join with each input being
> sorted by an explicit sort step. DISTINCT, ORDER BY, U
Tom Lane wrote:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
>> Does postgres actually do multiple concurrent sorts within a single
>> backend?
>
> Certainly. Consider for example a merge join with each input being
> sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and
> related opera
But isn't the problem the planner screwing up, rather than the sort_mem setting?
There was my case where the 7.4 planner estimated 1500 distinct rows when
there were actually 1391110. On 7.3.4 it used about 4.4MB, whereas 7.4
definitely used more than 400MB for the same query - I had to kill
post
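
A sketch of how that kind of distinct-value misestimate is usually inspected and nudged (table and column names are placeholders); raising the per-column statistics target gives ANALYZE a larger sample to estimate n_distinct from:

psql -d mydb <<'SQL'
SELECT n_distinct FROM pg_stats
 WHERE tablename = 'mytable' AND attname = 'mycol';          -- what ANALYZE last estimated
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1000;  -- bigger sample for this column
ANALYZE mytable;                                             -- refresh the estimate
SQL
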
On Tue, 07 Dec 2004 07:50:44 -0500, P.J. Josh Rovero
<[EMAIL PROTECTED]> wrote:
> There are many reports of kernel problems with memory allocation
> (too aggressive) and swap issues with RHEL 3.0 on both RAID
> and non-RAID systems. I hope folks have worked through all
> those issues before blaming
There are many reports of kernel problems with memory allocation
(too aggressive) and swap issues with RHEL 3.0 on both RAID
and non-RAID systems. I hope folks have worked through all
those issues before blaming postgresql.
Tom Lane wrote:
If I thought that a 200% error in memory usage were cause f
Neil Conway <[EMAIL PROTECTED]> writes:
> As a quick hack, what about throwing away the constructed hash table and
> switching to hashing for sorting if we exceed sort_mem by a significant
> factor? (say, 200%) We might also want to print a warning message to the
> logs.
If I thought that a 200% e
On Mon, 2004-12-06 at 23:55 -0500, Tom Lane wrote:
> Bear in mind that the price of honoring sort_mem carefully is
> considerably far from zero.
I'll do some thinking about disk-based spilling for hashed aggregation
for 8.1
> The issue with the hash code is that it sets size parameters on the
> b
Paul Tillotson <[EMAIL PROTECTED]> writes:
> Does postgres actually do multiple concurrent sorts within a single
> backend?
Certainly. Consider for example a merge join with each input being
sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and related
operators require their own sort
Neil Conway <[EMAIL PROTECTED]> writes:
> On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
>> AFAIK this is indeed the case with hashed aggregation, which uses the
>> sort_mem (work_mem) parameter to control its operation, but for which it
>> is not a hard limit.
> Hmmm -- I knew we didn't
Alvaro Herrera wrote:
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
I seem to remember hearing that the memory limit on certain operations,
such as sorts, is not "enforced" (may the hackers correct me if I am
wrong);
On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
> AFAIK this is indeed the case with hashed aggregation, which uses the
> sort_mem (work_mem) parameter to control its operation, but for which it
> is not a hard limit.
Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
> On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
> > I seem to remember hearing that the memory limit on certain operations,
> > such as sorts, is not "enforced" (may the hackers correct me if I am
> > wrong); rather, the plan
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
> I seem to remember hearing that the memory limit on certain operations,
> such as sorts, is not "enforced" (may the hackers correct me if I am
> wrong); rather, the planner estimates how much a sort might take by
> looking at the statist
... under the periods of heavy swapping, one or more of the postgres
processes would be way up there (between 500MB and 1000MB, which would
easily explain the swapping) ... the question is: why aren't all of the
processes sharing the same pool of shared memory, since I thought that's what
I'm doin
David Esposito
> Cc: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3
>
> On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> > shared_buffers = 131072 (roughly 1GB)
> > max_fsm_relations = 1
> >
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> According to Bruce Momjian's performance tuning guide, he recommends roughly
> half the amount of physical RAM for the shared_buffers ...
Does he? The guide I've seen from him AFAIR states that you should
allocate around 10% of ph
"David Esposito" <[EMAIL PROTECTED]> writes:
> New Box:
> shared_buffers = 131072 (roughly 1GB)
This setting is an order of magnitude too large. There is hardly any
evidence that it's worth setting shared_buffers much above 1.
regards, tom lane
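
For context, shared_buffers in those releases is counted in 8kB buffers, so the figure above is simply buffers times page size:

echo $((131072 * 8192))    # 131072 buffers x 8kB page = 1073741824 bytes, i.e. the "roughly 1GB"
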
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> shared_buffers = 131072 (roughly 1GB)
> max_fsm_relations = 1
> max_fsm_pages = 1000
> sort_mem = 4096
> vacuum_mem = 262144
> Roughly 25 - 30 connections open (mos
> Another factor, not under our control, is that if the shared memory
> region gets too large the kernel may decide to swap out portions of
> it that haven't been touched lately. This of course is completely
> counterproductive, especially if what gets swapped is a dirty buffer,
> which'll event
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> In my testing with a *particular* environment (Linux kernel 2.2.x,
> pgbench), it was indicated that too many shared buffers reduced the
> performance even though there was lots of memory, say 1GB. I'm not
> sure why, but I suspect there is a significant o
> Christian Bucanac <[EMAIL PROTECTED]> writes:
> >> I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
> >> for sort mem. This way with the DB server serving a max of 32 application
> >> servers the kernel and other processes should still have the last 64Mb RAM.
>
> This is a
Hi Adam,
There are a few links to benchmark-type things you might find useful at:
http://techdocs.postgresql.org/oresources.php#benchmark
Hope they're useful.
:-)
Regards and best wishes,
Justin Clift
Adam Manock wrote:
>
> >This is almost certainly a lousy idea. You do *not* want to ch
>This is almost certainly a lousy idea. You do *not* want to chew up all
>available memory for PG shared buffers; you should leave a good deal of
>space for kernel-level disk buffers.
I decided to start high on buffers because of Bruce's:
http://www.ca.postgresql.org/docs/hw_performan
Christian Bucanac <[EMAIL PROTECTED]> writes:
>> I am going to try 768M (98304) for buffers and 6144 (6144 * 32 = 192M)
>> for sort mem. This way with the DB server serving a max of 32 application
>> servers the kernel and other processes should still have the last 64Mb RAM.
This is almost certai
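
The arithmetic behind that plan, for reference: buffers are 8kB pages and sort_mem is in kB per sort, per backend, which is where the 768M and 192M figures come from.

echo $((98304 * 8192))       # 98304 buffers x 8kB = 805306368 bytes = 768MB of shared buffers
echo $((6144 * 1024 * 32))   # 6144kB sort_mem x 32 backends = 201326592 bytes = 192MB, one sort each
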
Sure, here it comes.
/Buckis
-----Original Message-----
From: Adam Manock [mailto:[EMAIL PROTECTED]]
Sent: 11 July 2001 16:26
To: Christian Bucanac
Subject: RE: [GENERAL] Performance tuning for linux, 1GB RAM, dual CPU?
We should move this discussion back to the list... others may
On Tue, Jul 10, 2001 at 07:44:34AM -0400, Adam Manock wrote:
: Hi,
:
: I am about to put a 7.1.2 server into production on RedHat 7.1
: The server will be dedicated to PostgreSQL, running a bare minimum of
: additional services.
: If anyone has already tuned the configurable parameters on a dual
On Tuesday, 10 July 2001 13:44, Adam Manock wrote:
> Hi,
>
> I am about to put a 7.1.2 server into production on RedHat 7.1
> The server will be dedicated to PostgreSQL, running a bare minimum of
> additional services.
> If anyone has already tuned the configurable parameters on a dual PIII w/
On 14 Jun 2001, denis wrote:
> I use a linux/mandrake 7.2 on PIII 350
> when doing
> 1 - create an initialisation file
> i=0;
> loadfile="/usr/local/pgsql/param/loadfile"
> rm -fr $loadfile ;
> #creating a file with 1500 records
> while [ $i -lt 1500 ] ; do
> i=`expr $i +
> As for the drive in that machine, doing inserts on it was SLOW. Slower
> even than on our beater development machine. I suppose I could have fiddled
> with hdparm to increase the disk I/O, but that would have been a temporary
> fix at best. Our CGI applications were eating lots of CPU time,