[PERFORM] Slow Postgresql server

2007-04-11 Thread Jason Lustig

Hello all,

My website has been having issues with our new Linux/PostgreSQL server being somewhat slow. I have done tests using Apache Benchmark, and for pages that do not connect to Postgres the speeds are much faster (334 requests/second vs. 1-2 requests/second), so it seems that Postgres, not Apache, is causing the problem. I did some research, and it seems that the bottleneck is in fact the hard drives! Here's an excerpt from vmstat:


procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
 1  1    140  24780 166636 575144    0    0     0  3900 1462  3299  1  4 49 48  0
 0  1    140  24780 166636 575144    0    0     0  3828 1455  3391  0  4 48 48  0
 1  1    140  24780 166636 575144    0    0     0  2440  960  2033  0  3 48 48  0
 0  1    140  24780 166636 575144    0    0     0  2552 1001  2131  0  2 50 49  0
 0  1    140  24780 166636 575144    0    0     0  3188 1233  2755  0  3 49 48  0
 0  1    140  24780 166636 575144    0    0     0  2048  868  1812  0  2 49 49  0
 0  1    140  24780 166636 575144    0    0     0  2720 1094  2386  0  3 49 49  0


As you can see, almost 50% of the CPU is waiting on I/O. This doesn't seem like it should be happening, since we are using a RAID 1 setup (160+160). We have 1GB of RAM, and have upped shared_buffers to 13000 and work_mem to 8096. What would cause the computer to use such a small percentage of the CPU, with more than half of it waiting on I/O requests?


Thanks a lot
Jason




Re: [PERFORM] Slow Postgresql server

2007-04-12 Thread Jason Lustig

Hi all,

Wow! That's a lot to respond to. Let me go through some of the ideas. First, I just turned on autovacuum; I had forgotten to do that. I'm not seeing a major impact, though. Also, I know that the setup is not optimal for a dedicated server: it's not just for Postgres, it also runs our Apache server. We're just getting started and didn't want to make a major investment right now in the most expensive server we could get. Within the next year, as our traffic grows, we will most likely upgrade, but for now, in the beginning phases of our project, we're going to work with this server.


In terms of RAID not helping speed-wise (only making an impact on data integrity): I was under the impression that even a mirrored disk set improves read speed, because read requests can be sent to either of the disks. Is this incorrect?


I turned on logging again, only logging queries taking more than 5ms, and it caused the same problems. I think it might be an issue within the OS's logging facilities, since it's going through stderr.
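
(That is, with something like this in postgresql.conf:

log_min_duration_statement = 5          # log statements taking longer than 5 ms

rather than the 0 I had before, which logs everything.)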


Some of the queries are definitely making an impact on the speed. We  
are constantly trying to improve performance, and part of that is  
reassessing our indexes and denormalizing data where it would help.  
We're also doing work with memcached to cache the results of some of  
the more expensive operations.


Thanks for all your help guys - it's really fantastic to see the community here! I've got a lot of database experience (mostly with MS SQL and MySQL) but this is my first time doing serious work with Postgres, and it's really a great system with great people too.


Jason

On Apr 12, 2007, at 11:35 AM, Jeff Frost wrote:


On Thu, 12 Apr 2007, Jason Lustig wrote:


0 <-- BM starts here
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
10  0    180 700436  16420 917400    0    0     0   176  278  2923 59 41  0  0  0
11  0    180 696736  16420 917400    0    0     0     0  254  2904 57 43  0  0  0
12  0    180 691272  16420 917400    0    0     0     0  255  3043 60 39  1  0  0
 9  0    180 690396  16420 917400    0    0     0     0  254  3078 63 36  2  0  0


Obviously, I've turned off logging now but I'd like to get it  
running again (without bogging down the server) so that I can  
profile the system and find out which queries I need to optimize.  
My logging settings (with unnecessary comments taken out) were:


So what did you get in the logs when you had logging turned on?  If  
you have the statement logging, perhaps it's worth running through  
pgfouine to generate a report.




log_destination = 'syslog'              # Valid values are combinations of
redirect_stderr = off                   # Enable capturing of stderr into log
log_min_duration_statement = 0          # -1 is disabled, 0 logs all statements
silent_mode = on                        # DO NOT USE without syslog or
log_duration = off
log_line_prefix = 'user=%u,db=%d'       # Special values:
log_statement = 'none'                  # none, ddl, mod, all



Perhaps you just want to log slow queries > 100ms?  But since you  
don't seem to know what queries you're running on each web page,  
I'd suggest you just turn on the following and run your benchmark  
against it, then turn it back off:


log_duration = on
log_statement = 'all'

Then go grab pgfouine and run the report against the logs to see  
what queries are chewing up all your time.
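
If I remember right, generating the report is something like this (the log path is just an example, and the exact options depend on your log format; check the pgfouine docs):

php pgfouine.php -file /var/log/postgresql.log > report.html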


So you know, we're using Postgres 8.2.3. The database currently is pretty small (we're just running a testing database right now with a few megabytes of data). No doubt some of our queries are slow, but I was concerned because no matter how slow the queries were (the worst were taking at most a couple of msecs anyway), I was getting ridiculously slow responses from the server. Outside of logging, our only other non-default postgresql.conf items are:


shared_buffers = 13000          # min 128kB or max_connections*16kB
work_mem = 8096                 # min 64kB

In terms of the server itself, I think that it uses software RAID. How can I tell? Our hosting company set up the server, so I guess I could ask them, but is there a program I can run that will tell me the information? I also ran bonnie++ and got this output:


Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
pgtest        2000M 29277  67 33819  15 15446   4 35144  62 48887   5 152.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP
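
On the software RAID question, from what I've been able to find, Linux md-based software RAID (which seems common for hosting setups) can be inspected with something like:

cat /proc/mdstat                # lists any active md arrays
mdadm --detail /dev/md0         # assuming mdstat shows an array named md0

If /proc/mdstat shows no arrays, it's presumably hardware RAID (or none), and the hosting company would have to confirm.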

[PERFORM] Question about SQL performance

2007-06-04 Thread Jason Lustig
I have some questions about the performance of certain types of SQL  
statements.


What sort of speed increase is there usually with binding parameters (and thus preparing statements) vs. straight SQL with interpolated variables? Will PostgreSQL realize that the following queries are effectively the same (and thus re-use the query plan), or will it think they are different?


SELECT * FROM mytable WHERE item = 5;
SELECT * FROM mytable WHERE item = 10;

Obviously to me or you they could use the same plan. From what I understand (correct me if I'm wrong), if you use parameter binding, like "SELECT * FROM mytable WHERE item = ?", PostgreSQL will know that the queries can re-use the query plan, but I don't know if the system will recognize this in the above situation.
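
For reference, the prepared-statement version I have in mind would look something like this (re-using the mytable example from above):

PREPARE get_item (integer) AS
    SELECT * FROM mytable WHERE item = $1;

EXECUTE get_item(5);    -- planned once, then executed with different parameters
EXECUTE get_item(10);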


Also, what's the difference between prepared statements (using PREPARE and EXECUTE) and regular functions (CREATE FUNCTION)? How do they impact performance? From what I understand, there is no exact parallel to stored procedures (as in MS SQL or Oracle, where they are completely precompiled) in PostgreSQL. At the same time, the documentation (and other sites as well, probably because they don't know what they're talking about when it comes to databases) is vague: PL/pgSQL is often said to support writing stored procedures, but nowhere does it say that PL/pgSQL programs are precompiled.
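
For comparison, here is a sketch of the same query wrapped in a function (I haven't benchmarked either form; get_item is just an illustrative name):

CREATE FUNCTION get_item(integer) RETURNS SETOF mytable AS $$
    SELECT * FROM mytable WHERE item = $1;
$$ LANGUAGE SQL;

SELECT * FROM get_item(5);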


Thanks
Jason



[PERFORM] Autovacuum running out of memory

2007-10-16 Thread Jason Lustig
Whenever I turn on Autovacuum on my database, I get a ton of error  
messages like this in my Postgres log:


Oct 16 06:43:47 [2897]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 06:43:47 [2897]: [1-2]  user=,db= DETAIL:  Failed on request  
of size 524287998.


It always fails on the same request. When I turn off autovacuum, they  
go away. However, when I run VACUUM FULL manually, I don't get this  
error.


My server has 2GB of RAM, and my postgres settings are:

autovacuum = on                         # enable autovacuum subprocess?
                                        # 'on' requires stats_start_collector
                                        # and stats_row_level to also be on

#autovacuum_naptime = 1min              # time between autovacuum runs
#autovacuum_vacuum_threshold = 500      # min # of tuple updates before vacuum
#autovacuum_analyze_threshold = 250     # min # of tuple updates before analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of rel size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of rel size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
                                        # (change requires restart)
#autovacuum_vacuum_cost_delay = -1      # default vacuum cost delay for
                                        # autovacuum, -1 means use
                                        # vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1       # default vacuum cost limit for
                                        # autovacuum, -1 means use
                                        # vacuum_cost_limit

shared_buffers = 2                      # min 128kB or max_connections*16kB
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
#max_prepared_transactions = 5          # can be 0 or more
                                        # (change requires restart)
# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 4096                         # min 64kB
maintenance_work_mem = 500MB            # min 1MB
#max_stack_depth = 2MB                  # min 100kB


Any ideas as to what might be going on?

Thanks
Jason



Re: [PERFORM] Autovacuum running out of memory

2007-10-16 Thread Jason Lustig
There isn't any more error message than this... it simply repeats every minute or so, which is really quite strange. And the user & db really are blank in the log; I didn't trim them (if I had, I would have replaced them with dummy values).


I'm using pg 8.2.4 on Linux 2.6.15.

Jason

--
Jason Lustig
CTO, MavenHaven Inc.
www.mavenhaven.com
Where the Community Finds Wisdom

Israel: 054-231-8476
U.S.:   716-228-8729
Skype:  jasonlustig


On Oct 16, 2007, at 7:45 AM, Richard Huxton wrote:


Not really a performance question, but...

Jason Lustig wrote:
Whenever I turn on Autovacuum on my database, I get a ton of error  
messages like this in my Postgres log:

Oct 16 06:43:47 [2897]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 06:43:47 [2897]: [1-2]  user=,db= DETAIL:  Failed on  
request of size 524287998.
It always fails on the same request. When I turn off autovacuum,  
they go away. However, when I run VACUUM FULL manually, I don't  
get this error.


Is there nothing before this giving the error message some context?
Is the user and database really blank, or have you just trimmed those?
What version of PG is this, and running on what O.S.?

--
  Richard Huxton
  Archonet Ltd




Re: [PERFORM] Autovacuum running out of memory

2007-10-16 Thread Jason Lustig
I lowered the maintenance_work_mem to 50MB and am still getting the  
same errors:


Oct 16 09:26:57 [16402]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:26:57 [16402]: [1-2]  user=,db= DETAIL:  Failed on request  
of size 52428798.

Oct 16 09:27:57 [16421]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:27:57 [16421]: [1-2]  user=,db= DETAIL:  Failed on request  
of size 52428798.

Oct 16 09:29:44 [16500]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:29:44 [16500]: [1-2]  user=,db= DETAIL:  Failed on request  
of size 52428798.


Looking at my free memory (from top) I find

Mem:   2062364k total,  1846696k used,   215668k free,   223324k buffers
Swap:  2104496k total,  160k used,  2104336k free,   928216k cached

So I don't think that I'm running out of memory overall... it seems like Postgres just keeps attempting the same allocation. Is there a reason why Postgres would be doing something without a username or database? Or is that just how autovacuum works?


Thanks,
Jason

--
Jason Lustig
Israel: 054-231-8476
U.S.:   716-228-8729
Skype:  jasonlustig


On Oct 16, 2007, at 8:23 AM, Richard Huxton wrote:


Jason Lustig wrote:
There isn't any more error message than this... it simply repeats  
every minute or so, which is really quite strange. And the user &  
db is really blank in the log, I didn't trim it (if I did I would  
have replaced it with dummy values).


Hmm - odd that you're not getting any connection details.


I'm using pg 8.2.4 on Linux 2.6.15.


Fair enough.


Oct 16 06:43:47 [2897]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 06:43:47 [2897]: [1-2]  user=,db= DETAIL:  Failed on  
request of size 524287998.


Well, since this is about 500MB and your maintenance_work_mem is  
set to 500MB that's the obvious place to start. It might just be  
that you've not got enough free memory.


What happens if you set maintenance_work_mem to say 50MB?
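
That is, something like this in postgresql.conf, followed by a reload:

maintenance_work_mem = 50MB

Autovacuum workers should pick the new value up on their next run.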

--
  Richard Huxton
  Archonet Ltd





Re: [PERFORM] Autovacuum running out of memory

2007-10-16 Thread Jason Lustig

I ran "ulimit -a" for the postgres user, and here's what I got:

core file size  (blocks, -c) 20
data seg size   (kbytes, -d) 20
max nice(-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 32635
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) 20
open files  (-n) 100
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 100
virtual memory  (kbytes, -v) 20
file locks  (-x) unlimited



--
Jason Lustig
Israel: 054-231-8476
U.S.:   716-228-8729
Skype:  jasonlustig


On Oct 16, 2007, at 10:01 AM, Richard Huxton wrote:


Jason Lustig wrote:
I lowered the maintenance_work_mem to 50MB and am still getting  
the same errors:

Oct 16 09:26:57 [16402]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:26:57 [16402]: [1-2]  user=,db= DETAIL:  Failed on  
request of size 52428798.

Oct 16 09:27:57 [16421]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:27:57 [16421]: [1-2]  user=,db= DETAIL:  Failed on  
request of size 52428798.

Oct 16 09:29:44 [16500]: [1-1]  user=,db= ERROR:  out of memory
Oct 16 09:29:44 [16500]: [1-2]  user=,db= DETAIL:  Failed on  
request of size 52428798.


Hmm - it's now failing on a request of 50MB, which shows it is in  
fact maintenance_work_mem that's the issue.



Looking at my free memory (from TOP) I find
Mem:   2062364k total,  1846696k used,   215668k free,   223324k  
buffers
Swap:  2104496k total,  160k used,  2104336k free,   928216k  
cached
So I don't think that I'm running out of memory total... it seems  
like it's continually trying to do it. Is there a reason why  
Postgres would be doing something without a username or database?  
Or is that just how autovacuum works?


I've not seen an error at startup before, but if it's not connected  
yet then that would make sense.


I'm guessing this is a per-user limit that the postgres user is  
hitting. If you "su" to user postgres and run "ulimit -a" that  
should show you if you have any limits defined. See "man bash" for  
more details on ulimit.
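
For example, something like this should show the limits the postgres user runs under (assuming it has a login shell):

su - postgres -c 'ulimit -a'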



--
  Richard Huxton
  Archonet Ltd




Re: [PERFORM] Autovacuum running out of memory

2007-10-16 Thread Jason Lustig

On Oct 16, 2007, at 10:22 AM, Richard Huxton wrote:


Add some lines to /etc/security/limits.conf to increase them.


Sorry for being somewhat of a Linux novice, but what is the best way to do this? The ulimit options don't seem to map one-to-one onto entries in the limits.conf file.
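
Reading "man limits.conf", my best guess at the mapping is that ulimit -v corresponds to the "as" item, -d to "data", and -n to "nofile", with the memory items given in KB. So would something like the following (for the postgres user, with 1048576 KB = 1GB as a purely illustrative value) be the right idea?

postgres        soft    as      1048576
postgres        hard    as      1048576
postgres        soft    data    1048576
postgres        hard    data    1048576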


Thanks,
Jason
