Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread Mark Kirkwood
[EMAIL PROTECTED] wrote:
I have tried to adjust my server for a couple of weeks with some success, but it is
still slow when the server comes under stress in the morning from many connections. I
use postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.
For the 1 1/2 years since I started to use the database server, after optimizing
postgresql.conf everything went fine, until a couple of weeks ago, when my database
grew to 3.5 Gb and there were more than 160 concurrent connections.
The server seems slower in the rush-hour period than before. There
is some swap activity too. My top and meminfo output are shown below:

You might just be running low on RAM - your sort_mem setting means that
160 connections need about 3.1G. Add to that the 256M for your
shared_buffers and there may not be much left for the OS to use
effectively (this could explain the fact that some swap is being used).
Is reducing sort_mem an option?
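A back-of-envelope check of that arithmetic (the original postgresql.conf is not quoted in this thread, so the sort_mem value of 20000 KB and the 32768-page shared_buffers used below are inferred from the 3.1G and 256M figures, not taken from the poster's config):

```python
# Rough memory arithmetic for 7.3-era settings.
# sort_mem is per sort operation, per backend, in KB; the value 20000
# is an assumption inferred from "160 connections need about 3.1G".
sort_mem_kb = 20000
connections = 160

total_gb = sort_mem_kb * connections / 1024 / 1024
print(f"worst-case sort memory: {total_gb:.1f} GB")   # ~3.1 GB

# shared_buffers is counted in 8 KB pages; 32768 pages = 256 MB.
shared_buffers_pages = 32768
shared_mb = shared_buffers_pages * 8 / 1024
print(f"shared_buffers: {shared_mb:.0f} MB")          # 256 MB
```

On a 4 GB box, 3.1 GB of potential sort memory plus 256 MB of shared buffers leaves almost nothing for the OS page cache, which is consistent with the swapping described above.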
regards
Mark
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread Michael Adler
On Sun, Jan 02, 2005 at 09:54:32AM +0700, [EMAIL PROTECTED] wrote:
> postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.

You may want to try disabling hyperthreading, if you don't mind
rebooting. 

> grew up to 3.5 Gb and there were more than 160 concurrent connections.

Looks like your growing dataset won't fit in your OS disk cache any
longer. Isolate your most problematic queries and check out their
query plans. I bet you have some sequential scans that used to read
from cache but now need to read the disk. An index may help you. 

More RAM wouldn't hurt. =)

 -Mike Adler



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread amrit
> > postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB of RAM.
>
> You may want to try disabling hyperthreading, if you don't mind
> rebooting.

Can you give me an idea why I should use the SMP kernel instead of the bigmem kernel
[i.e. turn off hyperthreading]? Will it be better to turn it off?

> > grew up to 3.5 Gb and there were more than 160 concurrent connections.
>
> Looks like your growing dataset won't fit in your OS disk cache any
> longer. Isolate your most problematic queries and check out their
> query plans. I bet you have some sequential scans that used to read
> from cache but now need to read the disk. An index may help you.
>
> More RAM wouldn't hurt. =)

I think there may be some heavy queries in our program, and I will try to locate
them.
But if I reduce the config to:
max_connections = 160
shared_buffers = 2048 [Total = 2.5 Gb.]
sort_mem = 8192 [Total = 1280 Mb.]
vacuum_mem = 16384
effective_cache_size  = 128897 [= 1007 Mb. = 1 Gb.  ]
Will it be more suitable for my server than before?
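As a unit sanity check on the proposal above: in 7.3, shared_buffers and effective_cache_size are counted in 8 KB pages, while sort_mem and vacuum_mem are in KB. Recomputing the bracketed totals (a sketch, not the poster's own arithmetic) shows they do not all match - 2048 buffer pages is only 16 MB, nowhere near 2.5 Gb:

```python
# Convert the proposed 7.3-era settings to actual memory figures.
PAGE_KB = 8  # shared_buffers and effective_cache_size are in 8 KB pages

shared_buffers = 2048
sort_mem = 8192            # KB, per backend sort
vacuum_mem = 16384         # KB, used while vacuuming
effective_cache_size = 128897
max_connections = 160

print(shared_buffers * PAGE_KB / 1024, "MB shared")             # 16.0 MB
print(sort_mem * max_connections / 1024, "MB worst-case sort")  # 1280.0 MB
print(effective_cache_size * PAGE_KB / 1024, "MB cache hint")   # ~1007 MB
```

The sort_mem and effective_cache_size brackets check out; only the shared_buffers one does not, which may explain the confusion in the replies that follow.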

Thanks for all comment.
Amrit
Thailand



Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread Dave Cramer
The common wisdom of shared buffers is around 6-10% of available memory. 
Your proposal below is about 50% of memory.

I'm not sure what the original numbers actually meant; they are quite large.
Also, effective_cache_size is the sum of kernel buffers + shared_buffers, so it
should be bigger than shared_buffers.

Also, turning hyperthreading off may help; it is unlikely to be doing any
good unless you are running a relatively new (2.6.x) kernel.

I presume you are vacuuming on a regular basis?
[EMAIL PROTECTED] wrote:
>>> postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 GB
>>> of RAM.
>>
>> You may want to try disabling hyperthreading, if you don't mind
>> rebooting.
>
> Can you give me an idea why I should use the SMP kernel instead of the bigmem
> kernel [i.e. turn off hyperthreading]? Will it be better to turn it off?
>
>>> grew up to 3.5 Gb and there were more than 160 concurrent connections.
>>
>> Looks like your growing dataset won't fit in your OS disk cache any
>> longer. Isolate your most problematic queries and check out their
>> query plans. I bet you have some sequential scans that used to read
>> from cache but now need to read the disk. An index may help you.
>> More RAM wouldn't hurt. =)
>
> I think there may be some heavy queries in our program, and I will try to
> locate them.
> But if I reduce the config to:
> max_connections = 160
> shared_buffers = 2048 [Total = 2.5 Gb.]
> sort_mem = 8192 [Total = 1280 Mb.]
> vacuum_mem = 16384
> effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.]
> Will it be more suitable for my server than before?
> Thanks for all comment.
> Amrit
> Thailand

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread amrit
> The common wisdom of shared buffers is around 6-10% of available memory.
> Your proposal below is about 50% of memory.
>
> I'm not sure what the original numbers actually meant, they are quite large.
>
I will try to reduce shared_buffers to 1536 [1.87 Mb].

> also effective cache is the sum of kernel buffers + shared_buffers so it
> should be bigger than shared buffers.
I will also set effective_cache_size to 2097152 [2 Gb].
I will give you the result tomorrow [4/12/05], which will be an official working day
at my hospital [which has more than 1700 OPD patients/day].


> Also turning hyperthreading off may help, it is unlikely it is doing any
> good unless you are running a relatively new (2.6.x) kernel.
Why? Could you give me the reason?

> I presume you are vacuuming on a regular basis?
Yes, vacuumdb daily.





Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread Mark Kirkwood
[EMAIL PROTECTED] wrote:
> max_connections = 160
> shared_buffers = 2048 [Total = 2.5 Gb.]
> sort_mem = 8192 [Total = 1280 Mb.]
> vacuum_mem = 16384
> effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.]
> Will it be more suitable for my server than before?

I would keep shared_buffers in the 10000-20000 range, as this is
allocated *once* into shared memory, so it only uses 80-160 Mb in *total*.
The lower sort_mem will help reduce memory pressure (as this is
allocated for every backend connection) and this will help performance -
*unless* you have lots of queries that need to sort large datasets. If
so, then these will hammer your I/O subsystem, possibly canceling any
gain from freeing up more memory. So there is a need to understand what
sort of workload you have!
best wishes
Mark


Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread amrit
> > max_connections = 160
> > shared_buffers = 2048 [Total = 2.5 Gb.]
> > sort_mem = 8192 [Total = 1280 Mb.]
> > vacuum_mem = 16384
> > effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.]
> > Will it be more suitable for my server than before?
>
> I would keep shared_buffers in the 10000-20000 range, as this is
> allocated *once* into shared memory, so it only uses 80-160 Mb in *total*.

You mean that if I increase shared_buffers to around 12000 [160 connections],
this will not affect the memory usage?

> The lower sort_mem will help reduce memory pressure (as this is
> allocated for every backend connection) and this will help performance -
> *unless* you have lots of queries that need to sort large datasets. If
> so, then these will hammer your i/o subsystem, possibly canceling any
> gain from freeing up more memory. So there is a need to understand what
> sort of workload you have!

Will increasing the effective_cache_size to around 200000 make a little bit of
improvement? Do you think so?

Any comment please , thanks.
Amrit
Thailand




Re: [PERFORM] Low Performance for big hospital server ..

2005-01-02 Thread Mark Kirkwood
[EMAIL PROTECTED] wrote:
>>> max_connections = 160
>>> shared_buffers = 2048 [Total = 2.5 Gb.]
>>> sort_mem = 8192 [Total = 1280 Mb.]
>>> vacuum_mem = 16384
>>> effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.]
>>> Will it be more suitable for my server than before?
>>
>> I would keep shared_buffers in the 10000-20000 range, as this is
>> allocated *once* into shared memory, so it only uses 80-160 Mb in *total*.
>
> You mean that if I increase shared_buffers to around 12000 [160 connections],
> this will not affect the memory usage?

shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is
shared, so no matter how many connections you have it will only use 96M.
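That figure is easy to verify (96 here counts 1 MB as 1000 KB; it is about 94 MiB in binary units):

```python
# shared_buffers is one shared allocation, independent of the number
# of connections: 12000 pages * 8192 bytes per page.
buffers = 12000
page_bytes = 8192

total_bytes = buffers * page_bytes
print(total_bytes, "bytes")         # 98304000 bytes
print(total_bytes / 1024, "KB")     # 96000.0 KB, i.e. "96 MB"
```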

>> The lower sort_mem will help reduce memory pressure (as this is
>> allocated for every backend connection) and this will help performance -
>> *unless* you have lots of queries that need to sort large datasets. If
>> so, then these will hammer your I/O subsystem, possibly canceling any
>> gain from freeing up more memory. So there is a need to understand what
>> sort of workload you have!
>
> Will increasing the effective_cache_size to around 200000 make a little bit of
> improvement? Do you think so?

I would leave it at the figure you proposed (128897), and monitor your
performance.
(You can always increase it later and see what the effect is.)
regards
Mark