On Mon, 21 Sep 2009, Alan McKay wrote:
We have explain and analyze which tell us about the cost of a query
time-wise, but what does one use to determine (and trace / predict?)
memory consumption?
In Postgres, memory consumption for all operations is generally capped at
the value of work_mem.
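One way to see that cap in action (table and column names here are hypothetical): EXPLAIN ANALYZE reports whether a sort fit within work_mem or spilled to disk:

```sql
SET work_mem = '4MB';

EXPLAIN ANALYZE
SELECT * FROM events ORDER BY created_at;  -- "events" is a hypothetical table
-- In the plan output, "Sort Method: quicksort  Memory: ...kB" means the
-- sort fit inside work_mem; "Sort Method: external merge  Disk: ...kB"
-- means it spilled to temporary files on disk.
```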
astro77 writes:
> Kevin Grittner wrote:
>> I would try to minimize how many XML values it had to read, parse, and
>> search. The best approach that comes to mind would be to use tsearch2
>> techniques (with a GIN or GiST index on the tsvector) to identify
>> which rows contain 'fdc3da1f-060f-4c34
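A sketch of the tsearch2-style approach Kevin describes, with hypothetical table and column names (on 8.4 the text-search functions are built in):

```sql
-- Hypothetical table "docs" with an XML payload in "xml_data".
ALTER TABLE docs ADD COLUMN search_vec tsvector;
UPDATE docs SET search_vec = to_tsvector('simple', xml_data::text);
CREATE INDEX docs_search_gin ON docs USING gin (search_vec);

-- Narrow down to candidate rows before doing any XML parsing:
SELECT id FROM docs
WHERE search_vec @@ plainto_tsquery('simple', 'fdc3da1f-060f-4c34');
```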
On Mon, Sep 21, 2009 at 4:08 PM, Robert Haas wrote:
> Setting work_mem too high is a frequent cause of problems of this sort, I
> think.
Too high? How high is too high?
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of "In Defense of Food"
On Tue, Sep 22, 2009 at 1:36 PM, Alan McKay wrote:
> Too high? How high is too high?
in a very simple scenario, you have 100 connections opened, and all of
them run the query that was the reason you bumped work_mem to 256M.
All of a sudden postgresql starts to complain about lack of RAM,
because each of those backends can grab up to 256 MB for a single sort or
hash step (worst case roughly 100 x 256 MB = 25 GB).
> Best practice to avoid that is to bump work_mem temporarily
> before the query and then lower it again; that lowers the chance of memory
> exhaustion.
Interesting - I can do that dynamically?
On Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe wrote:
> I'm looking at running session servers in ram. All the data is
> throw-away data, so my plan is to have a copy of the empty db on the
> hard drive ready to go, and have a script that just copies it into ram
> and starts the db there. We're currently I/O write bound …
On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay wrote:
>> Best practice to avoid that is to bump work_mem temporarily
>> before the query and then lower it again; that lowers the chance of memory
>> exhaustion.
>
> Interesting - I can do that dynamically?
you can do: set work_mem = '128MB'; select 1; reset work_mem; (it is a
plain per-session setting, no restart required)
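A minimal sketch of the per-query bump (the table and query are hypothetical). SET LOCAL inside a transaction is the safer variant, since the setting reverts automatically at COMMIT or ROLLBACK:

```sql
-- Session-level: change it, run the query, put it back.
SET work_mem = '256MB';
SELECT * FROM big_table ORDER BY some_column;  -- hypothetical heavy sort
RESET work_mem;  -- back to the configured default

-- Transaction-level: reverts on its own, even if the query errors out.
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT * FROM big_table ORDER BY some_column;
COMMIT;
```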
Hello
this is maybe off topic. Do you know memcached? We have been using it
alongside postgresql for six or seven months for short-lived data, with big
success.
regards
Pavel Stehule
2009/9/22 Greg Spiegelberg :
> On Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe
> wrote:
>>
>> I'm looking at running session servers in ram. …
* Scott Marlowe [090921 19:39]:
> I'm looking at running session servers in ram. All the data is
> throw-away data, so my plan is to have a copy of the empty db on the
> hard drive ready to go, and have a script that just copies it into ram
> and starts the db there. We're currently I/O write bound …
On Tue, Sep 22, 2009 at 2:42 AM, Magnus Hagander wrote:
> That's not true at all.
>
> If you have many relations in your cluster that have at some point been
> touched, the stats collector can create a *significant* load on the I/O
> system. I've come across several cases where the only choice was …
Dear all
I am having a problem of high cpu loads in my postgres server during peak
time. Following are the
details of my setup (details as per the postgres wiki) .
* PostgreSQL version
o Run "select version();" in psql or PgAdmin III and provide the
full, exact output.
…
Hey Everyone,
So, I have a nice PostgreSQL server (8.4) up and running our
database. I even managed to get master->slave going without trouble
using the excellent Skytools. However, I want to maximize speed and the
hot updates where possible, so I am wanting to prune unused indexes
from the database.
On Tue, Sep 22, 2009 at 9:54 AM, Shiva Raman wrote:
> Dear all
>
> I am having a problem of high cpu loads in my postgres server during peak
> time. Following are the
> details of my setup (details as per the postgres wiki) .
>
> * PostgreSQL version
> o Run "select version();" in psql …
Shiva Raman wrote:
Dear all
I am having a problem of high cpu loads in my postgres server during
peak time. Following are the
details of my setup (details as per the postgres wiki) .
Following is the output of the top command during off-peak time:
top - 18:36:56 up 77 days, 20:33, 1 user, …
Andy Colson wrote:
Shiva Raman wrote:
Dear all
I am having a problem of high cpu loads in my postgres server during
peak time. Following are the
details of my setup (details as per the postgres wiki) .
Following is the output of the top command during off-peak time:
top - 18:36:56 up 77 days, …
On Tue, Sep 22, 2009 at 7:35 PM, Stef Telford wrote:
> Hey Everyone,
> So, I have a nice postgreSQL server (8.4) up and running our database. I
> even managed to get master->slave going without trouble using the excellent
> skytools.. however, I want to maximize speed and the hot updates where
Stef Telford wrote:
Hey Everyone,
So, I have a nice postgreSQL server (8.4) up and running our
database. I even managed to get master->slave going without trouble
using the excellent skytools.. however, I want to maximize speed and the
hot updates where possible, so I am wanting to prune unused indexes …
> -Original Message-
> From: Shiva Raman
> Sent: Tuesday, September 22, 2009 10:55
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] High CPU load on Postgres Server during
> Peak times
>
> Dear all
>
> I am having a problem of high cpu loads in my postgres
On Monday 21 September 2009, Scott Marlowe wrote:
> I'm looking at running session servers in ram.
> Does anybody any real world experience here or any words of sage
> advice before I go off and start testing this?
Use memcached for session data.
--
"No animals were harmed in the recording of t…
Alan Hodgson wrote:
> On Monday 21 September 2009, Scott Marlowe wrote:
>> I'm looking at running session servers in ram.
>
> Use memcached for session data.
IMHO postgres is more appropriate for some types of session data.
One of the apps I work on involves session data that consists of
geospatial …
On Tue, Sep 22, 2009 at 12:01 PM, Ron Mayer
wrote:
> Alan Hodgson wrote:
>> On Monday 21 September 2009, Scott Marlowe wrote:
>>> I'm looking at running session servers in ram.
>>
>> Use memcached for session data.
>
> IMHO postgres is more appropriate for some types of session data.
>
> One of the apps I work on involves session data that consists of geospatial …
Stef,
>> is it as simple as taking the output from:
>>   select indexrelname from pg_stat_user_indexes
>>   where idx_scan = 0 and idx_tup_read = 0 and idx_tup_fetch = 0;
>>
>> And .. dropping?
Almost that simple. The caveat is that indexes which are only used for
the enforcement of unique constraints (or primary keys) will also show zero
scans, but must not be dropped: they are doing work even though no query
ever scans them.
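A sketch of the query with that caveat folded in, joining against pg_index to skip unique and primary-key indexes (column names from the standard statistics views; verify against your version's catalogs):

```sql
-- Candidate-for-drop indexes: never scanned, and not enforcing a constraint.
SELECT s.schemaname, s.relname, s.indexrelname
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND s.idx_tup_read = 0
  AND s.idx_tup_fetch = 0
  AND NOT i.indisunique
  AND NOT i.indisprimary;
```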