Re: [PERFORM] slow update of index during insert/copy

2008-09-05 Thread Thomas Finneid



Greg Smith wrote:
In practice, the "ordered" 
mode (the default for ext3) seems sufficient to prevent database 
corruption. There is a substantial performance hit to running in full 
journal mode like you're doing; 


where do you see which mode I am running in? I haven't specified any 
modes in any config or commands, so I am assuming it's using the ext3 
defaults, which I am assuming is "ordered".
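
One way to check, as a sketch (assuming ext3 on a Linux 2.6 kernel; the 
exact messages vary):

```
# The kernel logs the data journaling mode when the filesystem is mounted:
dmesg | grep -i "mounted filesystem"
# e.g. "EXT3-fs: mounted filesystem with ordered data mode."

# /proc/mounts only shows a data= option when it was set explicitly,
# so no data= entry here usually means the "ordered" default:
grep ext3 /proc/mounts
```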


regards



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] too many clog files

2008-09-05 Thread Matthew Wakeling

On Fri, 5 Sep 2008, Duan Ligong wrote:

Well, we could not wait so long and just moved the old clog files.


Congratulations. You have probably just destroyed your database.

Matthew

--
Lord grant me patience, and I want it NOW!



[PERFORM] You may need to increase max_locks_per_transaction

2008-09-05 Thread Jonas Pacheco
Considering a quad-core server processor, 4 GB of RAM, and a SATA 2
disk.

What is the recommended setting for the parameters:

max_connections:70
max_prepared_transactions?
shared_buffers? 
wal_buffers?
max_fsm_relations? 
max_fsm_pages?

Best regards,
Jonas Rodrigo




Re: [PERFORM] You may need to increase max_locks_per_transaction

2008-09-05 Thread Claus Guttesen
> Considering a quad core server processor, 4 GBs of RAM memory, disk Sata
> 2.
>
> What is the recommended setting for the parameters:
>
> max_connections:70

Depends on how many clients access the database.

> shared_buffers?

I have mine at 512 MB but I will lower it and see how it affects
performance. I have 16 GB in my server.

> max_fsm_relations?
> max_fsm_pages?

Perform a vacuum analyze verbose and look at the last few lines. This
will tell you whether you need to increase max_fsm_*.
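
As a sketch (the exact wording varies by 8.x version, and the numbers
here are made up for illustration), the tail of the verbose output looks
something like:

```sql
VACUUM ANALYZE VERBOSE;
-- ...
-- INFO:  free space map contains 114840 pages in 120 relations
-- DETAIL:  A total of 160000 page slots are in use (including overhead).
--          160000 page slots are required to track all free space.
-- Current limits are:  153600 page slots, 1000 relations, using 965 kB.
```

If the "page slots required" figure exceeds max_fsm_pages (or the
relation count exceeds max_fsm_relations), raise the corresponding
setting and restart.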

Consider lowering random_page_cost so the planner favors indexes more
often than seq. scans.

But if you don't get a decent raid-controller your data will move slowly
and tuning will only make a minor difference.

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.

Shakespeare



Re: [PERFORM] SAN and full_page_writes

2008-09-05 Thread Nikolas Everett
I seem to have answered my own question.  I'm sending the answer to the list
in case someone else has the same question one day.

According to the NetApp documentation, it does protect me from partial page
writes.  Thus, full_page_writes = off.
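
For reference, the corresponding postgresql.conf entry would be (a
sketch; this is only safe because the filer guarantees atomic page
writes — do not disable it on storage without that guarantee):

```
# postgresql.conf
full_page_writes = off   # safe ONLY when storage prevents partial page writes
```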


On Wed, Sep 3, 2008 at 12:03 PM, Nikolas Everett <[EMAIL PROTECTED]> wrote:

> I have the honor to be configuring Postgres to back into a NetApp FAS3020
> via fiber.
>
> Does anyone know if the SAN protects me from breakage due to partial page
> writes?
>
> If anyone has any SAN-specific Postgres knowledge, I'd love to hear your
> words of wisdom.
>
> For reference:
> [EMAIL PROTECTED] bonnie]$ ~neverett/bonnie++-1.03a/bonnie++
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> localhost.lo 32104M 81299  94 149848  30 42747   8 45465  61 55528   4 495.5   0
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
>
>


Re: [PERFORM] indexing for distinct search in timestamp based table

2008-09-05 Thread Vladimir Sitnikov
You might get a great improvement for the '%' cases using an index on
(channel_name, start_time) and a little bit of pl/pgsql.

Basically, you need to implement the following algorithm:
 1) curr_channel = ( select min(channel_name) from ad_log )
 2) record_exists = ( select 1 from ad_log where channel_name = curr_channel
and _all_other_conditions_ limit 1 )
 3) if record_exists == 1 then add curr_channel to the results
 4) curr_channel = ( select min(channel_name) from ad_log where channel_name >
curr_channel )
 5) if curr_channel is not null then goto 2
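
As a quick illustration of the loop above (a sketch using SQLite via
Python's sqlite3 in place of PostgreSQL, with the extra-conditions check
omitted; the table and column names follow the thread):

```python
import sqlite3

# Toy data set: 3 distinct channels, 1000 rows each.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ad_log (channel_name TEXT, start_time INTEGER)")
conn.executemany("INSERT INTO ad_log VALUES (?, ?)",
                 [(c, t) for c in ("a", "b", "c") for t in range(1000)])
conn.execute("CREATE INDEX ad_log_idx ON ad_log (channel_name, start_time)")

def distinct_channels(conn):
    """Emulate a loose index scan: one min() probe per distinct value,
    instead of scanning all rows."""
    results = []
    # Step 1: seed with the smallest value in the index.
    curr = conn.execute("SELECT MIN(channel_name) FROM ad_log").fetchone()[0]
    while curr is not None:
        results.append(curr)
        # Steps 4-5: jump straight to the next larger distinct value.
        curr = conn.execute(
            "SELECT MIN(channel_name) FROM ad_log WHERE channel_name > ?",
            (curr,)).fetchone()[0]
    return results

print(distinct_channels(conn))  # ['a', 'b', 'c']
```

With 20 distinct values in 10,000,000 rows, this does roughly 20 index
probes rather than one full scan.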


I believe it might make sense to implement this approach in the core (I
would call it an "index distinct scan").

That could dramatically improve "select distinct col from tbl" and
"select col from tbl group by col" kinds of queries when there exists an
index on col and the column in question has a very small number of
distinct values.

For instance: say a table has 10,000,000 rows, while the column of
interest has only 20 distinct values. In that case, the database would
be able to fetch each of those 20 values in virtually 20 index lookups.

What does the community think about that?


Re: [PERFORM] too many clog files

2008-09-05 Thread Alvaro Herrera
Duan Ligong wrote:

> Greg wrote:
> > On Tue, 2 Sep 2008, Duan Ligong wrote:
> > > - Does Vacuum delete the old clog files?
> > 
> > Yes, if those transactions are all done.  One possibility here is that 
> > you've got some really long-running transaction floating around that is 
> > keeping normal clog cleanup from happening.  Take a look at the output 
> > from "select * from pg_stat_activity" and see if there are any really old 
> > transactions floating around.
> 
> Well, we could not wait so long and just moved the old clog files.
> The postgresql system is running well.

Move the old clog files back where they were, and run VACUUM FREEZE in
all your databases.  That should clean up all the old pg_clog files, if
you're really that desperate.  This is not something that I'd recommend
doing on a periodic basis ...
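
A sketch of one way to cover the whole cluster from the shell (vacuumdb's
--all and --freeze options; equivalent to connecting to each database
and issuing VACUUM FREEZE):

```
# Freeze every database in the cluster after restoring the clog files:
vacuumdb --all --freeze
```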

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PERFORM] too many clog files

2008-09-05 Thread Scott Marlowe
On Thu, Sep 4, 2008 at 8:58 PM, Duan Ligong <[EMAIL PROTECTED]> wrote:
> Thanks for your reply.
>
> Greg wrote:
>> On Tue, 2 Sep 2008, Duan Ligong wrote:
>> > - Does Vacuum delete the old clog files?
>>
>> Yes, if those transactions are all done.  One possibility here is that
>> you've got some really long-running transaction floating around that is
>> keeping normal clog cleanup from happening.  Take a look at the output
>> from "select * from pg_stat_activity" and see if there are any really old
>> transactions floating around.
>
> Well, we could not wait so long and just moved the old clog files.
> The postgresql system is running well.
> But now the size of pg_clog has exceeded 50MB and there
> are 457 clog files.

That is absolutely not the thing to do.  Put them back, and do a
dump-restore on the database if you need to save a few hundred megs on
the drive.  Deleting files from underneath postgresql is a great way
to break your database in new and interesting ways which are often
fatal to your data.
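
A minimal sketch of such a dump and restore (the paths are placeholders;
stop all clients first, and keep the dump somewhere safe before touching
the old data directory):

```
pg_dumpall > /backup/cluster.sql     # dump every database in the cluster
# stop the server and move the old data directory aside, then:
initdb -D /var/lib/pgsql/data        # create a fresh cluster
# start the server, then reload everything:
psql -f /backup/cluster.sql postgres
```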



Re: [PERFORM] You may need to increase max_locks_per_transaction

2008-09-05 Thread Greg Smith

On Fri, 5 Sep 2008, Jonas Pacheco wrote:


max_prepared_transactions?


This is covered pretty well by the documentation: 
http://www.postgresql.org/docs/current/static/runtime-config-resource.html


There are suggestions for everything else you asked about (and a few more 
things you should also set) at 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
