[GENERAL] maximum amount of data to be written during checkpoint?

2008-01-24 Thread hubert depesz lubaczewski
hi,
what is the maximum amount of data to be written in checkpoint?

i always assumed this to be .

but some last tests show it to be much more. what am i missing?

best regards,

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - a blog for you (and my CV)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] retry: converting ASCII to UTF-8

2008-01-24 Thread Albe Laurenz
Tom Hart wrote:
> Originally I set up the database to use ASCII encoding. However we
> have a large base of Spanish speaking members and services,
> and we need utf-8 encoding to maintain and support the extended character
> sets. In my naivety I thought it would be a relatively simple process 
> to convert the db but I've found this to not be the case. I tried doing 
> a dump and restore into a new database with the proper encoding, but 
> pg_restore is getting hung up on one of the tables, our largest by far 
> (~1gb, not huge I know). When I tried pg_restore from a command line
> (I was using pgAdmin, I know i'm a nub) I received this error.
> 
> C:\Program Files\PostgreSQL\8.2\bin>pg_restore.exe -i -h 192.168.1.xxx -p 
> 5432 -U foobar -d warehouse_utf8 -a -t "transaction" -v 
> "O:\foo\bar\pg_dump_transaction.backup"
> pg_restore: connecting to database for restore
> Password:
> pg_restore: restoring data for table "transaction"
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 1706; 0 17861 TABLE DATA 
> transaction foobar
> pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence for 
> encoding "UTF8": 0xc52f
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> CONTEXT:  COPY transaction, line 209487
> WARNING: errors ignored on restore: 1

You will have to figure out in which encoding the data really are.

SQL_ASCII will allow anything in, and the client is responsible for
feeding the right thing.

Somebody stored a 0xC5 in the database, which is not ASCII.
In WIN-1252, ISO8859-1, and ISO8859-15 that would be Å.
Could that be?

Once you find out the correct encoding, you can use the -E switch of
pg_dump to set that encoding for your exported data.

If your clients entered consistent data, that should work.

If different clients used different encodings, you might end up
sorting it out manually...
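One quick way to narrow down what clients were really sending is to pull a suspect byte string (here, the 0xC5 sequence from the COPY error) and test-decode it against candidate encodings. A rough sketch in Python; the candidate list and sample bytes are illustrative assumptions, not part of the thread:

```python
# Test-decode raw bytes against a few likely single-byte encodings
# to see which ones accept them. 0xC5 followed by '/' is the byte
# sequence PostgreSQL rejected as invalid UTF-8.
candidates = ["utf-8", "cp1252", "iso8859-1", "iso8859-15"]

def plausible_encodings(raw: bytes):
    """Return (encoding, decoded_text) pairs for encodings that accept raw."""
    hits = []
    for enc in candidates:
        try:
            hits.append((enc, raw.decode(enc)))
        except UnicodeDecodeError:
            pass
    return hits

sample = b"\xc5/"  # the bytes from the COPY error above
for enc, text in plausible_encodings(sample):
    print(enc, repr(text))
```

If everything in the Latin-1 family decodes cleanly, re-dumping with the -E switch mentioned above is a reasonable next step.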

Yours,
Laurenz Albe



Re: [GENERAL] ascii to utf-8

2008-01-24 Thread Tomasz Ostrowski
On Wed, 23 Jan 2008, Tom Hart wrote:

>>> pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence 
>>> for encoding "UTF8": 0xc52f
>>
>> Try editing your dump-file and change the line which reads "SET 
>> client_encoding = 'SQL_ASCII';" to "SET client_encoding = 'LATIN1';"
>
> I tried making the changes you specified with notepad, wordpad, gVim, vim 
> and emacs and in each case pgAdmin (and pg_restore) complain about the dump 
> header being corrupted.

Try an "-E LATIN1" option in pg_dump. Do you at least know what is
the encoding of data in the database?

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



Re: [GENERAL] maximum amount of data to be written during checkpoint?

2008-01-24 Thread Simon Riggs
On Thu, 2008-01-24 at 11:21 +0100, hubert depesz lubaczewski wrote:

> what is the maximum amount of data to be written in checkpoint?
> 
> i always assumed this to be .
> 
> but some last tests show it to be much more. what am i missing?

Fsync will flush all outstanding writes, so you probably have some
writes that weren't yet flushed before the checkpoint started. If you
don't do any intermediate writing then this could be *all* of the data
you've written since the last checkpoint. If you want to reduce this,
set parameters to perform more regular writes from filesystem to disk.
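On 8.2, the parameters that spread those writes out are the background writer settings (plus a shorter checkpoint interval). A sketch of the relevant postgresql.conf knobs; the values shown are the 8.2 defaults, listed for orientation rather than as recommendations:

```ini
# postgresql.conf (8.2) -- background writer tuning, illustrative values
bgwriter_delay = 200ms         # how often the bgwriter wakes up
bgwriter_lru_percent = 1.0     # % of recently-used buffers scanned per round
bgwriter_lru_maxpages = 5      # max dirty LRU buffers written per round
bgwriter_all_percent = 0.333   # % of the whole buffer pool scanned per round
bgwriter_all_maxpages = 5      # max dirty buffers written per round
checkpoint_timeout = 5min      # shorter timeout = smaller, more frequent checkpoints
```

Raising the maxpages/percent values makes the bgwriter push dirty buffers to the kernel more aggressively between checkpoints, which is the "more regular writes" Simon describes; the flush from filesystem cache to disk can additionally be tuned at the OS level.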

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com




Re: [GENERAL] pg_xlog and standby - SOLVED

2008-01-24 Thread Roberto Scattini
On Jan 23, 2008 11:07 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Wed, 23 Jan 2008, Roberto Scattini wrote:
>
> > the problem that I'm having is that I have A LOT of
> > archive files in the pg_xlog dir, and that's because the archive_command
> > keeps failing (the standby server had filled its disk with archives
> > received but not processed), so now I don't know how I can remove
> > those files and start again...
>
> Under normal operation the checkpoint process will look at the number of
> already created archive files, keep around up to (2*checkpoint_segments+1)
> of them for future use, and delete the rest of them.  You never delete
> them yourself, the server will take care of that automatically once it
> gets to where it makes that decision.  If you set checkpoint_segments to
> some very high number they can end up taking many GB worth of storage,
> increasing that parameter has at least two costs associated with it (the
> other being a longer recovery time).
>

i had commented out archive_command on the main server but had only done a
reload. now i did a restart and almost all files in the $DATA/pg_xlog/
dir are gone, and the server works properly :D

question: do all the parameters in postgresql.conf need a restart to take effect?

thanks all for your help.

-- 
Roberto Scattini
 ___ _
 ))_) __ )L __
((__)(('(( ((_)



[GENERAL] changing the default directory

2008-01-24 Thread User Map
hi,
i am new to PostgreSQL; i'm running it under Windows Server 2003.
postgres stores the database/data in the directory "C:\Program
Files\PostgreSQL\8.2\data" by default. i want to change this directory to some
other drive. is it possible to do this without the command prompt? if not,
how do i do it, step by step? and does this one-time directory change apply to
every database that i create through pgAdmin III?
  will be thankful for any help.
map user.

   
-
Be a better friend, newshound, and know-it-all with Yahoo! Mobile.  Try it now.

Re: [GENERAL] changing the default directory

2008-01-24 Thread Raymond O'Donnell

On 24/01/2008 13:02, User Map wrote:

i am new to this postgre, i m running postgre under windows server 2003. 
postgres stores the database/data in the directory "C:\Program 
Files\PostgreSQL\8.2\data" by default. i want to change this directory 
to some other drive. is it posible to do this without comand prompt. if 
it is not then how to do that step by step. and does this one time 
directory change applies to every database that i will create through 
pgadminIII.


Hi,

The data directory is set by using initdb. If you installed using the 
pgInstaller, then setting the data directory was one of the steps along 
the way, and I don't think you can change it using pgInstaller 
without un- and re-installing PostgreSQL altogether (others will correct 
me if I'm wrong).


However, it's easy to run initdb from the command line, and this will 
let you create a new data directory without having to uninstall first.

Have a look at the docs, here:

  http://www.postgresql.org/docs/8.2/static/creating-cluster.html

It's pretty much the same on Windows as on *nix.
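A minimal sketch of the steps on Windows; the target path and service name are illustrative assumptions, and the commands need a user with write access to the target drive:

```bat
REM Create a new, empty cluster on another drive (illustrative path)
"C:\Program Files\PostgreSQL\8.2\bin\initdb.exe" -D "D:\pgdata" -E UTF8

REM Re-register the Windows service to point at the new data directory
"C:\Program Files\PostgreSQL\8.2\bin\pg_ctl.exe" unregister -N pgsql-8.2
"C:\Program Files\PostgreSQL\8.2\bin\pg_ctl.exe" register -N pgsql-8.2 -D "D:\pgdata"
```

Note that initdb creates an empty cluster, so existing databases have to be dumped from the old cluster and restored into the new one. Once the server is running against the new directory, every database created through pgAdmin III lives there automatically.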

HTH,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Stefan Schwarzer

As for the real problem (on the same hardware), when you rebuilt
Postgres on your new machine did you change any of the configure
options that MacPorts would have used from what would have been used
previously (I assume they can be overridden)?


There's not that much that can be overridden that would affect the
layout of pg_control.  The only other thing I can think of at the
moment is that moving from 32-bit to 64-bit time_t can screw things
up --- but that shouldn't affect the interpretation of the pg_control
version number, which as already noted certainly looks like it's the
wrong endianness.

Stefan, could you post the actual pg_control file as a binary
attachment?



The "old" pg_control file is attached...


So, I am one step further, that is, I installed Tiger - and then  
Postgres 8.1 - on a PPC MacMini.


After running initdb the postmaster started smoothly. I stopped it,  
copied my database files into the same location, started the  
postmaster again, and then got this error message:


	schwarzers-mac-mini:/usr/local/pgsql schwarzer$ /usr/local/bin/pg_ctl -D /usr/local/pgsql/data -l postgres.log start
	pg_ctl: could not open PID file "/usr/local/pgsql/data/postmaster.pid": Permission denied



As one can see from the following listing, there is no  
"postmaster.pid". And the settings seemed to be (for me) the same as  
in the old database tree:



schwarzers-mac-mini:/usr/local/pgsql schwarzer$ ls -l
total 0
drwx--   16 postgres  admin   544 Jan 23 18:23 data
drwx--   15 postgres  admin   510 Jan 23 17:50 data.orig


schwarzers-mac-mini:/usr/local/pgsql/data postgres$ ls -l
total 64
-rw---1 postgres  admin  4 Jan 23 18:23 PG_VERSION
drwx--   11 postgres  admin374 Jan 23 18:23 base
drwx--   27 postgres  admin918 Jan 23 18:23 global
drwx--   23 postgres  admin782 Jan 23 18:23 pg_clog
-rw---1 postgres  admin   3396 Jan 23 18:23 pg_hba.conf
-rw---1 postgres  admin   1460 Jan 23 18:23 pg_ident.conf
drwx--4 postgres  admin136 Jan 23 18:23 pg_multixact
drwx--3 postgres  admin102 Jan 23 18:23 pg_subtrans
drwx--2 postgres  admin 68 Jan 23 18:23 pg_tblspc
drwx--2 postgres  admin 68 Jan 23 18:23 pg_twophase
drwx--   11 postgres  admin374 Jan 23 18:23 pg_xlog
-rw---1 postgres  admin  13614 Jan 23 18:23 postgresql.conf
-rw---1 postgres  admin 74 Jan 23 18:23 postmaster.opts


schwarzers-mac-mini:/usr/local/pgsql/data.orig postgres$ ls -l
total 64
-rw---1 postgres  admin  4 Jan 23 17:49 PG_VERSION
drwx--5 postgres  admin170 Jan 23 17:49 base
drwx--   28 postgres  admin952 Jan 23 17:50 global
drwx--3 postgres  admin102 Jan 23 17:49 pg_clog
-rw---1 postgres  admin   3396 Jan 23 17:49 pg_hba.conf
-rw---1 postgres  admin   1460 Jan 23 17:49 pg_ident.conf
drwx--4 postgres  admin136 Jan 23 17:49 pg_multixact
drwx--3 postgres  admin102 Jan 23 17:49 pg_subtrans
drwx--2 postgres  admin 68 Jan 23 17:49 pg_tblspc
drwx--2 postgres  admin 68 Jan 23 17:49 pg_twophase
drwx--4 postgres  admin136 Jan 23 17:49 pg_xlog
-rw---1 postgres  admin  13680 Jan 23 17:49 postgresql.conf
-rw---1 postgres  admin 49 Jan 23 17:50 postmaster.opts



Thanks for any hints...

Stef




pg_control
Description: Binary data








[GENERAL] check constraint question

2008-01-24 Thread Tim Rupp
Hi list, I was looking for a bit of clarification on a check
constraint that I have on some tables.

I was following the example in the partitioning documentation

http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html

And got it to work, but when I tried to apply the same idea to my
tables, it appeared the check constraints weren't being used. I have a
master table that looks like this


mydb=# \d flows
      Table "public.flows"
   Column   |  Type   | Modifiers
------------+---------+-----------
 id         | integer |
 height     | integer |
 start_time | date    | not null
 end_time   | date    | not null
 srcint     | integer |


and a bunch of inherited tables that have the following constraint

CHECK (start_time >= '2008-01-01'::date AND end_time < '2008-01-01'::date)

and when i do the same query (as in the documentation) on the table, I
get a bunch of sequential scans in the planner, and it appears to
ignore my constraints

netflow=# explain SELECT count(*) FROM flows WHERE start_time >= DATE
'2008-01-23' AND end_time < '2008-01-26'::date;
                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=232.09..232.10 rows=1 width=0)
   ->  Append  (cost=0.00..231.26 rows=330 width=0)
         ->  Seq Scan on flows  (cost=0.00..12.02 rows=1 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_2008 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_200801 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080122 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080121 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080120 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080101 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080102 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080103 flows  (cost=0.00..94.84 rows=1 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))
         ->  Seq Scan on flows_20080104 flows  (cost=0.00..15.55 rows=41 width=0)
               Filter: ((start_time >= '2008-01-23'::date) AND (end_time < '2008-01-26'::date))


constraint_exclusion is set to on, and the example in the
documentation worked, so I'm sure it's just a confusion on my part. I
ended up coming to the conclusion that the check constraints need to
be on the same field???

When I changed the constraint to be

CHECK (start_time >= '2008-01-01'::date AND start_time < '2008-01-01'::date)

suddenly the planner started following the check and excluding the
tables properly.

So ultimately my question is, to be used by constraint exclusion, do
the checks need to be limited to a single field? If not, can I get
away with being able to use constraint exclusion while having a multi
field check like I showed earlier?

Thanks in advance!

-Tim



[GENERAL] Getting all tables into memory

2008-01-24 Thread Robert Fitzpatrick
I have a couple of servers running Postfix with amavisd-maia+SA+clamav
on FreeBSD 6.2 which use a central db server running PgSQL 8.2.4. My
issue is the bayes database causing SA TIMED OUT in the logs, and I want
to make sure I am getting everything into memory. The disk activity is
high on the db server; this is the average systat status...

[systat bar graphs for load average, CPU (user/nice/system/interrupt/idle),
and aacd0 MB/s and tps; the bars were not preserved in the archive]

I know I'm running RAID-5 and that is set to change; right now I'm just
focusing on getting my tables into memory. I have 4GB of memory in the
db server now, but the server indicates only about 3GB in dmesg. Not
sure why this is; FreeBSD warns when booting that a small amount of
memory over 4GB will not be used...

real memory  = 3220635648 (3071 MB)
avail memory = 3150565376 (3004 MB)

Here is my conf...

mx1# cat postgresql.conf
listen_addresses = '*'  # what IP address(es) to listen on; 
max_connections = 250
shared_buffers = 500MB  # min 128kB or max_connections*16kB
work_mem = 64MB # min 64kB
maintenance_work_mem = 256MB# min 1MB
max_fsm_pages = 256000  # min max_fsm_relations*16, 6 bytes each
checkpoint_segments = 100   # (value * 2 + 1) * 16MB
effective_cache_size = 1000MB
log_destination = 'syslog'
silent_mode = on
stats_start_collector = on  # needed for block or row stats
stats_row_level = on
autovacuum = off# enable autovacuum subprocess?
datestyle = 'iso, mdy'
lc_messages = 'C'   # locale for system error message 
lc_monetary = 'C'   # locale for monetary formatting
lc_numeric = 'C'# locale for number formatting
lc_time = 'C'   # locale for time formatting

I vacuum every night and expire my bayes db for SA. While I have plenty
of memory installed, it just does not seem to be using it considering my
disk status above? How can I tell if PgSQL is using memory or not and
how much? Excuse my ignorance on the matter, just learning how to
properly tune PostgreSQL.

My top 20 tables sizes are as follows...

maia=> SELECT relname, reltuples, relpages FROM pg_class ORDER BY relpages DESC 
limit 20;
relname|  reltuples  | relpages 
---+-+--
 pg_toast_70736|  846647 |   257452
 maia_mail |  375574 |63639
 maia_sa_rules_triggered   | 4.52118e+06 |38526
 bayes_token   |  447008 |20033
 maia_sa_rules_triggered_pkey  | 4.52118e+06 |17821
 bayes_token_idx1  |  447008 |11437
 maia_mail_recipients  |  377340 | 9867
 maia_sa_rules |1578 | 8501
 token_idx |  377340 | 8053
 envelope_to_received_date_idx |  375574 | 7202
 pg_toast_70736_index  |  846647 | 4719
 maia_mail_idx_received_date   |  375574 | 3703
 maia_mail_recipients_pkey |  377340 | 3471
 bayes_token_pkey  |  447008 | 3200
 awl_pkey  |  189259 | 2965
 maia_mail_recipients_idx_recipient_id |  377340 | 2696
 awl   |  189259 | 2599
 maia_stats| 185 | 2545
 bayes_seen_pkey   |  174501 | 2433
 bayes_seen|  174501 | 2238
(20 rows)


-- 
Robert




Re: [GENERAL] pg_xlog and standby - SOLVED

2008-01-24 Thread Erik Jones


On Jan 24, 2008, at 6:09 AM, Roberto Scattini wrote:


On Jan 23, 2008 11:07 PM, Greg Smith <[EMAIL PROTECTED]> wrote:

On Wed, 23 Jan 2008, Roberto Scattini wrote:


the problem that I'm having is that I have A LOT of
archive files in the pg_xlog dir, and that's because the archive_command
keeps failing (the standby server had filled its disk with archives
received but not processed), so now I don't know how I can remove
those files and start again...


Under normal operation the checkpoint process will look at the number of
already created archive files, keep around up to (2*checkpoint_segments+1)
of them for future use, and delete the rest of them.  You never delete
them yourself, the server will take care of that automatically once it
gets to where it makes that decision.  If you set checkpoint_segments to
some very high number they can end up taking many GB worth of storage,
increasing that parameter has at least two costs associated with it (the
other being a longer recovery time).



i had commented out archive_command on the main server but had only done a
reload. now i did a restart and almost all files in the $DATA/pg_xlog/
dir are gone, and the server works properly :D

question: do all the parameters in postgresql.conf need a restart to take effect?


thanks all for your help.


Ah, yes, that's a gotcha with the archive_command, at least I know it
was a gotme at one point :)  To disable archiving, set
archive_command = ''; commenting it out won't change it.  The only
config parameters that require a restart to take effect are those
that are specifically noted as such in the manual's section on server
configuration.  However, commenting out a config line is not necessarily
equivalent to disabling the feature.
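Concretely, the disable-archiving step Erik describes looks like this on 8.2, where archiving is on exactly when archive_command is non-empty (single quotes, per postgresql.conf string syntax):

```ini
# postgresql.conf
archive_command = ''   # empty string disables archiving;
                       # a commented-out line keeps the old value until restart
```

followed by a `pg_ctl reload` (or `SELECT pg_reload_conf();`) so the running server picks it up.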


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [GENERAL] changing the default directory

2008-01-24 Thread User Map
On doing the following in the command prompt
  C:\program files\postgresql\8.2\bin> initdb -D C:\confc
  it says
  the program "postgres" is needed by initdb but was not found in the 
  same directory as "C:\program files\postgresql\8.2\bin/initdb". check 
  your installation.
  if i use:
  root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
  then it says 
'chown' is not recognized as an internal or external command.
  where is the problem? i think i installed PostgreSQL through the windows 
installer. how can i check how i installed it?
  thanks for any help

Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
  On 24/01/2008 13:02, User Map wrote:

> i am new to this postgre, i m running postgre under windows server 2003. 
> postgres stores the database/data in the directory "C:\Program 
> Files\PostgreSQL\8.2\data" by default. i want to change this directory 
> to some other drive. is it posible to do this without comand prompt. if 
> it is not then how to do that step by step. and does this one time 
> directory change applies to every database that i will create through 
> pgadminIII.

Hi,

The data directory is set by using initdb. If you installed using the 
pgInstaller, then setting the data directory was one of the steps along 
the wayand I don't think you can change that using pgInstaller 
without un- and re-installing PostgreSQL altogether (others will correct 
me if I'm wrong).

However, it's easy to run initdb from the command line, and this will 
let you create a new data directory without having to uninstall first.
Have a look at the docs, here:

http://www.postgresql.org/docs/8.2/static/creating-cluster.html

It's pretty much the same on Windows as on *nix.

HTH,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---


   
-
Looking for last minute shopping deals?  Find them fast with Yahoo! Search.

Re: [GENERAL] Getting all tables into memory

2008-01-24 Thread Bill Moran
In response to Robert Fitzpatrick <[EMAIL PROTECTED]>:

> I have a couple of servers running Postfix with amavisd-maia+SA+clamav
> on FreeBSD 6.2 which use a central db server running PgSQL 8.2.4. My
> issue is the bayes database causing SA TIMED OUT in the logs and want to
> make sure I am getting everything into memory. The disk activity is high
> on the db server, this is the average systat status...
> 
> [systat bar graphs quoted from the original message; bars not preserved]

What does "top -m io" look like?

> I know I'm running RAID-5 and that is set to change, right now I'm just
> focusing on getting my tables into memory.

This is the wrong approach.  You've not yet determined that the (alleged)
low use of memory is actually the problem.

Turn on query time logging for a few hours, then grab the PG log files
and run them through pgFouine.

> I have 4GB of memory in the
> db server now, but the server indicates about only 3GB in dmesg, not
> sure why this is, FreeBSD warns a small amount of over 4GB will not be
> used when booting...

http://www.freebsd.org/doc/en_US.ISO8859-1/books/faq/troubleshoot.html#PAE

> Here is my conf...
> 
> mx1# cat postgresql.conf
> listen_addresses = '*'  # what IP address(es) to listen on; 
> max_connections = 250
> shared_buffers = 500MB  # min 128kB or max_connections*16kB
> work_mem = 64MB # min 64kB
> maintenance_work_mem = 256MB# min 1MB
> max_fsm_pages = 256000  # min max_fsm_relations*16, 6 bytes each
> checkpoint_segments = 100   # (value * 2 + 1) * 16MB
> effective_cache_size = 1000MB
> log_destination = 'syslog'
> silent_mode = on
> stats_start_collector = on  # needed for block or row stats
> stats_row_level = on
> autovacuum = off# enable autovacuum subprocess?
> datestyle = 'iso, mdy'
> lc_messages = 'C'   # locale for system error message 
> lc_monetary = 'C'   # locale for monetary formatting
> lc_numeric = 'C'# locale for number formatting
> lc_time = 'C'   # locale for time formatting
> 
> I vacuum every night and expire my bayes db for SA. While I have plenty
> of memory installed,

Says who?  You seem to be making a lot of assumptions here.  What evidence
do you have to show that 4G is "plenty" of memory?

> it just does not seem to be using it considering my
> disk status above?

Huh?

> How can I tell if PgSQL is using memory or not and
> how much?

Well, top is helpful.  Also, consider installing the pg_buffercache addon
so you can see how much of your shared_buffers is being used.
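Once the pg_buffercache contrib module is installed, a query along these lines shows which relations currently occupy shared_buffers (a sketch against the 8.2 contrib view; check the module's docs for the exact columns in your version):

```sql
-- Count buffers in shared_buffers per relation, largest consumers first.
-- The join on relfilenode is per-database; a stricter query would also
-- match reldatabase and reltablespace.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;
```

Comparing this against the relpages figures below shows how much of each hot table actually fits in the cache.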

I'm not entirely convinced that memory is your problem, as you've shown
no evidence.  vacuuming every night may not be sufficient, please show
us the output of "vacuum analyze verbose"

Also, once you've gathered some information on slow queries (using
pgFouine as described above) run explain analyze on the slowest ones
and see if you would benefit from adding some indexes.

Besides, you've already mentioned RAID5, if the system is slow because
it's blocking on disk writes, you can add 32T of RAM and it's not going
to speed things up any.

> Excuse my ignorance on the matter, just learning how to
> properly tune PostgreSQL.
> 
> My top 20 tables sizes are as follows...
> 
> maia=> SELECT relname, reltuples, relpages FROM pg_class ORDER BY relpages 
> DESC limit 20;
> relname|  reltuples  | relpages 
> ---+-+--
>  pg_toast_70736|  846647 |   257452
>  maia_mail |  375574 |63639
>  maia_sa_rules_triggered   | 4.52118e+06 |38526
>  bayes_token   |  447008 |20033
>  maia_sa_rules_triggered_pkey  | 4.52118e+06 |17821
>  bayes_token_idx1  |  447008 |11437
>  maia_mail_recipients  |  377340 | 9867
>  maia_sa_rules |1578 | 8501
>  token_idx |  377340 | 8053
>  envelope_to_received_date_idx |  375574 | 7202
>  pg_toast_70736_index  |  846647 | 4719
>  maia_mail_idx_received_date   |  375574 | 3703
>  maia_mail_recipients_pkey |  377340 | 3471
>  bayes_token_pkey  |  447008 | 3200
>  awl_pkey  |  189259 | 2965
>  maia_mail_recipients_idx_recipient_id |  377340 | 2696
>  awl

Re: [GENERAL] check constraint question

2008-01-24 Thread Tom Lane
"Tim Rupp" <[EMAIL PROTECTED]> writes:
> ... a bunch of inherited tables that have the following constraint

> CHECK (start_time >= '2008-01-01'::date AND end_time < '2008-01-01'::date)

> and when i do the same query (as in the documentation) on the table, I
> get a bunch of sequential scans in the planner, and it appears to
> ignore my constraints

> netflow=# explain SELECT count(*) FROM flows WHERE start_time >= DATE
> '2008-01-23' AND end_time < '2008-01-26'::date;

I don't think you've thought this through carefully.  That WHERE
condition is not inconsistent with that CHECK constraint, ie, there
could be some rows in the table that meet the WHERE.  In fact,
a set of constraints of this form don't represent a unique partitioning
do they?  (They might if you added the additional constraint that
start_time <= end_time, but that's not explicit here.)
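To make that concrete, the pattern that does let constraint exclusion work is to bound a single column with non-overlapping ranges per child. A sketch using hypothetical child-table names in the style of the question:

```sql
-- Each child covers a disjoint start_time range, so the planner can
-- prove a child irrelevant from a WHERE clause on start_time alone.
ALTER TABLE flows_20080123 ADD CONSTRAINT flows_20080123_check
    CHECK (start_time >= DATE '2008-01-23' AND start_time < DATE '2008-01-24');

ALTER TABLE flows_20080124 ADD CONSTRAINT flows_20080124_check
    CHECK (start_time >= DATE '2008-01-24' AND start_time < DATE '2008-01-25');

-- constraint_exclusion must be enabled for the planner to use the checks
SET constraint_exclusion = on;
```

A query constrained on start_time then scans only the children whose CHECK ranges intersect the WHERE clause.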

regards, tom lane



Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Tom Lane
Stefan Schwarzer <[EMAIL PROTECTED]> writes:
> After running initdb the postmaster started smoothly. I stopped it,  
> copied my database files into the same location, started the  
> postmaster again, and then got this error message:

>   schwarzers-mac-mini:/usr/local/pgsql schwarzer$ /usr/local/bin/pg_ctl
> -D /usr/local/pgsql/data -l postgres.log start
>   pg_ctl: could not open PID file "/usr/local/pgsql/data/postmaster.pid": Permission denied

You're trying to start the server as yourself (user schwarzer) but the
files all belong to user postgres:

> drwx--   16 postgres  admin   544 Jan 23 18:23 data

The files have to be owned by the user that runs the server.  If you
want to switch over to doing that as schwarzer, a quick "chown -R"
will help.

regards, tom lane




Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Stefan Schwarzer


On Jan 24, 2008, at 4:41 PM, Tom Lane wrote:


Stefan Schwarzer <[EMAIL PROTECTED]> writes:

After running initdb the postmaster started smoothly. I stopped it,
copied my database files into the same location, started the
postmaster again, and then got this error message:


	schwarzers-mac-mini:/usr/local/pgsql schwarzer$ /usr/local/bin/pg_ctl
-D /usr/local/pgsql/data -l postgres.log start
pg_ctl: could not open PID file "/usr/local/pgsql/data/postmaster.pid": Permission denied


You're trying to start the server as yourself (user schwarzer) but the
files all belong to user postgres:


drwx--   16 postgres  admin   544 Jan 23 18:23 data


The files have to be owned by the user that runs the server.  If you
want to switch over to doing that as schwarzer, a quick "chown -R"
will help.


Oh, stupid me! Gosh, I'll never learn it...

But nevertheless:

When I start the postmaster it seems ok. But there is no process
running. When I try to stop it, it says:


pg_ctl: PID file "/usr/local/pgsql/data/postmaster.pid" does not exist
Is postmaster running?

I slowly get the feeling that all my efforts are in vain...

Stef



Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Tom Lane
Stefan Schwarzer <[EMAIL PROTECTED]> writes:
> When I start the postmaster it seems ok. But there is no process  
> running.

Then it didn't really start successfully.  (Unless you use the -w
option, "pg_ctl start" just launches the postmaster --- it doesn't
wait around to see what happens.)  You need to look into the log
file to see what the problem was.

regards, tom lane



[GENERAL] can't create index with 'downcast' row

2008-01-24 Thread Louis-David Mitterrand
Hi,

To constrain uniqueness of my visitors to a 24h period, I tried 
creating an index including the 'date' part of the created_on timestamp:

CREATE UNIQUE INDEX visit_idx ON visit_buffer USING btree (id_session, 
id_story, created_on::date);

psql:visit_pkey.sql:5: ERROR:  syntax error at or near "::"
LINE 1: ...buffer USING btree (id_session, id_story, created_on::date);

and this:

CREATE UNIQUE INDEX visit_idx ON visit_buffer USING btree (id_session, 
id_story, extract(date from created_on));
psql:visit_pkey.sql:4: ERROR:  functions in index expression must be 
marked IMMUTABLE

How can I achieve what I am trying?

Thanks,
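
[Editor's note — a hedged sketch, not from the thread: PostgreSQL requires each expression in an index column list to be wrapped in its own parentheses, which explains the syntax error at "::". Something like the following should work, using the table and column names from the message above:]

```sql
-- The extra parentheses around the cast make it a valid expression
-- element in the index column list.
CREATE UNIQUE INDEX visit_idx
    ON visit_buffer USING btree (id_session, id_story, (created_on::date));
```

[Note that this assumes created_on is timestamp without time zone; casting timestamptz to date depends on the session time zone and is not immutable, so it could not be indexed this way.]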



Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Stefan Schwarzer

When I start the postmaster it seems ok. But there is no process
running.


Then it didn't really start successfully.  (Unless you use the -w
option, "pg_ctl start" just launches the postmaster --- it doesn't
wait around to see what happens.)  You need to look into the log
file to see what the problem was.


OK, it's a bit confusing for a non-admin user to dig into the pg
config to find out which parameters one has to uncomment in order to
get the log working. Succeeded after a while, and got the postmaster
working:


  827  p2  S  0:00.22 /usr/local/bin/postmaster -D /usr/local/ 
pgsql/data TERM=xterm-color SHELL=/bin/bash USER=postgres  
__CF_USER_TEXT_ENCODING=0x1F6:0:0 P

  829  p2  S  0:00.01 postgres: writer process
  830  p2  S  0:00.00 postgres: stats buffer process
  831  p2  S  0:00.00 postgres: stats collector process


What would you recommend now in order to get the data back? I have  
postgis data in the databases too, although this is not too important  
for me...


Thank you so much for all your help!

Stef



[GENERAL] PostgreSQL/PHP Application Server

2008-01-24 Thread Brian A. Seklecki (Mobile)
All:

Are there any frameworks / toolkits available, possibly as established
F/OSS projects, for web applications using PHP+PostgreSQL?

sf.net/google comes up short -- a few XML services and Perl+PgSQL hits.

By 'application server', a marketing wank-word gleaned from
Oracle/IBM/BEA, I essentially mean:

*) Templates for common data structures
*) Centralized deployment of applications as 'modules' using 
   a shared foundation
*) A row-level / user-level ('view based') object security model
*) Corporate Directory / Backend Integration (LDAP, Kerberos, PAM, NSS)

If such a project doesn't exist, I propose instantiating one.

#1 and #2 are easy.  #3 and #4 are showstoppers.

== Monolith Security Model ==

WRT security, the "monolithic security model", whereby:
- The application on your web systems connects as a proxy/meta
  'application user' role to PostgreSQL
- Who has unlimited authorization privileges
- Data access security is enforced entirely in-code

There is no shortage of discussion on the topic in the list archives,
but no conclusive answer documented anywhere on a commonly accepted
model.

This is the kind of security model still recommended by 99% of the
PHP/PostgreSQL books on the market, and it keeps the "IT auditor"
industry alive. 

== Directory / Corporate Integration ==

WRT LDAP/Kerberos, but mostly LDAP these days, the current PostgreSQL
authentication framework allows one to proxy _authentication_ requests
to a corporate LDAP system (we'll call this the 'pam_ldap' aspect) -- 

However, no equivalent of the 'nss_ldap' functionality exists -- e.g.,
automatic population of group/role membership from LDAP groups and/or
CN= attribute/value pair mapping into pg_catalog.pg_roles rows.

That would be a big boost...  





Re: [GENERAL] check constraint question

2008-01-24 Thread Tim Rupp
On Jan 24, 2008 9:47 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Tim Rupp" <[EMAIL PROTECTED]> writes:
> > ... a bunch of inherited tables that have the following constraint
>
> > CHECK (start_time >= '2008-01-01'::date AND end_time < '2008-01-01'::date)
>
> > and when i do the same query (as in the documentation) on the table, I
> > get a bunch of sequential scans in the planner, and it appears to
> > ignore my constraints
>
> > netflow=# explain SELECT count(*) FROM flows WHERE start_time >= DATE
> > '2008-01-23' AND end_time < '2008-01-26'::date;
>
> I don't think you've thought this through carefully.  That WHERE
> condition is not inconsistent with that CHECK constraint, ie, there
> could be some rows in the table that meet the WHERE.  In fact,
> a set of constraints of this form don't represent a unique partitioning
> do they?  (They might if you added the additional constraint that
> start_time <= end_time, but that's not explicit here.)
>
> regards, tom lane
>

Thanks for the feedback. I think I can accomplish what I want to do.
If I keep the current constraints

CHECK (start_time >= '2008-01-01'::date AND start_time < '2008-01-02'::date)

I guarantee that end_time will always be >= start_time, so if
I want to select rows within a time range, I can have a
WHERE clause which just adds +1 to the upper start_time bound and makes it <,
something like

where start_time >= '2008-01-01'::date AND start_time < '2008-01-10'::date

should give the same rows as

where start_time >= '2008-01-01'::date AND end_time <= '2008-01-09'::date

but the former would need to scan much fewer tables than the latter.
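
[Editor's note — a hedged sketch of that idea, using the table and date literals from this thread; it assumes the flows children carry start_time-only CHECK constraints as above and that constraint exclusion is enabled:]

```sql
-- With per-child CHECK constraints on start_time only, a WHERE clause
-- phrased purely in terms of start_time lets the planner prove which
-- child tables cannot match and skip scanning them.
SET constraint_exclusion = on;
EXPLAIN SELECT count(*)
  FROM flows
 WHERE start_time >= '2008-01-01'::date
   AND start_time <  '2008-01-10'::date;
```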

In any event. I think I know which direction to go. Thanks a lot Tom!

-Tim



[GENERAL] Installing PostGreSQL - 2

2008-01-24 Thread Jamiil Abduqadir
I am trying to install PostgreSQL from 'postgresql-8.3-dev1.exe', but the
installation terminates with an error message saying that it could not
create a file.
Here is the last bit of 'postgresql-8.3-dev1.txt'; it just might mean
something to you 'cos it does not mean anything to me.


Property(C): SourceDir = C:\Temp\
Property(C): Manufacturer = PostgreSQL Global Development Group
Property(C): ProductCode = {500426A8-729D-4CE5-964B-0D0C410206B8}
Property(C): ProductLanguage = 1033
Property(C): ProductName = PostgreSQL 8.3-dev1
Property(C): ProductVersion = 8.3
Property(C): UpgradeCode = {61560409-8BD6-4BC9-A445-3C93D51FC05D}
Property(C): INTERNALLAUNCH = 1
Property(C): UILANG = 1033
Property(C): _Title = {\Arial8Bold}
Property(C): DefaultUIFont = MSSansSerif8
Property(C): ALLUSERS = 1
Property(C): SERVICEID = pgsql-8.3-dev1
Property(C): CREATESERVICEUSER = 1
Property(C): SERVICEOK = 1
Property(C): INITDBOK = 1
Property(C): HAS_PERL = 0
Property(C): HAS_TCL = 0
Property(C): HAS_PYTHON = 0
Property(C): HAS_JAVA = 0
Property(C): NOSHORTCUTS = 0
Property(C): SKIPTSCHECK = 0
Property(C): ErrorDialog = ErrorDlg
Property(C): SecureCustomProperties =
ADMINPACK;BTREE_GIST;CHKPASS;CREATESERVICEUSER;CUBE;DBLINK;DOINITDB;DOSERVICE;EARTHDISTANCE;ENCODING;FUZZYSTRMATCH;HSTORE;INT_AGGREGATE;INT_ARRAY;INTERNALLAUNCH;ISN;LISTENPORT;LO;LOCALE;LTREE;PERMITREMOTE;PG_BUFFERCACHE;PG_FREESPACEMAP;PG_TRGM;PGCRYPTO;PGROWLOCKS;PGSTATTUPLE;PL_JAVA;PL_PERL;PL_PERLU;PL_PGSQL;PL_PYTHONU;PL_TCL;PL_TCLU;RUNSTACKBUILDER;SEG;SERVICEACCOUNT;SERVICEDOMAIN;SERVICENAME;SERVICEPASSWORD;SERVICEPASSWORDV;SKIPTSCHECK;SPI_AUTOINC;SPI_INSERT_USERNAME;SPI_MODDATETIME;SPI_REFINT;SPI_TIMETRAVEL;SSLINFO;SUPERPASSWORD;SUPERUSER;TABLEFUNC;TSEARCH2;XML2
Property(C): PackageCode = {1858FEA1-D5A1-4F9D-90F2-743EFB3667C9}
Property(C): ProductState = -1
Property(C): PackagecodeChanging = 1
Property(C): CURRENTDIRECTORY = C:\WINDOWS\system32
Property(C): CLIENTUILEVEL = 0
Property(C): CLIENTPROCESSID = 2244
Property(C): VersionDatabase = 150
Property(C): VersionMsi = 3.01
Property(C): WindowsBuild = 2600
Property(C): ServicePackLevel = 2
Property(C): ServicePackLevelMinor = 0
Property(C): MsiNTProductType = 1
Property(C): WindowsFolder = C:\WINDOWS\
Property(C): WindowsVolume = C:\
Property(C): SystemFolder = C:\WINDOWS\system32\
Property(C): System16Folder = C:\WINDOWS\system\
Property(C): RemoteAdminTS = 1
Property(C): TempFolder = C:\DOCUME~1\Falcon\LOCALS~1\Temp\
Property(C): CommonFilesFolder = C:\Program Files\Common Files\
Property(C): AppDataFolder = C:\Documents and Settings\Falcon\Application
Data\
Property(C): FavoritesFolder = C:\Documents and Settings\Falcon\Favorites\
Property(C): NetHoodFolder = C:\Documents and Settings\Falcon\NetHood\
Property(C): PersonalFolder = C:\Documents and Settings\Falcon\My Documents\
Property(C): PrintHoodFolder = C:\Documents and Settings\Falcon\PrintHood\
Property(C): RecentFolder = C:\Documents and Settings\Falcon\Recent\
Property(C): SendToFolder = C:\Documents and Settings\Falcon\SendTo\
Property(C): TemplateFolder = C:\Documents and Settings\All Users\Templates\
Property(C): CommonAppDataFolder = C:\Documents and Settings\All
Users\Application Data\
Property(C): LocalAppDataFolder = C:\Documents and Settings\Falcon\Local
Settings\Application Data\
Property(C): MyPicturesFolder = C:\Documents and Settings\Falcon\My
Documents\My Pictures\
Property(C): AdminToolsFolder = C:\Documents and Settings\All Users\Start
Menu\Programs\Administrative Tools\
Property(C): StartupFolder = C:\Documents and Settings\All Users\Start
Menu\Programs\Startup\
Property(C): StartMenuFolder = C:\Documents and Settings\All Users\Start
Menu\
Property(C): DesktopFolder = C:\Documents and Settings\All Users\Desktop\
Property(C): FontsFolder = C:\WINDOWS\Fonts\
Property(C): GPTSupport = 1
Property(C): OLEAdvtSupport = 1
Property(C): ShellAdvtSupport = 1
Property(C): Intel = 6
Property(C): PhysicalMemory = 503
Property(C): VirtualMemory = 844
Property(C): AdminUser = 1
Property(C): LogonUser = Falcon
Property(C): UserSID = S-1-5-21-329068152-789336058-839522115-1003
Property(C): UserLanguageID = 1033
Property(C): ComputerName = M662
Property(C): SystemLanguageID = 1033
Property(C): ScreenX = 800
Property(C): ScreenY = 600
Property(C): CaptionHeight = 19
Property(C): BorderTop = 1
Property(C): BorderSide = 1
Property(C): TextHeight = 16
Property(C): ColorBits = 32
Property(C): TTCSupport = 1
Property(C): Time = 9:52:50
Property(C): Date = 1/24/2008
Property(C): MsiWin32AssemblySupport = 5.1.2600.2180
Property(C): RedirectedDllSupport = 2
Property(C): Privileged = 1
Property(C): USERNAME = M662
Property(C): DATABASE = C:\DOCUME~1\Falcon\LOCALS~1\Temp\51384b2.msi
Property(C): OriginalDatabase = C:\Temp\postgresql-8.3-dev1-int.msi
Property(C): SOURCEDIR = C:\Temp\
Property(C): VersionHandler = 3.01
Property(C): UILevel = 5
Property(C): ACTION = INSTALL
Property(C): EXECUTEACTION = INSTALL
Property(C): ROOTDRIVE = D:\
Property(C): CostingComplete

Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Martijn van Oosterhout
On Thu, Jan 24, 2008 at 05:10:51PM +0100, Stefan Schwarzer wrote:
> Oh, stupid me! Gush, I'll never learn it...
> 
> But nevertheless:
> 
> When I start the postmaster it seems ok. But there is no process  
> running. When I try to stop it it says:
> 
> pg_ctl: PID file "/usr/local/pgsql/data/postmaster.pid" does not exist
> Is postmaster running?

If it fails to start it must log somewhere.

If you run the postmaster directly from the console it'll get you into
single-user mode; try that.

(PS. Have you tried pgfsck on the new machine? With the same byte order
it should work much better.)

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




Re: [GENERAL] Installing PGSQL

2008-01-24 Thread Jamiil Abduqadir
1) Verify your TMP and TEMP environment variables (values must
> reference valid directory entries);


It was confirmed and no problem

>
> 2) Is there sufficient disk space for extract files from the
> PostgreSQL package into TMP and TEMP path reference on the environment
> vars?

Yes.

>
> 3) Also check out your user settings. For installation, you must log
> in with an administrative role (in Windows).

Yes.

-- 
Happiness has many doors, and when one of them closes another opens, yet we
spent so much time looking at the one that is shut that we don't  see the
one that just  opened.


Re: [GENERAL] pg_xlog and standby - SOLVED

2008-01-24 Thread Greg Smith

On Thu, 24 Jan 2008, Erik Jones wrote:

To disable archiving, set archive_command="", commenting it out won't 
change it...However, commenting a config is not necessarily equivalent 
to disabling something.


This is actually a problem that is corrected in the upcoming 8.3 release. 

From those release notes:


"Commenting out a parameter in postgresql.conf now causes it to revert to 
its default value.  [In 8.2 and earlier releases] commenting out an entry 
left the parameter's value unchanged until the next server restart."


This is probably why Roberto didn't see the expected behavior until a 
server restart.
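
[Editor's note — in postgresql.conf terms, the distinction looks like this (a sketch; the cp command is only a placeholder):]

```
# 8.2 and earlier: commenting out leaves the old value active
# until the next server restart
#archive_command = 'cp %p /archive/%f'

# To actually disable archiving on a running pre-8.3 server,
# set it empty explicitly and reload:
archive_command = ''
```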


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



[GENERAL] REINDEX on large DB vs. DROP INDEX/CREATE INDEX

2008-01-24 Thread Wes
Seems like it would be a common question, but I'm having problems finding an
answer in the archives on this...

I have a large database (now about 2 billion records), and about once a year
I have been dropping and recreating the indexes.  Recreating the foreign key
constraints takes as long or longer than the index creation.  Last year the
whole process took about 72 hours.  This year the DB is bigger.

I'm running 8.1.4.  Assume I have exclusive access to the DB.

1. Is there any advantage to doing the DROP/CREATE over just doing a REINDEX
DATABASE.  I seem to remember DROP/CREATE had some advantages, but can't
find the information.  Is there a performance hit with REINDEX during
creation because of locking issues?

2. I'm assuming REINDEX would avoid the time involved in recreating the
foreign key constraints?

3. With a REINDEX DATABASE, how can I monitor progress?

Wes





Re: [GENERAL] can't create index with 'dowcast' row

2008-01-24 Thread marcelo Cortez
Louis

 what if you create an immutable "wrapper" function?
 Something like this:

CREATE OR REPLACE FUNCTION myextract(timestamp)
  RETURNS date AS
$BODY$
BEGIN
  -- extract() has no 'date' field; cast the timestamp instead
  return $1::date;
END;
$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE;

 best regards 
 mdc 

 

 
--- Louis-David Mitterrand
<[EMAIL PROTECTED]> wrote:

> Hi,
> 
> To constraint unique'ness of my visitors to a 24h
> periode I tried 
> created a index including the 'date' part of the
> created_on timestamp:
> 
>   CREATE UNIQUE INDEX visit_idx ON visit_buffer USING
> btree (id_session, 
>   id_story, created_on::date);
> 
>   psql:visit_pkey.sql:5: ERROR:  syntax error at or
> near "::"
>   LINE 1: ...buffer USING btree (id_session,
> id_story, created_on::date);
> 
> and this:
> 
>   CREATE UNIQUE INDEX visit_idx ON visit_buffer USING
> btree (id_session, id_story, extract(date from
> created_on));
>   psql:visit_pkey.sql:4: ERROR:  functions in index
> expression must be marked IMMUTABLE
> 
> How can I achieve what I am trying?
> 
> Thanks,
> 
> ---(end of
> broadcast)---
> TIP 6: explain analyze is your friend
> 






Re: [GENERAL] can't create index with 'dowcast' row

2008-01-24 Thread marcelo Cortez
Sorry, I forgot to mention. Later, try:

CREATE UNIQUE INDEX visit_idx ON visit_buffer(
id_session, id_story ,myextract(created_on));


best regards 

> Louis
> 
>  what if you create one "wrapper" function
> immutable?
>  some thing like this.
> 
> CREATE OR REPLACE FUNCTION myextract(timestamp )
>   RETURNS date  AS
> $BODY$
> BEGIN
>   return extract(date from $1)   ;
> END;
> $BODY$
>   LANGUAGE 'plpgsql' IMMUTABLE
> 
>  best regards 
>  mdc 
> 
>  
> 
>  
> --- Louis-David Mitterrand
> <[EMAIL PROTECTED]> escribió:
> 
> > Hi,
> > 
> > To constraint unique'ness of my visitors to a 24h
> > periode I tried 
> > created a index including the 'date' part of the
> > created_on timestamp:
> > 
> > CREATE UNIQUE INDEX visit_idx ON visit_buffer
> USING
> > btree (id_session, 
> > id_story, created_on::date);
> > 
> > psql:visit_pkey.sql:5: ERROR:  syntax error at or
> > near "::"
> > LINE 1: ...buffer USING btree (id_session,
> > id_story, created_on::date);
> > 
> > and this:
> > 
> > CREATE UNIQUE INDEX visit_idx ON visit_buffer
> USING
> > btree (id_session, id_story, extract(date from
> > created_on));
> > psql:visit_pkey.sql:4: ERROR:  functions in index
> > expression must be marked IMMUTABLE
> > 
> > How can I achieve what I am trying?
> > 
> > Thanks,
> > 
> > ---(end of
> > broadcast)---
> > TIP 6: explain analyze is your friend
> > 
> 
> 
> 
>   Yahoo! Encuentros.
> 
> Ahora encontrar pareja es mucho más fácil, probá el
> nuevo Yahoo! Encuentros
>
http://yahoo.cupidovirtual.com/servlet/NewRegistration
> 






Re: [GENERAL] PostgreSQL/PHP Application Server

2008-01-24 Thread John DeSoi


On Jan 24, 2008, at 12:15 PM, Brian A. Seklecki wrote:


Are there any frameworks / toolkits available, possibly as established
F/OSS projects, for web applications using PHP+PostgreSQL?

sf.net/google comes up short -- a few XML services and Perl+PgSQL  
hits.


By 'application server', as a marketing wank-word gleaned from
Oracle/IBM/BEA, essentially constituted by:

*) Templates for common data structures
*) Centralized deployment of applications as 'modules' using
 a shared foundation
*) A row-level / user-level ('view based') object security model
*) Corporate Directory / Backend Integration (LDAP, Kerberos, PAM,  
NSS)


If such a project doesn't exist, I propose instantiating one.

#1 and #2 are easy.  #3 and #4 are showstoppers.



Drupal - http://drupal.org

I don't know if there is a module for #4; check http://drupal.org/project/Modules.
The user/login system is extensible, so you could write your own.




John DeSoi, Ph.D.







Re: [GENERAL] changing the default directory

2008-01-24 Thread Martin Gainty
I agree, and if your postgres data folder will be constant I would set the
PGDATA environment variable to the folder location.

M-
- Original Message -
From: "Raymond O'Donnell" <[EMAIL PROTECTED]>
To: "User Map" <[EMAIL PROTECTED]>
Cc: "pgsql" 
Sent: Thursday, January 24, 2008 8:18 AM
Subject: Re: [GENERAL] changing the default directory


> On 24/01/2008 13:02, User Map wrote:
>
> > i am new to this postgre, i m running postgre under windows server 2003.
> > postgres stores the database/data in the directory "C:\Program
> > Files\PostgreSQL\8.2\data" by default. i want to change this directory
> > to some other drive. is it posible to do this without comand prompt. if
> > it is not then how to do that step by step. and does this one time
> > directory change applies to every database that i will create through
> > pgadminIII.
>
> Hi,
>
> The data directory is set by using initdb. If you installed using the
> pgInstaller, then setting the data directory was one of the steps along
> the way, and I don't think you can change that using pgInstaller
> without un- and re-installing PostgreSQL altogether (others will correct
> me if I'm wrong).
>
> However, it's easy to run initdb from the command line, and this will
> let you create a new data directory without having to uninstall first.
> Have a look at the docs, here:
>
>http://www.postgresql.org/docs/8.2/static/creating-cluster.html
>
> It's pretty much the same on Windows as on *nix.
>
> HTH,
>
> Ray.
>
> ---
> Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
> [EMAIL PROTECTED]
> ---
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faq
>




Re: [GENERAL] Installing PostGreSQL - 2

2008-01-24 Thread Dave Page
On Jan 24, 2008 5:28 PM, Jamiil Abduqadir <[EMAIL PROTECTED]> wrote:
> I  am trying to install PostgreSQL from 'postgresql-8.3-dev1.exe', but the
> installation terminates with an error message saying that it could not
> create a file,
> Here is the last bit of  'postgresql-8.3-dev1.txt ', it just might mean
> something to you 'cos it does not mean anything to me.

Please try 8.3RC2. 8.3-dev1 is a *very* old test build and not
something anyone should actually be using. Where did you find it
anyway?

Regards, Dave.



Re: [GENERAL] REINDEX on large DB vs. DROP INDEX/CREATE INDEX

2008-01-24 Thread Tom Lane
Wes <[EMAIL PROTECTED]> writes:
> I'm running 8.1.4.  Assume I have exclusive access to the DB.

You really ought to update to 8.1.something-newer, but I digress.

> 1. Is there any advantage to doing the DROP/CREATE over just doing a REINDEX
> DATABASE.

No, not if you don't mind exclusive locks.  DROP together with CREATE
INDEX CONCURRENTLY might be nicer if you were trying to do this without
completely shutting down the DB, but if you aren't running normal
operations then just use REINDEX.

> 2. I'm assuming REINDEX would avoid the time involved in recreating the
> foreign key constraints?

Right, that's one reason to do it that way.

> 3. With a REINDEX DATABASE, how can I monitor progress?

It should give you a NOTICE after each table.

BTW, what have you got maintenance_work_mem set to?
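
[Editor's note — putting those answers together, a hedged sketch of the session; the database name is an assumption, and on 8.1 the setting is an integer in kB:]

```sql
-- Raise maintenance_work_mem for this session only (here ~1 GB in kB),
-- then rebuild everything; a NOTICE is emitted as each table finishes.
SET maintenance_work_mem = 1048576;
REINDEX DATABASE mydb;
```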

regards, tom lane



[GENERAL] Disk configurations....

2008-01-24 Thread Nathan Wilhelmi
Hello - Typically case of a software guy needing to spec hardware for a 
new DB server. Further typified case of not knowing exact amount of data 
and I/O patterns. So if you were to spec a disk system for a new general 
purpose PostgreSQL server any suggestions of what to start with?


Details I have:

1) We think 500GB is enough storage for the DB.
2) I would guess 66% read 33% write patterns, maybe closer to 50/50.

I would assume that we would want to go with SCSI RAID. Would you do the 
whole disk array as RAID 5 or would you partition it up differently? 
Would you go with 3x300GB disks or would you use more smaller disks to 
get there?


Any other gotchas or suggestions to look for?

Thanks!

-Nate




Re: [GENERAL] Getting all tables into memory

2008-01-24 Thread Robert Fitzpatrick
On Thu, 2008-01-24 at 10:46 -0500, Bill Moran wrote:
> In response to Robert Fitzpatrick <[EMAIL PROTECTED]>:
> 
> > How can I tell if PgSQL is using memory or not and
> > how much?
> 
> Well, top is helpful.  Also, consider installing the pg_buffercache addon
> so you can see how much of your shared_buffers is being used.

Well, all of it I guess from looking below? Again, just learning here...

maia=# select count(*) from pg_buffercache;
 count 
---
 64000
(1 row)
maia=# select count(*) from pg_buffercache where relfilenode is null;   
  
 count 
---
 0
(1 row)
maia=# SELECT c.relname, count(*) AS buffers
   FROM pg_class c INNER JOIN pg_buffercache b
   ON b.relfilenode = c.relfilenode INNER JOIN pg_database d
   ON (b.reldatabase = d.oid AND d.datname = current_database())
   GROUP BY c.relname
   ORDER BY 2 DESC LIMIT 10;
 relname | buffers 
-+-
 bayes_token |   16684
 bayes_token_idx1|   10264
 maia_sa_rules   |8501
 pg_toast_70736  |5898
 maia_mail   |4361
 maia_sa_rules_triggered |3913
 maia_mail_recipients|3603
 bayes_token_pkey|3199
 maia_stats  |2545
 token_idx   |2442
(10 rows)

Thanks again for any insight?

-- 
Robert




[GENERAL] unique constraint

2008-01-24 Thread Dominique Bessette - Halsema
I'm getting the following error, and I think it's because when I insert into
postgres from geoserver's WFS, I first delete and then insert the track, and
geoserver does this in succession very fast.  If I personally test
inserting two tracks with the same guid in a row, it works, but because
geoserver is going so fast it blows up and gives this error.  I'm looking
into fixing it another way, but does anyone have an idea of anything I can
do to avoid this error in postgres?


STATEMENT: INSERT INTO "public"."tracks"
("guid","classification","creationtimestamp","lastupdatetime","effectivetime","reportedtime","staletime","confidencevalue","behavioridentity","nationality","classificationcategory","classconfidencevalue","axislabels","uomlabels","aoumajaxis","aouminaxis","aouvalid","the_geom")
VALUES ('OOSTZEE','U','2008-01-22T21:05:15.366Z','2008-01-22T21:05:15.366Z
','2008-01-22T21:05:15.366Z','2008-01-22T21:05:15.366Z','2008-01-22T21:05:
15.366Z',1.0,'FRIEND','','SEA','1.0','Lat Long h','degree
degree',999,999,'true',setSRID('014029D8E757928E0D404B9C28CBD1244A'::geometry,4326))

ERROR: duplicate key violates unique constraint "tracks_guid_key"



 Track=# \d tracks
   Table "public.tracks"
 Column |   Type|  Modifiers
+---+--
 sid| integer   | not null default
nextval('tracks_sid_seq'::regclass)
 guid   | character varying | not null
 classification | character varying |
 creationtimestamp  | character varying |
 lastupdatetime | character varying |
 effectivetime  | character varying |
 reportedtime   | character varying |
 staletime  | character varying |
 confidencevalue| numeric   |
 behavioridentity   | character varying |
 nationality| character varying |
 classificationcategory | character varying |
 classconfidencevalue   | character varying |
 axislabels | character varying |
 uomlabels  | character varying |
 aoumajaxis | numeric   |
 aouminaxis | numeric   |
 aouvalid   | character varying |
 the_geom   | geometry  |
Indexes:
"tracks_pkey" PRIMARY KEY, btree (sid)
"tracks_guid_key" UNIQUE, btree (guid)
"tracks_the_geom_gist" gist (the_geom)
Check constraints:
"enforce_geotype_the_geom" CHECK (geometrytype(the_geom) = 'POINT'::text
OR the_geom IS NULL)
"enforce_srid_the_geom" CHECK (srid(the_geom) = 4326)


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Scott Marlowe
On Jan 24, 2008 10:41 AM, Stefan Schwarzer
<[EMAIL PROTECTED]> wrote:
> >> When I start the postmaster it seems ok. But there is no process
> >> running.
> >
> > Then it didn't really start successfully.  (Unless you use the -w
> > option, "pg_ctl start" just launches the postmaster --- it doesn't
> > wait around to see what happens.)  You need to look into the log
> > file to see what the problem was.
>
> Ok, it's a bit confusing for a non-admin-user to enter into the pg-
> config to find out what parameters one has to uncomment in order to
> get the log working. Succeed after some while, got the postmaster
> working:
>
>827  p2  S  0:00.22 /usr/local/bin/postmaster -D /usr/local/
> pgsql/data TERM=xterm-color SHELL=/bin/bash USER=postgres
> __CF_USER_TEXT_ENCODING=0x1F6:0:0 P
>829  p2  S  0:00.01 postgres: writer process
>830  p2  S  0:00.00 postgres: stats buffer process
>831  p2  S  0:00.00 postgres: stats collector process
>
>
> What would you recommend now in order to get the data back? I have
> postgis data in the databases too, although this is not too important
> for me...

Sounds like you're ready to run pg_dump.



Re: [GENERAL] REINDEX on large DB vs. DROP INDEX/CREATE INDEX

2008-01-24 Thread Wes
On 1/24/08 12:48 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> Wes <[EMAIL PROTECTED]> writes:
>> I'm running 8.1.4.  Assume I have exclusive access to the DB.
> 
> You really ought to update to 8.1.something-newer, but I digress.

I was planning on upgrading to 8.x at the same time as this reindex - just
do a dump/reload, but...

I guess I've been asleep at the wheel and didn't realize 8.1.11 was out.
Since that wouldn't require a DB reload, I guess that would be highly
recommended?

>> 1. Is there any advantage to doing the DROP/CREATE over just doing a REINDEX
>> DATABASE.
> 
> No, not if you don't mind exclusive locks.  DROP together with CREATE
> INDEX CONCURRENTLY might be nicer if you were trying to do this without
> completely shutting down the DB, but if you aren't running normal
> operations then just use REINDEX.

I was thinking there was some clean up that didn't happen with REINDEX,
related to disk space allocation maybe?  Perhaps this was just on older 7.x
versions - something I vaguely recall back under 7.x when I was having some
corruption issues.

>> 3. With a REINDEX DATABASE, how can I monitor progress?
> 
> It should give you a NOTICE after each table.

Is there anything that shows up in ps for each index it is working on?

> BTW, what have you got maintenance_work_mem set to?

It is currently set to 983025.  Not sure where I got that strange number
from.  It's a 2 GB machine.  I've been trying to get more, but when it runs
fine day to day, it's kind of hard to justify.  Lots of disks, not so much
memory.

I guess I should also turn off fsync for the duration.

Wes





Re: [GENERAL] Getting all tables into memory

2008-01-24 Thread Stefan Kaltenbrunner

Robert Fitzpatrick wrote:

I have a couple of servers running Postfix with amavisd-maia+SA+clamav
on FreeBSD 6.2 which use a central db server running PgSQL 8.2.4. My
issue is the bayes database causing SA TIMED OUT in the logs and want to
make sure I am getting everything into memory. The disk activity is high
on the db server, this is the average systat status...


The SA SQL-bayes implementation is one prime example of a real-life 
application that can benefit from HOT, because it is basically updating a 
fairly limited set of non-indexed columns at an insane rate.
I have seen real live installations that could barely keep up with bloat 
even on a tight 3min vacuum cycle, and the 8.3B4 test instance I have 
here can take at least 4 times the load that 8.1 could under that kind 
of workload.



Stefan

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-24 Thread Martijn van Oosterhout
On Thu, Jan 24, 2008 at 05:41:20PM +0100, Stefan Schwarzer wrote:
> What would you recommend now in order to get the data back? I have  
> postgis data in the databases too, although this is not too important  
> for me...

pg_dump, pg_dumpall...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




Re: [GENERAL] PostgreSQL/PHP Application Server

2008-01-24 Thread Shane Ambler

Brian A. Seklecki (Mobile) wrote:


*) A row-level / user-level ('view based') object security model


I haven't looked into it much, but SE-PostgreSQL may be an option for 
this step - if not, you may find their explanation of how they implement 
row-level security to be an inspiration.


http://code.google.com/p/sepgsql/



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] close connection

2008-01-24 Thread Dominique Bessette - Halsema
is there a way to close the connection in postgres after i insert something?


Re: [GENERAL] Disk configurations....

2008-01-24 Thread Steve Atkins


On Jan 24, 2008, at 11:21 AM, Nathan Wilhelmi wrote:

Hello - Typical case of a software guy needing to spec hardware  
for a new DB server. Further typified by not knowing the exact  
amount of data and I/O patterns. So if you were to spec a disk  
system for a new general purpose PostgreSQL server, any suggestions  
of what to start with?


Details I have:

1) We think 500GB is enough storage for the DB.
2) I would guess 66% read 33% write patterns, maybe closer to 50/50.

I would assume that we would want to go SCSI RAID. Would you do the  
whole disk array as RAID 5 or would you partition it up differently?  
Would you go with 3x300GB disks or would you use more smaller disks  
to get there?


Any other gotchas or suggestions to look for?


You really need to have some idea of how much performance you need. A  
sensible spec for 10tps is not the same as one for 1000tps.


If you need decent midrange performance, and don't want to spend a  
fortune on tweaking and hardware...


I'd avoid RAID5. It's efficient use of disks for redundancy, but not  
really the best for random write performance. RAID10 is more expensive  
in number of spindles, but a good trade.


A good disk controller, with a battery-backed writeback cache is a  
must, IMO, if you want to get decent performance without having to do  
a lot of tuning, segregating WAL logs to separate drive mirrors, and  
so on. Some people swear by software RAID, but the battery-backed  
cache buys you quite a lot by hiding fsync latency.


There are some pretty good SATA based systems out there, and  
performance is quite acceptable given a decent controller. High end  
3ware and Areca get good reviews from people here. Don't assume you  
have to go with SCSI. Also, more spindles are often better for  
performance, and you can get a lot more SATA spindles for a given  
budget than you can SCSI.


You might want to look at previous threads, mostly over in the
-performance mailing list. It's a fairly common question.


Cheers,
  Steve






Re: [GENERAL] unique constraint

2008-01-24 Thread Erik Jones


On Jan 24, 2008, at 12:36 PM, Dominique Bessette - Halsema wrote:

I'm getting the following error, and I think it's because when I  
insert into Postgres from GeoServer's WFS, I first delete and then  
insert the track, and GeoServer does this in succession very  
fast.  If I personally test inserting two tracks with the same guid  
in a row, then it works, but because GeoServer is going so fast it  
blows up and gives this error.  I'm looking into fixing it another  
way, but does anyone have an idea of anything I can do to avoid  
this error in Postgres?


STATEMENT: INSERT INTO "public"."tracks"
  ("guid","classification","creationtimestamp","lastupdatetime",
   "effectivetime","reportedtime","staletime","confidencevalue",
   "behavioridentity","nationality","classificationcategory",
   "classconfidencevalue","axislabels","uomlabels","aoumajaxis",
   "aouminaxis","aouvalid","the_geom")
  VALUES ('OOSTZEE','U','2008-01-22T21:05:15.366Z',
   '2008-01-22T21:05:15.366Z','2008-01-22T21:05:15.366Z',
   '2008-01-22T21:05:15.366Z','2008-01-22T21:05:15.366Z',
   1.0,'FRIEND','','SEA','1.0','Lat Long h','degree degree',
   999,999,'true',
   setSRID('014029D8E757928E0D404B9C28CBD1244A'::geometry,4326))


ERROR: duplicate key violates unique constraint "tracks_guid_key"



Perhaps you haven't wrapped these successive DELETE -> INSERT  
statements in a transaction?


BEGIN;
DELETE ...;
INSERT ...;
COMMIT;

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] pg_xlog and standby - SOLVED

2008-01-24 Thread Roberto Scattini
On Jan 24, 2008 4:30 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Thu, 24 Jan 2008, Erik Jones wrote:
>
> > To disable archiving, set archive_command="", commenting it out won't
> > change it...However, commenting a config is not necessarily equivalent
> > to disabling something.
>
> This is actually a problem that is corrected in the upcoming 8.3 release.
> From those release notes:
>
> "Commenting out a parameter in postgresql.conf now causes it to revert to
> its default value.  [In 8.2 and earlier releases] commenting out an entry
> left the parameter's value unchanged until the next server restart."
>
> This is probably why Roberto didn't see the expected behavior until a
> server restart.
>

In fact, googling for the problem I found those release notes, which led
me to think that the restart was necessary...
:D
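For reference, the working form of Erik's suggestion as a postgresql.conf
fragment (on 8.2 an explicit empty string, rather than a commented-out line,
is what actually disables archiving):

```
# postgresql.conf (8.2): disable WAL archiving with an explicit empty
# string; merely commenting the line out leaves the old value in effect
# until the next server restart.
archive_command = ''
```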

-- 
Roberto Scattini
 ___ _
 ))_) __ )L __
((__)(('(( ((_)



Re: [GENERAL] Disk configurations....

2008-01-24 Thread Alan Hodgson
On Thursday 24 January 2008, Nathan Wilhelmi <[EMAIL PROTECTED]> wrote:
> Hello - Typical case of a software guy needing to spec hardware for a
> new DB server. Further typified by not knowing the exact amount of data
> and I/O patterns. So if you were to spec a disk system for a new general
> purpose PostgreSQL server, any suggestions of what to start with?
>
> Details I have:
>
> 1) We think 500GB is enough storage for the DB.
> 2) I would guess 66% read 33% write patterns, maybe closer to 50/50.
>
> I would assume that we would want to go SCSI RAID. Would you do the
> whole disk array as RAID 5 or would you partition it up differently?
> Would you go with 3x300GB disks or would you use more smaller disks to
> get there?

RAID-5 is probably about the worst choice for a high-write-activity 
database. Certainly pg_xlog should not be on a RAID-5 array. Also, RAID-5 is 
worse on smaller arrays than on larger arrays.

SCSI (or SAS) is probably only worthwhile if you need 15K RPM drives.

More disks are usually better than fewer expensive disks.

Personally, I would get 8 new SATA-II drives, put them on a good SAS 
controller with a battery-protected write-back cache, and set them up in 
RAID-10. If you later find out you need more IOPs you can replace them with 
15K RPM SAS drives without throwing out the rest of the server.

Without any idea as to your transaction volume, though, this is a very 
general recommendation.

--
Alan




Re: [GENERAL] [ADMIN] Backup

2008-01-24 Thread Scott Marlowe
On Jan 24, 2008 1:08 PM, Dominic Carlucci <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
> We have a center in Europe who has just started to use PostgreSQL and
> was asking me if there are any Symantec product or other products that
> backup this type of database.   We presently run VERITAS ver9.1 on
> windows2003 server.  What is being used by users out there now.  We are
> thinking of upgrading to the latest Symantec backup exec software but am not
> sure if this version does backup PostgreSQL. When called for support they
> told me to go to the Symantec site and search the compatibility list. I
> didn't find anything dealing with PostgreSQL database.
> Any answer would help me simplify my backup situation.

Not sure if there's some kind of special plug in for Symantec, but
basically, you want to run pg_dump or pg_dumpall and then point
symantec to back up the file thus created.



Re: [GENERAL] close connection

2008-01-24 Thread Bill Moran
"Dominique Bessette - Halsema" <[EMAIL PROTECTED]> wrote:
>
> is there a way to close the connection in postgres after i insert something?
> 

The short answer is "yes".

In order to provide a more detailed answer, I suspect you're going to have
to provide a little more context ... are you writing a C application, or
using the psql program or something else?

-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] [ADMIN] Backup

2008-01-24 Thread Phillip Smith
> We have a center in Europe who has just started to use PostgreSQL and was
> asking me if there are any Symantec product or other products that backup
> this type of database.

It doesn't appear to. I've just been through the whole rigmarole of
BackupExec for some Windows servers, and I couldn't find anything to deal
with PostgreSQL.

Just dump Postgres to the file system and back up that dump. How often you
dump depends on what your Recovery Point requirements are.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] close connection

2008-01-24 Thread Dominique Bessette - Halsema
I'm HTTP POSTing to Postgres from my WFS.  In other words, Postgres is my
datastore and I'm posting XML to it.  So do you know how to close it in XML?

On 1/24/08, Bill Moran <[EMAIL PROTECTED]> wrote:
>
> "Dominique Bessette - Halsema" <[EMAIL PROTECTED]> wrote:
> >
> > is there a way to close the connection in postgres after i insert
> something?
> >
>
> The short answer is "yes".
>
> In order to provide a more detailed answer, I suspect you're going to have
> to provide a little more context ... are you writing a C application, or
> using the psql program or something else?
>
> --
> Bill Moran
> http://www.potentialtech.com
>


Re: [GENERAL] Disk configurations....

2008-01-24 Thread Martin Gainty
64 bit?
http://www.postgresql.org/docs/8.2/interactive/install-procedure.html

M-
- Original Message - 
From: "Nathan Wilhelmi" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, January 24, 2008 2:21 PM
Subject: [GENERAL] Disk configurations


> Hello - Typical case of a software guy needing to spec hardware for a 
> new DB server. Further typified by not knowing the exact amount of data 
> and I/O patterns. So if you were to spec a disk system for a new general 
> purpose PostgreSQL server, any suggestions of what to start with?
> 
> Details I have:
> 
> 1) We think 500GB is enough storage for the DB.
> 2) I would guess 66% read 33% write patterns, maybe closer to 50/50.
> 
> I would assume that we would want to go SCSI RAID. Would you do the 
> whole disk array as RAID 5 or would you partition it up differently? 
> Would you go with 3x300GB disks or would you use more smaller disks to 
> get there?
> 
> Any other gotchas or suggestions to look for?
> 
> Thanks!
> 
> -Nate
> 
> 
> 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] close connection

2008-01-24 Thread Erik Jones


On Jan 24, 2008, at 3:47 PM, Dominique Bessette - Halsema wrote:

is there a way to close the connection in postgres after i insert  
something?



Of course.  Exactly how will be dependent on what you're using to  
access the db (language/framework-wise).


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [GENERAL] Disk configurations....

2008-01-24 Thread Bill Moran
> - Original Message - 
> From: "Nathan Wilhelmi" <[EMAIL PROTECTED]>
> To: 
> Sent: Thursday, January 24, 2008 2:21 PM
> Subject: [GENERAL] Disk configurations
> 
> 
> > Hello - Typical case of a software guy needing to spec hardware for a 
> > new DB server. Further typified by not knowing the exact amount of data 
> > and I/O patterns. So if you were to spec a disk system for a new general 
> > purpose PostgreSQL server, any suggestions of what to start with?
> > 
> > Details I have:
> > 
> > 1) We think 500GB is enough storage for the DB.
> > 2) I would guess 66% read 33% write patterns, maybe closer to 50/50.
> > 
> > I would assume that we would want to go SCSI RAID. Would you do the 
> > whole disk array as RAID 5 or would you partition it up differently? 
> > Would you go with 3x300GB disks or would you use more smaller disks to 
> > get there?

Do NOT use RAID5.  Use RAID 10.  If you can afford it, get a SCSI RAID
controller with battery-backed cache and 15,000 RPM drives.

-- 
Bill Moran
http://www.potentialtech.com



[GENERAL] DB wide Vacuum(Goes thru readonly tables) vs Autovacuum

2008-01-24 Thread Ow Mun Heng
I'm currently seeing more and more problems with vacuum as the DB size
gets bigger and bigger (~220GB+).

Bear in mind that I'm working on a fairly big DB with unfairly sized
hardware (Celeron 1.7G, 2x500GB RAID1 dbspace1, 1x500GB dbspace2, 1x80GB
system, 768MB RAM, 2GB swap on dbspace2).

IO is the main bottleneck when doing the vacuum; I've had vacuum stuck on
a particular 5GB table for over 5 hours without moving, and I had to kill
the entire DB and restart.

Right now, I've already implemented partitioning of some of the huge
tables (weekly) and moved the older ones to read-only tables which does
not have updates/deletes etc.

I'm doing both autovacuum and a nightly vacuum. The nightly vacuum is
DB-wide (so that I can see where the FSM stands), and this unnecessarily(?)
hits the read-only tables. Is there a way to tell it _not_ to vacuum those
tables and yet still give me the FSM numbers? (If I vacuum per-table via
vacuumdb, I won't get the FSM information.)
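One way to approximate this (with the caveat already noted above, that only a
database-wide vacuum reports the overall FSM summary) is to generate the
per-table VACUUM commands from pg_tables, skipping the read-only partitions.
The 'archive_%' name pattern here is only a placeholder for however the
read-only tables are actually named:

```sql
-- Sketch: build VACUUM ANALYZE statements for every user table except
-- the read-only partitions (the LIKE pattern is an assumption); the
-- output can be fed back through psql.
SELECT 'VACUUM ANALYZE ' || schemaname || '.' || tablename || ';'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
  AND tablename NOT LIKE 'archive_%';
```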

autovacuum = on # enable autovacuum subprocess?
autovacuum_vacuum_threshold = 200   # min # of tuple updates before
autovacuum_vacuum_scale_factor = 0.03   # fraction of rel size before
autovacuum_analyze_scale_factor = 0.02  # fraction of rel size before


I've bumped maintenance_work_mem from 32MB to 64MB and now to 128MB, I've
stopped all activity on the DB while I'm vacuuming, and I see that
setting it to 128MB is keeping things zippy.

Any comments would be appreciated on how best to tune this. (with the
aforementioned hardware limitation)


PS: shared_buffers = 200MB
effective_cache_size = 350MB




[GENERAL] match accented chars with ASCII-normalised version

2008-01-24 Thread brian
The client for a web application I'm working on wants certain URLs to 
contain the full names of members ("SEO-friendly" links). Scripts would 
search on, say, a member directory entry based on the name of the 
member, rather than the row ID. I can easily join first & last names 
with an underscore (and split on that later) and replace spaces with +, 
etc. But many of the names contain multibyte characters and so the URLs 
would become URL-encoded, eg:


Adelina España -> Adelina_Espa%C3%B1a

The client won't like this (and neither will I).

I can create a conversion array to replace certain characters with 
'normal' ones:


Adelina_Espana

However, I then run into the problem of trying to match 'Espana' to 
'España'. Searching online, I found a few ideas (soundex, intuitive 
fuzzy something-or-other) but mostly they seem like overkill for this 
application.


The best I can come up with is to add a 'link_name' column to the table 
 that holds the 'normalised' version of the name ('Adelina_Espana', or 
even 'adelina_espana'). The duplication bugs me a little but the table 
currently stands at a whopping ~3500 names, so I'm not too concerned.


My question is: well, does this look like the way to go, considering 
it's just a web app (and isn't likely to ever top 1 names)? Or is 
there something clever (yet not overkill) that I'm missing?


If I do go this route, I'd add an insert/update trigger to call a 
function (PL/Perl, I'm looking at you) that handles the conversion to 
link_name.
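As a sketch of what that trigger's core transformation could do in plain SQL
(the translate() character lists below are illustrative, not exhaustive, and
the members/full_name/link_name names are assumptions, not the real schema):

```sql
-- Map accented characters to their ASCII counterparts (the two
-- translate() lists must line up position for position), then
-- lowercase and replace spaces with underscores.
UPDATE members
SET link_name = lower(replace(
        translate(full_name,
                  'ÁÉÍÓÚÜÑáéíóúüñ',
                  'AEIOUUNaeiouun'),
        ' ', '_'));
-- 'Adelina España' becomes 'adelina_espana'
```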


brian



Re: [GENERAL] can't create index with 'dowcast' row

2008-01-24 Thread Tom Lane
Louis-David Mitterrand <[EMAIL PROTECTED]> writes:
>   CREATE UNIQUE INDEX visit_idx ON visit_buffer USING btree (id_session, 
>   id_story, created_on::date);

>   psql:visit_pkey.sql:5: ERROR:  syntax error at or near "::"

The reason that didn't work is that you need parentheses around an index
expression (otherwise the CREATE INDEX syntax would be ambiguous).

>   CREATE UNIQUE INDEX visit_idx ON visit_buffer USING btree (id_session, 
> id_story, extract(date from created_on));
>   psql:visit_pkey.sql:4: ERROR:  functions in index expression must be 
> marked IMMUTABLE

I take it created_on is timestamp with time zone, not plain timestamp?
The problem here is that the coercion to date is not immutable because
it depends on the timezone setting.  (The other way would have failed
too, once you got past the syntax detail.)  You need to figure out
what your intended semantics are --- in particular, whose idea of
midnight should divide one day from the next --- and then use a
unique index on something like

((created_on AT TIME ZONE 'Europe/Paris')::date)

Note that the nearby recommendation to override the immutability
test with a phonily-immutable wrapper function would be a real bad
idea, because such an index would misbehave anytime someone changed
their timezone setting.
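Spelled out against the original statement, that suggestion would be something
like the following (with 'Europe/Paris' standing in for whichever zone is
supposed to define midnight):

```sql
-- The extra parentheses around the expression are required by the
-- CREATE INDEX syntax; AT TIME ZONE pins down whose midnight divides
-- one day from the next, making the ::date coercion immutable.
CREATE UNIQUE INDEX visit_idx ON visit_buffer USING btree (
    id_session,
    id_story,
    ((created_on AT TIME ZONE 'Europe/Paris')::date)
);
```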

regards, tom lane
