Re: [GENERAL] Is it possible to findout actual owner of table?

2010-03-09 Thread dipti shah
Hmm... that would be too much work, I think. Anyway, could you point me to some
useful links for PostgreSQL middleware?

Thanks.

On Tue, Mar 9, 2010 at 1:13 PM, John R Pierce  wrote:

> dipti shah wrote:
>
>> What is middleware?
>>
>
> An application server that does all the business logic.  Your user software
> calls the application server to do things, and it in turn accesses the
> database.  User-written software is not allowed to connect directly to
> the databases at all.  A classic example of a middleware platform is Apache
> Tomcat.
>
>
>


[GENERAL] Unexpected result from selecting an aliased but non-existing column called "name"

2010-03-09 Thread Ian Barwick
Hi

I was wondering where some spurious information in a query was
coming from - it looked like something was inserting all the
values of a table row as a comma-separated list.

It turns out I was attempting to reference a non-existent column
called (literally) "name", which instead of throwing an error produces
the aforementioned list. This only seems to happen with "name", and
only if it is referenced with the table name or alias.


To reproduce:

  test=> CREATE table xx(id int, val text);
  CREATE TABLE
  test=> INSERT INTO xx values(1,'hello world');
  INSERT 0 1
  test=> SELECT name FROM xx;
  ERROR:  column "name" does not exist
  LINE 1: SELECT name FROM xx;

  test=> SELECT xx.name from xx;
 name
  ---
   (1,"hello world")
  (1 row)

  test=> SELECT xx.foobar FROM xx;
  ERROR:  column xx.foobar does not exist
  LINE 1: SELECT xx.foobar FROM xx;
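A hedged note on the apparent mechanism (drawn from the explanation linked
later in this thread, so treat the details as an assumption rather than
authoritative): attribute notation and function-call notation are
interchangeable, so xx.name is tried as name(xx); with no such function, it
seems to fall back to a cast of the whole row to the type "name", producing
the composite text seen above:

  -- Hypothetical equivalent of the surprising query above:
  SELECT name(xx) FROM xx;  -- function-call form that xx.name resolves to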


Reproducible on 8.4.1 and 8.4.2; does not occur on 8.3.1
(an old test version I happen to have hanging around).


Questions:
- is this a feature?
- and if so, where is it documented?
  (given that the key word in this is "name", this is a tricky one
   to research).


Thanks for any pointers.


Ian Barwick

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] has_schema_privilege function

2010-03-09 Thread dipti shah
Hi,

I referred to
http://www.postgresql.org/docs/8.4/static/functions-info.html#FUNCTIONS-INFO-ACCESS-TABLE
and learned that I can fetch information about permissions on a schema.

Below commands work fine:

SELECT has_schema_privilege('user1', 'mydb', 'USAGE');
SELECT has_schema_privilege('user1', 'mydb', 'CREATE');

...But I want to check whether the user has ALL permissions or not. The
following doesn't work:

SELECT has_schema_privilege('user1', 'mydb', 'ALL');

Do I have to check both the USAGE and CREATE permissions to check for ALL
permissions?
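For what it's worth (an editorial sketch, not from the thread): there is no
'ALL' argument for has_schema_privilege, and USAGE and CREATE are the only
schema-level privileges, so ANDing the two checks is equivalent:

  SELECT has_schema_privilege('user1', 'mydb', 'USAGE')
     AND has_schema_privilege('user1', 'mydb', 'CREATE') AS has_all;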

Thanks.


Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-09 Thread Rodger Donaldson

Cyril Scetbon wrote:

Does anyone know what differences between Linux kernels 2.6.29 and
2.6.30 could cause this big a difference (7x TPS!)?
http://www.phoronix.com/scan.php?page=article&item=linux_2624_2633&num=2


http://www.csamuel.org/2009/04/11/default-ext3-mode-changing-in-2630

Most likely.

I shall bite my tongue about how I feel regarding a lot of the ext3/ext4
shenanigans around this time.




[GENERAL] Can we overload = operator to word numeric = text

2010-03-09 Thread venkatrao . b
Hello,

We have migrated an Oracle database to PostgreSQL.
In Oracle, char-to-numeric type conversion is implicit (i.e. we can compare
char = numeric); but in PostgreSQL it throws errors. There are many
functions - around 2000 - and we cannot go and add explicit type casts in
every function where there is a problem.

Is there any way around this problem? I mean, is there any way to
make the = operator work for numeric = character varying?
Your suggestions are highly appreciated.

Venkat
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: [GENERAL] Can we overload = operator to word numeric = text

2010-03-09 Thread venkatrao . b
Dear Pavel,

Thanks a lot...

It worked.

Regards,
Venkat



From:
Pavel Stehule 
To:
venkatra...@tcs.com
Cc:
pgsql-nov...@postgresql.org, pgsql-general@postgresql.org
Date:
03/09/2010 04:07 PM
Subject:
Re: [GENERAL] Can we overload = operator to word numeric = text



2010/3/9  :
>
> Hello,
>
> We have migrated oracle  database to postgre.
> In oracle char to numeric type conversion is explicit (i.e. we can compare
> char = numeric); but in postgre it is throwing errors. There are so many
> functions - around 2000, and we can not go and do explict type casting in
> every function , where there is problem.
>
> Is there any way to come out of this problem. I mean is there any way to
> make = operator work for numeric = charater varying .
> Your suggestions are highly appreciated.

try

create or replace function num_text_eq(numeric, varchar)
returns bool as $$
  select $1 = $2::numeric$$
language sql;

create operator = ( leftarg=numeric, rightarg=varchar, procedure=num_text_eq);
postgres=# select 10='10';
 ?column?
--
 t
(1 row)

regards
Pavel Stehule




>
> Venkat
>






[GENERAL] warm standby possible with 8.1?

2010-03-09 Thread zhong ming wu
Dear List

Is it possible to have a warm standby with 8.1?  I have set up log
shipping to a standby server, but it seems that I cannot make it read any
subsequent WAL files except the first one applied with the initial data.
With pg_standby on 8.2+ it is so easy to make this work, but I would also
like to have a warm standby for this 8.1 server.

My setup is to monitor the data/ dir for recovery.done.  If it is there,
stop the server, rename recovery.done to recovery.conf, and then start
the server.
Will this work as a warm standby if recovery.conf contains a restore command?

restore_command='cp walfiles/%f %p'

Thanks

mr.wu



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
Wish I could, Tom.  I need a non-production, read-write copy of the
database that is updated every 1-2 hours from production. I don't set
this requirement, the business does. I just have to do it if it's
technically possible.

I found a way to do it very easily using LVM snapshots and WAL log
shipping, but the net effect is I'm bringing a new LVM snapshot copy of
the database out of recovery every 1-2 hours.  That means I'd have to
spend 15 minutes, or one-quarter of the time, doing an analyze every
time I refresh the database.  That's fairly painful.  The LVM snap and
restart only takes 1-2 minutes right now.  

If you have any other ideas how I can accomplish or improve this I'm all
ears.  

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, March 08, 2010 10:32 PM
To: Scot Kreienkamp
Cc: Scott Mead; pgsql-general@postgresql.org
Subject: Re: [GENERAL] autovacuum question 

"Scot Kreienkamp"  writes:
>> Why not just add an 'analyze' as the last step of the restore job?

> Due to the amount of time it takes.  The disks are slow enough to make a
> database-wide analyze painful since I would have to repeat it every 1-2
> hours, IE every reload time.

You claimed that before.  It didn't make any sense then and it doesn't
now.  There is no way that an analyze is expensive compared to a
database reload.

Maybe what you need to be doing is rethinking the strategy that involves
reloading every couple of hours...

regards, tom lane



Re: [GENERAL] Unexpected result from selecting an aliased but non-existing column called "name"

2010-03-09 Thread Adrian Klaver
On Tuesday 09 March 2010 12:50:45 am Ian Barwick wrote:
> Hi
>
> I was wondering where some spurious information in a query was
> coming from - it looked like something was inserting all the
> values of a table row as a comma-separated list.
>
> It turns out I was attempting to reference a non-existent column
> called (literally) "name", which instead of throwing an error produces
> the aforementioned list. This only seems to happen with "name", and
> only if it is referenced with the table name or alias.
>
>
>
>
> Ian Barwick

See this recent thread for explanation:
http://archives.postgresql.org/pgsql-general/2010-02/msg01038.php

-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread venkatrao . b
Hello,

In PostgreSQL, when I try to use an alias name in an UPDATE statement like
below -

-
update mytable x
set x.name = 'asdf'
where x.no = 1 
---

it gives an error - mytable does not have a column x.

We have migrated code from Oracle to PostgreSQL 8.4. Is there any solution
for this?
(The functions compiled without any compilation errors - now when we are
trying to run them we are getting problems.)

Please help..
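A sketch of the usual rewrite (editorial, not from the thread): PostgreSQL
does not allow the target column in SET to be qualified with the table
alias; only the references elsewhere in the statement may use it:

  UPDATE mytable x
  SET name = 'asdf'   -- not x.name: the SET column must be unqualified
  WHERE x.no = 1;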

Venkat




Re: [GENERAL] autovacuum question

2010-03-09 Thread Bryan Montgomery
Could you have two of these non-production databases? Prepare one in the
background, including an analyze and then make it the 'live' non-production
database then use the offline / alternative database for the next load
prepare that and then switch it on when ready.

In this scenario you'd need twice the disk space I guess but the 'downtime'
would be a lot less. I'd imagine you could have databases on different ports
and switch them at the roll over point, or maybe even just different
database names.
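A hedged sketch of the switch-over Bryan describes (the database names here
are hypothetical): prepare and ANALYZE the staging copy in the background,
then swap by renaming, which requires that no sessions are connected to
either database at that moment:

  -- run while no one is connected to either database
  ALTER DATABASE reports RENAME TO reports_old;
  ALTER DATABASE reports_staging RENAME TO reports;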

Just an alternative idea to throw out there.

Bryan.

On Tue, Mar 9, 2010 at 8:47 AM, Scot Kreienkamp wrote:

> Wish I could Tom.  I need a non-production, read-write copy of the
> database that is updated every 1-2 hours from production. I don't set
> this requirement, the business does. I just have to do it if it's
> technically possible.
>
> I found a way to do it very easily using LVM snapshots and WAL log
> shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> the database out of recovery every 1-2 hours.  That means I'd have to
> spend 15 minutes, or one-quarter of the time, doing an analyze every
> time I refresh the database.  That's fairly painful.  The LVM snap and
> restart only takes 1-2 minutes right now.
>
> If you have any other ideas how I can accomplish or improve this I'm all
> ears.
>
> Thanks,
>
> Scot Kreienkamp
> skre...@la-z-boy.com
>
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Monday, March 08, 2010 10:32 PM
> To: Scot Kreienkamp
> Cc: Scott Mead; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] autovacuum question
>
> "Scot Kreienkamp"  writes:
> >> Why not just add an 'analyze' as the last step of the restore job?
>
> > Due to the amount of time it takes.  The disks are slow enough to make a
> > database-wide analyze painful since I would have to repeat it every 1-2
> > hours, IE every reload time.
>
> You claimed that before.  It didn't make any sense then and it doesn't
> now.  There is no way that an analyze is expensive compared to a
> database reload.
>
> Maybe what you need to be doing is rethinking the strategy that involves
> reloading every couple of hours...
>
>regards, tom lane
>
>


Re: [GENERAL] autovacuum question

2010-03-09 Thread Greg Stark
On Tue, Mar 9, 2010 at 1:47 PM, Scot Kreienkamp  wrote:
> I found a way to do it very easily using LVM snapshots and WAL log
> shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> the database out of recovery every 1-2 hours.  That means I'd have to
> spend 15 minutes, or one-quarter of the time, doing an analyze every
> time I refresh the database.  That's fairly painful.  The LVM snap and
> restart only takes 1-2 minutes right now.

Your snapshot should have the same stats that the server does, so this
doesn't actually seem to explain the discrepancy.

You may be running into performance problems with LVM if the snapshot is
the one paying the price for all the CoW copies. Or it could be that
doing retail block copies as needed results in them being fragmented
and destroying the sequential scan performance. You might be able to
reduce the difference by making sure to do a vacuum and a checkpoint
immediately prior to the snapshot. That would hopefully achieve
setting most hint bits so that read-only queries on the snapshot don't
cause writes to blocks just to set them.

There might be an option in LVM to materialize the entire snapshot
which might be able to bring the performance up to the same level and
hopefully allocate all the blocks sequentially.


-- 
greg



[GENERAL] How many file descriptors does postgres need?

2010-03-09 Thread Royce Ausburn
G'day all,

We recently had a bit of a catastrophe when one of our postgres databases 
opened too many files.  It was a reasonably easy fix, but it did get me 
thinking.  Is there a rule of thumb in determining how many file descriptors 
should be available to a postgres database/cluster?

I'd be happy to set it to some large number, but I'd really like some 
confidence about how much the database can grow with a new limit.

Cheers!

--Royce




Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
I'm gonna take a scientific wild-assed guess that the real issue here
is caching, or more specifically, lack thereof when you first start up
your copy of the db.

[Scot Kreienkamp] 
That is definitely one of the problems.  No way to help that that I'm
aware of.  



Re: [GENERAL] How many file descriptors does postgres need?

2010-03-09 Thread Steve Atkins

On Mar 9, 2010, at 5:11 AM, Royce Ausburn wrote:

> G'day all,
> 
> We recently had a bit of a catastrophe when one of our postgres databases 
> opened too many files.  It was a reasonably easy fix, but it did get me 
> thinking.  Is there a rule of thumb in determining how many file descriptors 
> should be available to a postgres database/cluster?
> 
> I'd be happy to set it to some large number, but I'd really like some 
> confidence about how much the database can grow with a new limit.

PostgreSQL is supposed to cope with file descriptor limits quite happily, as 
long as the OS actually restricts the number of files a process can have open. 
If it doesn't restrict the number and just falls over when there are too many, 
there's a PostgreSQL configuration option to limit it (max_files_per_process, I think).

What OS are you on, and what was the catastrophe?

Cheers,
  Steve




Re: [GENERAL] Can we overload = operator to word numeric = text

2010-03-09 Thread Pavel Stehule
2010/3/9  :
>
> Hello,
>
> We have migrated oracle  database to postgre.
> In oracle char to numeric type conversion is explicit (i.e. we can compare
> char = numeric); but in postgre it is throwing errors. There are so many
> functions - around 2000, and we can not go and do explict type casting in
> every function , where there is problem.
>
> Is there any way to come out of this problem. I mean is there any way to
> make = operator work for numeric = charater varying .
> Your suggestions are highly appreciated.

try

create or replace function num_text_eq(numeric, varchar)
returns bool as $$
  select $1 = $2::numeric$$
language sql;

create operator = ( leftarg=numeric,  rightarg=varchar, procedure=num_text_eq);
postgres=# select 10='10';
 ?column?
--
 t
(1 row)

regards
Pavel Stehule




>
> Venkat
>



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
 

> ISTM that 9.0's read-only standby feature may be of use to you.  I know
> it doesn't help you *today* but have you looked at it yet?
>
> Okay, so the RO database won't work.  How much data are we talking?
> How much growth do you see between snapshots?

The initial database size is 31 gigs.  I give it 5 gigs in the snapshot
to grow in, but I would be surprised if more than 200 megs of data
changes.  The actual change rate should be very low.



Re: [GENERAL] Libpq: copy file to bytea column

2010-03-09 Thread Albe Laurenz
seiliki wrote:
>>> The data types of tableout.c1 and tablein.c1 are both bytea. 

[...]

>>> However, I get the following errors from log when calling 
>>> libpq functions PQputCopyData() and PQputCopyEnd().
>>> 
>>> 2010-03-06 20:47:42 CST ERROR:  invalid byte sequence for encoding "UTF8": 
>>> 0x00

[...]

> > When you use COPY FROM STDIN, you send data from the client to the
> > server, which is something quite different. Data sent from the client
> > are expected to be in the client encoding, not in the server encoding.

[...]

> Changing client encoding can not make PQputCopyData() work, 
> either. The problem seems to be caused by the fact that 
> PQputCopyData() does not accept zero value  ('\0' in C), 
> which I think can not be encoded to any character regardless 
> of which client encoding is used.
> 
> Yes, the data I am trying to import is binary data. That is 
> why I use bytea to store it.
> 
> What I can't understand is why PQputCopyData() encodes 
> incoming character string, which was exported by "COPY ... TO 
> '/tmp/t'" SQL command under psql prompt, for bytea columns 
> and refuse to accept or misinterpret zero value.

Sorry, that was my mistake. I did not see that you said that the
columns were bytea. So you can safely ignore everything I said before.

> Below is the more or less self contained code that also 
> yields the same problem.
> 
> #define BYTEA "abcd\\011\\000ABCDEFGHIJKLMNO"

I looked at it again, and the problem is the number of backslashes.

You are using text mode COPY, not binary COPY, so all the
non-printable characters are escaped with backslash sequences.

As described in
http://www.postgresql.org/docs/8.4/static/datatype-binary.html
certain octets must be escaped.
The problem is that there are two levels of escaping:
First there is some escaping for text literals, and then there
is the bytea escape syntax.

So for the zero octet, \000 would be the bytea escape, and since
backslash is a special character for text literals, it must be
escaped itself.

Ok, so double backslash.

Now since you represent the string as a literal in C, you have a third
level of escaping. So each backslash must again be represented by a
double backslash.

This leads to C strings like "\\\\000".

Your "\\000" would send '\000' to PostgreSQL, which the string
parser interprets as character zero, see
http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS

This is rejected as string constant right away.

[...]

> Another issue I can not understand is that both 
> PQputCopyData() and PQputCopyEnd() always return 1 even 
> though the copy operation actually fails. That being said, 
> the above code always shows "Done" but actually error occurs.

As described in
http://www.postgresql.org/docs/current/static/libpq-copy.html
you'll have to call PQgetResult after PQputCopyEnd to get the
result of the COPY statement.

Here's my modified version of your sample that works for me:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

#define BYTEA "abcd\\\\011\\\\000ABCDEFGHIJKLMNO"

int main()
{
    PGconn *PgConnect;
    PGresult *result;

    PgConnect = PQconnectdb("");
    if (!PgConnect)
        puts("Fail to connect.");
    else {
        result = PQexec(PgConnect, "COPY tablein FROM STDIN");
        if (PGRES_COPY_IN == PQresultStatus(result)) {
            if (PQputCopyData(PgConnect, BYTEA, strlen(BYTEA)) == 1) {
                if (PQputCopyEnd(PgConnect, NULL) == 1) {
                    PQclear(result);
                    result = PQgetResult(PgConnect);
                    if (PGRES_COMMAND_OK == PQresultStatus(result))
                        puts("Done");
                    else
                        puts(PQerrorMessage(PgConnect));
                }
                else
                    puts(PQerrorMessage(PgConnect));
            }
            else
                puts(PQerrorMessage(PgConnect));
        }
        else
            puts(PQerrorMessage(PgConnect));
        PQfinish(PgConnect);
    }
}

The problems I spotted in your version:
- there must be four backslashes in the #define
- you forgot to call PQgetResult
- the second argument of PQputCopyData must be the length of the buffer,
  not the length of the resulting bytea field (which is 21, by the way)
- you did not free the memory of "result"

Note that my sample also cannot serve as a textbook example, since
for example PQexec could return a NULL result if it runs out of memory.

Yours,
Laurenz Albe



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scott Mead
On Tue, Mar 9, 2010 at 10:01 AM, Scott Mead wrote:

>
> On Tue, Mar 9, 2010 at 9:56 AM, Scott Marlowe wrote:
>
>> On Tue, Mar 9, 2010 at 6:47 AM, Scot Kreienkamp 
>> wrote:
>> > Wish I could Tom.  I need a non-production, read-write copy of the
>> > database that is updated every 1-2 hours from production. I don't set
>> > this requirement, the business does. I just have to do it if it's
>> > technically possible.
>> >
>> > I found a way to do it very easily using LVM snapshots and WAL log
>> > shipping, but the net effect is I'm bringing a new LVM snapshot copy of
>> > the database out of recovery every 1-2 hours.  That means I'd have to
>> > spend 15 minutes, or one-quarter of the time, doing an analyze every
>> > time I refresh the database.  That's fairly painful.  The LVM snap and
>> > restart only takes 1-2 minutes right now.
>> >
>> > If you have any other ideas how I can accomplish or improve this I'm all
>> > ears.
>>
>> I'm gonna take a scientific wild-assed guess that the real issue here
>> is caching, or more specifically, lack thereof when you first start up
>> your copy of the db.
>>
>
> ISTM that 9.0's read-only standby feature may be of use to you.  I know it
> doesn't help you *today* but have you looked at it yet?
>


   Okay, so the RO database won't work.  How much data are we talking?  How
much growth do you see between snapshots?

   --Scott M


Re: [GENERAL] log_statement and syslog severity

2010-03-09 Thread Greg Sabino Mullane



Bruce replied:
...
>> This means that, even using syslog as a destination, it's not possible for
>> me to filter statements without some sort of log-text parsing, which I'd
>> prefer to avoid on effort, performance and data-integrity grounds.

> Our logging system is very flexible, but not work-free on the user end. 
> I don't see us changing things in that area.

Bruce, that's a little harsh, I think the original poster has a legitimate 
request. Personally, I'd love to be able to split the logs on various things, 
the most important to me being durations and per-database. I looked at the 
code about a year ago to see how hard this would be and found it non-trivial
(for me), as the code pretty much hard-wires the assumption that everything
goes to a single filehandle.
It's definitely an area for improvement, and should be a TODO if not already.

- -- 
Greg Sabino Mullane g...@turnstep.com
PGP Key: 0x14964AC8 201003091022
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






Re: [GENERAL] Entering a character code in a query

2010-03-09 Thread John Gage
I would just like to thank Albe and Jasen for their responses.  What an
extraordinary environment Postgres is!  Human and computing.

On Tue, Mar 9, 2010 at 1:32 PM, Albe Laurenz wrote:

> John Gage wrote:
> > I would like to use the following query:
> >
> > SELECT english || '\n' || english || '\x2028' || french AS
> > output FROM vocab_words_translated;
> >
> > where \x2028 is the hexadecimal code for a soft carriage return.
> >
> > However, this does not work.
> >
> > Can anyone help with this problem?
>
> If you have PostgreSQL 8.4 with standard_conforming_strings = on,
> you could write:
>
> english || E'\n' || english || U&'\2028' || french
>
> Otherwise, you have to resort to
>
> english || E'\n' || english || chr(8232) || french
>
> (provided your server_encoding is UTF8).
>
> Yours,
> Laurenz Albe
>
>


Re: [GENERAL] Entering a character code in a query

2010-03-09 Thread Albe Laurenz
John Gage wrote:
> I would like to use the following query:
> 
> SELECT english || '\n' || english || '\x2028' || french AS 
> output FROM vocab_words_translated;
> 
> where \x2028 is the hexadecimal code for a soft carriage return.
> 
> However, this does not work.
> 
> Can anyone help with this problem?

If you have PostgreSQL 8.4 with standard_conforming_strings = on,
you could write:

english || E'\n' || english || U&'\2028' || french

Otherwise, you have to resort to

english || E'\n' || english || chr(8232) || french

(provided your server_encoding is UTF8).

Yours,
Laurenz Albe



Re: [GENERAL] How many file descriptors does postgres need?

2010-03-09 Thread Tom Lane
Steve Atkins  writes:
> Postgresql is supposed to cope with file descriptor limits quite happily, as 
> long as the OS actually restricts the number of files a process can have 
> open. If it doesn't restrict the number, just falls over if there's too many, 
> there's a postgresql configuration option to limit it (max_files_per_process, 
> I think).

Usually the problem is not that Postgres falls over, but that everything
else on the box starts to fall over once PG has run the kernel out of
file descriptors.  Make sure max_connections * max_files_per_process is
less than the kernel's file table size, with enough daylight for the
machine's other activities.
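Tom's arithmetic can be checked from SQL (a sketch; on Linux, compare the
result against the kernel limit reported by sysctl fs.file-max):

  SELECT current_setting('max_connections')::int
       * current_setting('max_files_per_process')::int
         AS worst_case_open_files;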

regards, tom lane



[GENERAL] has_schema_privilege function

2010-03-09 Thread Jignesh Shah
Hi, I have created the function below. I am checking the return value of
has_schema_privilege by comparing the flag to "f". I think this is not a
robust check, because if "f" ever becomes "false" in a future version, my
stored procedure will work improperly.

Could you tell me if there is a more robust way to make sure that user1
doesn't have CREATE permissions on the mydb schema?

CREATE OR REPLACE FUNCTION schema_perm_test()
  RETURNS void AS
$BODY$
$rv = spi_exec_query("SELECT has_schema_privilege('user1', 'mydb',
'CREATE') AS flag;");
if(lc($rv->{rows}->[0]->{flag}) eq "f") {
 # Do tasks
}
$BODY$
  LANGUAGE 'plperl' VOLATILE SECURITY DEFINER

Thanks.
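One way to avoid matching PL/Perl's textual boolean (an editorial sketch,
not from the thread) is to move the logic into the query itself and test the
row count rather than the flag text:

  -- returns a row exactly when user1 lacks CREATE on mydb; in plperl,
  -- test $rv->{processed} instead of comparing the flag to "f"
  SELECT 1 WHERE NOT has_schema_privilege('user1', 'mydb', 'CREATE');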


[GENERAL] Update view/table rule order of operations or race condition

2010-03-09 Thread Dan Fitzpatrick
I have a table with a trigger that inserts records into a second table  
on insert. I have a view that has a rule that inserts a record into  
the first table then updates the records that the first table's  
trigger inserted into the second table. From what I can see, when the  
view rule executes, the records in the second table are not yet  
inserted so they do not get updated. Please see code below (simplified  
for readability). Any ideas on why this is or another solution would  
be appreciated. Thanks.


-- Tables
CREATE TABLE a (a_id SERIAL, name VARCHAR(10), type_id INT);
CREATE TABLE item_type (item_type_id SERIAL, a_type_id INT, type_name VARCHAR(10));
CREATE TABLE a_item (a_item_id SERIAL, a_id INT, type_id INT, val NUMERIC);


-- Add some type data
insert into item_type values (default,6,'quantity');
insert into item_type values (default,6,'price');
insert into item_type values (default,6,'discount');
-- Show type data
select * from item_type;
-- item_type_id | a_type_id | type_name
----------------+-----------+-----------
--            1 |         6 | quantity
--            2 |         6 | price
--            3 |         6 | discount
--(3 rows)


-- Trigger function to add items
CREATE LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION add_items()
  RETURNS "trigger" AS
$BODY$BEGIN
INSERT INTO a_item (a_id, type_id)
SELECT a.a_id, item_type.item_type_id
FROM a
JOIN item_type ON a.type_id = item_type.a_type_id
WHERE a.a_id = NEW.a_id;
RETURN NULL;
END;$BODY$
LANGUAGE 'plpgsql' VOLATILE;

-- Create trigger
CREATE TRIGGER insert_a_items
  AFTER INSERT
  ON a
  FOR EACH ROW
  EXECUTE PROCEDURE add_items();

-- Insert an "a" record
insert into a values (default,'Test',6);
-- Trigger works
select * from a_item;
-- a_item_id | a_id | type_id | val
-- ----------+------+---------+-----
--         1 |    1 |       1 |
--         2 |    1 |       2 |
--         3 |    1 |       3 |
-- (3 rows)


-- The view is a flattened version of the "a" and "a_item" tables for a
-- specific case

CREATE OR REPLACE VIEW options AS
 SELECT a.a_id AS options_id, a.name AS options_name,
   (SELECT a_item.val
FROM   a_item
WHERE  a_item.a_id = a.a_id AND a_item.type_id = 1) AS quantity,
   (SELECT a_item.val
FROM   a_item
WHERE  a_item.a_id = a.a_id AND a_item.type_id = 2) AS price,
   (SELECT a_item.val
FROM   a_item
WHERE  a_item.a_id = a.a_id AND a_item.type_id = 3) AS discount
  FROM a
  WHERE a.type_id = 6;

-- View output
 select * from options;
-- options_id | options_name | quantity | price | discount
-- -----------+--------------+----------+-------+----------
--          1 | Test         |          |       |
-- (1 row)



-- The rule inserts into the "a" table and then updates the "a_item" 
-- table with the records that the trigger is supposed 
-- to execute before the next commands are run.

CREATE OR REPLACE RULE insert_options AS
  ON INSERT TO options DO INSTEAD
-- Insert into table "a"
(INSERT INTO a (a_id, type_id, name)
   VALUES (new.options_id, 6, new.options_name);
-- Update 3 records in table "a_item"
-- This doesn't work because the records do not appear to exist yet?

 UPDATE a_item SET val = new.quantity
   WHERE a_item.a_id = new.options_id AND a_item.type_id = 1;
 UPDATE a_item SET val = new.price
   WHERE a_item.a_id = new.options_id AND a_item.type_id = 2;
 UPDATE a_item SET val = new.discount
   WHERE a_item.a_id = new.options_id AND a_item.type_id = 3;
);

-- Insert into the view
insert into options values (nextval('a_a_id_seq'),'Test 2',1,2,3);

-- View data
select * from options;
 options_id | options_name | quantity | price | discount
------------+--------------+----------+-------+----------
          1 | Test         |          |       |
          2 | Test 2       |          |       |
(2 rows)

If I change the insert_options rule to insert into a_item, then I get  
6 records in a_item (3 from insert_a_items and 3 from insert_options).  
The first 3 have null vals and the second 3 have the correct vals.


It should be:

 options_id | options_name | quantity | price | discount
------------+--------------+----------+-------+----------
          1 | Test         |          |       |
          2 | Test 2       |        1 |     2 |        3

Any ideas why this is or if there is another approach?

Thanks,

Dan




Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-09 Thread Greg Smith

Rodger Donaldson wrote:
> Cyril Scetbon wrote:
>> Does anyone know what can be the differences between linux kernels
>> 2.6.29 and 2.6.30 that can cause this big difference (TPS x 7 !)
>> http://www.phoronix.com/scan.php?page=article&item=linux_2624_2633&num=2
>
> http://www.csamuel.org/2009/04/11/default-ext3-mode-changing-in-2630


Yeah, I realized I answered the wrong question--Cyril wanted to know 
"why was 2.6.30 so much faster?", not "why did 2.6.33 get so much 
slower?", which is what I was focusing on.  There's a good intro to what 
happened to speed up 2.6.30 at http://lwn.net/Articles/328363/ , with 
the short version being "the kernel stopped caring about data integrity 
at all in 2.6.30 by switching to writeback as its default". 

To give you an idea how wacky this is, less than a year ago Linus 
himself was ranting about how terrible that specific implementation 
was:  http://lkml.org/lkml/2009/3/24/415 
http://lkml.org/lkml/2009/3/24/460 and making it the default exposes a 
regression to bad behavior to everyone who upgrades to a newer kernel.


I'm just patiently waiting for Chris Mason (who works for Oracle--they 
care about doing the right thing here too) to replace Ted Tso as the 
person driving filesystem development in Linux land.  That his 
"data=guarded" implementation was only partially merged into 2.6.30, and 
instead combined with this awful default change, speaks volumes about 
how far the Linux development priorities are out of sync (pun intended) 
with what database users expect.  See 
http://www.h-online.com/open/news/item/Kernel-Log-What-s-coming-in-2-6-30-File-systems-New-and-revamped-file-systems-741319.html 
for a summary on how that drama played out.  I let out a howling laugh 
when I read that "The rest have been put on hold, with the 
development cycle already entering the stabilisation phase."  Linux 
kernel development hasn't had a stabilization phase in years.


It's interesting that we have pgbench available as a lens to watch all 
this through, because in its TPC-B-like default mode it has an 
interesting property:  if performance on regular hardware gets too fast, 
it means data integrity must be broken, because regular drives can't do 
physical commits very often.  What Phoronix should be doing is testing 
simple fsync rate using something like sysbench first[1], and if those 
numbers come back higher than disk RPM rate declare the combination 
unusable for PostgreSQL purposes rather than reporting on the fake numbers.


[1] 
http://www.westnet.com/~gsmith/content/postgresql/pg-benchmarking.pdf , 
page 26


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Thomas Kellerer

Hi,

I tried to import a text file using the \copy command in psql using the 
following:

\copy foo (foo, bar) from foobar.txt delimiter as '\t' csv header

but that gives me an error:

ERROR:  COPY delimiter must be a single one-byte character

So how can I specify a tab character if I also need to specify that my file has 
a header line?

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DROP column: documentation unclear

2010-03-09 Thread Harald Fuchs
In article <20100308213549.gb...@svana.org>,
Martijn van Oosterhout  writes:

>> "subsequent ... will store a null value" would imply that deleted columns 
>> will still take some place, while "the space will be reclaimed ..." would 
>> suggest that new rows (insert or updates in mvcc) don't have the deleted 
>> column anymore - I'm not quite sure how to interpret this.  What is pg 
>> doing?

> What you're missing is that in postgres NULLs are stored as a bit in
> the header and there is no data. So in a sense NULLs take no space
> (well, one bit) which means both statements are true.

But if you already have eight nullable columns, the (maybe originally
non-null) column which has been dropped would cause the header to be
one byte larger, wouldn't it?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Tom Lane
Thomas Kellerer  writes:
> \copy foo (foo, bar) from foobar.txt delimiter as '\t' csv header

> So how can I specify a tab character if I also need to specify that my file 
> has a header line?

Type an actual tab.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to get a C function built with MSVC

2010-03-09 Thread Bryan Montgomery
Thanks for the suggestion. I wasn't able to get the whole pg to compile -
but I was able to take one of the contrib packages and use that as a
template for my needs.

Bryan.

On Wed, Feb 24, 2010 at 2:00 AM, Magnus Hagander wrote:

> 2010/2/23 Bryan Montgomery :
> > Hello,
> > I have a very simple function that I've created. I can compile it in
> linux without any problems. However, I've tried creating a dll for windows
> and am not having much luck.
> >
> > I first tried MSVC 2008 but after seeing some comments have tried
> compiling it in MSVC 2005 as well.
>
> With 8.4 and earlier, only MSVC2005 is supported. If you're trying on
> 9.0, MSVC 2008 should be fine.
>
>
> > In both I get several errors, whether I compile the code as c++ and use
> an extern C block, or compile the code as C.
>
> They definitely should be C. I wouldn't recommend C++ with extern defines.
>
>
> > I've tried creating new projects from both ides for dlls, as well as
> trying to create a project from an external source file.
> >
> > I'm using postgres version 8.3.5. I'm also including the port\win32_msvc
> directory as well as the port\win32 include directory.
> >
> > I was wondering if someone has a packaged 'solution' (makefile /def /.sln
> whatever :) that they can post / send? At this point I'm stumped with just
> trying to build a basic dll with the postgres headers.
>
>
> I don't know of a packaged one, but your best bet is probably to
> download the source and run the "mkvcbuild" script on it. Then look at
> one of the projects generated for a contrib project - they are the
> most likely to be close to what you want.
>
>
> warning C4005: '_WIN32_WINNT' : macro redefinition
> c:\program files\postgresql\include\server\pg_config_os.h, line 7
>
> Are you defining _WIN32_WINNT yourself somewhere? Or including
> something else before the postgres headers that define it?
>
> error C2894: templates cannot be declared to have 'C' linkage
> c:\program files\microsoft visual studio
> 8\vc\platformsdk\include\wspiapi.h, line 44
> error C2894: templates cannot be declared to have 'C' linkage
> c:\program files\microsoft visual studio
> 8\vc\platformsdk\include\wspiapi.h, line 44
> error C2466: cannot allocate an array of constant size 0
> c:\program files\microsoft visual studio 8\vc\include\sys\utime.inl, line 44
>
> These are all coming out of your SDK files. Can you build your
> projects if you *don't* include the postgres headers at all? Perhaps
> you are stuck with one of those platform sdk mismatches we've seen now
> and then?
>
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/
>


[GENERAL] question on 8.4.2 build postgres C functions

2010-03-09 Thread Steve Coleman
Hi,

I am moving from version 8.1.4 to 8.4.2 and have set up a Ubuntu box configured 
with the 8.4.2 postgres one-click installer.  It works great!

I imported everything from 8.1.4 with only two failures, and those were 
homegrown "C" encryption / decryption functions that we use here.

For 8.1.4 I had compiled the C functions by adding a folder in the contrib 
folder of the source code, and used gmake to build it.

For 8.4.2 I plan on adding the "magic block" and rebuilding the functions.  I 
downloaded the source tarball postgresql-8.4.2.tar.gz and put that in a tmp 
folder, set up my contrib/decrypt folder, and set up the make.

My problem is that 8.4.2 source has no "Makefile.global".  This file is 
essential I believe, and is used by all of the Makefiles in the source tarball.

So am I missing something obvious, or is there a different procedure for building 
"C" functions that I'm not aware of?

Thanks!
   STeve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DROP column: documentation unclear

2010-03-09 Thread Tom Lane
Harald Fuchs  writes:
> Martijn van Oosterhout  writes:
>> What you're missing is that in postgres NULLs are stored as a bit in
>> the header and there is no data. So in a sense NULLs take no space
>> (well, one bit) which means both statements are true.

> But if you already have eight nullable columns, the (maybe originally
> non-null) column which has been dropped would cause the header to be
> one byte larger, wouldn't it?

No; the size of the bitmap is equal to the total number of columns, not
the number of columns that are null.  One way to think about it is that
the first null in a particular row costs NUMCOLUMNS bits to store, and
then any additional nulls are free.

Anyway, arguing about the size of the null bitmap seems to me to be
beside the point.  If you care about the space occupied by a column at
all, it's going to be a lot more than one bit.  It's that macroscopic
space usage that the DROP COLUMN documentation is talking about
reclaiming, not whether or not you need a null bitmap.
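
If you want to see the effect anyway, pg_column_size() on a whole row makes
the bitmap visible. A rough sketch (the table name is made up, and exact
sizes vary with alignment padding):

```sql
-- A row with no NULLs carries no null bitmap; the first NULL adds a
-- bitmap sized by the total column count, rounded up to whole bytes.
CREATE TABLE bitmap_demo (c1 int, c2 int, c3 int, c4 int, c5 int,
                          c6 int, c7 int, c8 int, c9 int);
INSERT INTO bitmap_demo VALUES (1,2,3,4,5,6,7,8,9);     -- no bitmap
INSERT INTO bitmap_demo VALUES (1,2,3,4,5,6,7,8,NULL);  -- bitmap appears
SELECT pg_column_size(d.*) AS row_bytes FROM bitmap_demo d;
```

The second row comes back smaller overall (the NULL column stores no data),
despite carrying the bitmap.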

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Thomas Kellerer

Tom Lane wrote on 09.03.2010 18:21:

> Thomas Kellerer  writes:
>> \copy foo (foo, bar) from foobar.txt delimiter as '\t' csv header
>
>> So how can I specify a tab character if I also need to specify that my
>> file has a header line?
>
> Type an actual tab.



Blush

That easy?


Thanks
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question on 8.4.2 build postgres C functions

2010-03-09 Thread Tom Lane
Steve Coleman  writes:
> My problem is that 8.4.2 source has no "Makefile.global".  This file is 
> essential I believe, and is used by all of the Makefiles in the source 
> tarball.

Makefile.global is created by the configure step.  This has not changed
since 8.1.  If you want to create a file that matches somebody else's
build, examine pg_config output to see what configure options they used,
and duplicate that.  It would also be wise to check the
src/include/pg_config.h result from configure to ensure that it matches
their build (hopefully pg_config.h is included in what they distributed).

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread Adrian Klaver
On Tuesday 09 March 2010 5:51:31 am venkatra...@tcs.com wrote:
> Hello,
>
> In Postgres, when I am trying to give an alias name in an update statement
> like the one below -
>
> -
> update mytable x
> set x.name = 'asdf'
> where x.no = 1
> ---
>
> is giving error - mytable is not having col x.
>
> We have migrated code from Oracle to Postgres 8.4. Is there any solution
> for this?
> (functions were compiled without any compilation errors - now when we are
> trying to run these we are getting problems)
>
> Please help..
>

From here:
http://www.postgresql.org/docs/8.4/interactive/sql-update.html

"column

The name of a column in table. The column name can be qualified with a 
subfield name or array subscript, if needed. Do not include the table's name in 
the specification of a target column — for example, UPDATE tab SET tab.col = 1 
is invalid. "
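
For what it's worth, the alias itself is accepted in 8.4; it is only the
target column in SET that must not be qualified. A minimal sketch against
the table from the original post:

```sql
-- Qualify with the alias everywhere except the SET target column.
UPDATE mytable AS x
   SET name = 'asdf'
 WHERE x.no = 1;
```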


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Raymond O'Donnell
On 09/03/2010 17:30, Thomas Kellerer wrote:
> Tom Lane wrote on 09.03.2010 18:21:
>> Thomas Kellerer  writes:
>>> \copy foo (foo, bar) from foobar.txt delimiter as '\t' csv header
>>
>>> So how can I specify a tab character if I also need to specify that
>>> my file has a header line?
>>
>> Type an actual tab.
>>
> 
> Blush
> 
> That easy?

This is Postgres you're talking about - of course it's that easy! :-)

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] has_schema_privilege function

2010-03-09 Thread Greg Stark
On Tue, Mar 9, 2010 at 10:28 AM, Jignesh Shah
 wrote:
> Could you tell me is there any other robust way to make sure that user1
> doesn't have CREATE permissions on mydb schema?

It depends what you're worried about. If you're worried that plperl
will begin mapping booleans to perl variables differently or Postgres
will change the text representation then you could alter the SQL to
say something like CASE WHEN has_schema_privilege() THEN 1 ELSE 0 END
or whatever constant you prefer like 'yes' and 'no' or 'ok' and ''.
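
Concretely, the original plperl snippet could be adjusted along these lines
(a sketch, reusing the user1/mydb names from the question):

```perl
# Map the boolean to a stable integer inside SQL, so the check no longer
# depends on how plperl renders boolean output ("f" vs. "false").
$rv = spi_exec_query(
    "SELECT CASE WHEN has_schema_privilege('user1', 'mydb', 'CREATE')
                 THEN 1 ELSE 0 END AS flag;");
if ($rv->{rows}->[0]->{flag} == 0) {
    # user1 cannot CREATE in schema mydb: do tasks
}
```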

-- 
greg

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Update view/table rule order of operations or race condition

2010-03-09 Thread Dan Fitzpatrick

I think I found the problem with this. The rule:

CREATE OR REPLACE RULE insert_options AS
 ON INSERT TO options DO INSTEAD
   (INSERT INTO a (a_id, type_id, name)
  VALUES (new.options_id, 6, new.options_name);
UPDATE a_item SET val = new.quantity
  WHERE a_item.a_id = new.options_id AND a_item.type_id = 1;
UPDATE a_item SET val = new.price
  WHERE a_item.a_id = new.options_id AND a_item.type_id = 2;
UPDATE a_item SET val = new.discount
  WHERE a_item.a_id = new.options_id AND a_item.type_id = 3;
);


Is creating a new value from the sequence a_a_id_seq for  
"new.options_id" on each UPDATE call. How do I use the variable  
new.options_id in the three update statements without incrementing the  
sequence again?








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent help needed- alias name in update statement

2010-03-09 Thread Raymond O'Donnell
On 09/03/2010 13:51, venkatra...@tcs.com wrote:
> Hello,
> 
> In Postgres, when I am trying to give an alias name in an update statement
> like the one below -
> 
> -
> update mytable x
> set x.name = 'asdf'
> where x.no = 1 
> ---

Just leave off the "x.":

update mytable
set name = 'asdf'
where no = 1;

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question on 8.4.2 build postgres C functions

2010-03-09 Thread Tom Lane
Tom Lane  writes:
> Steve Coleman  writes:
>> My problem is that 8.4.2 source has no "Makefile.global".  This file is 
>> essential I believe, and is used by all of the Makefiles in the source 
>> tarball.

> Makefile.global is created by the configure step.  This has not changed
> since 8.1.

On second thought, I got too immersed in answering the question you
asked, and failed to answer the uber-question which is whether there's
a better way to do it.  Yes: use pgxs.  Then you don't need a
configured source tree at all, and everything you do need should
come in pre-built installations.

If you started by copying a contrib makefile then you should have a
block like

ifdef USE_PGXS
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
else
subdir = contrib/auto_explain
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif

You could use this as-is by always typing "make USE_PGXS=1", but
what I'd suggest is dropping the ifdef and the else block so that
pgxs becomes the default.  Contrib modules have this so they can
be built either way, but you don't really need that flexibility
for a private module.
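
With the ifdef and else block dropped as suggested, a minimal pgxs Makefile
for a private C module might look like this (the module name "decrypt" is
borrowed from the earlier post; adjust it to your source file):

```make
# Builds decrypt.so from decrypt.c against an installed PostgreSQL,
# using the pgxs infrastructure located via pg_config.
MODULES = decrypt
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

Then "make" and "make install" work without a configured source tree.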

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
Would the stats come across in WAL log shipping to a physically separate 
server?  My understanding is that they won't.

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com


-Original Message-
From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
Sent: Tuesday, March 09, 2010 9:39 AM
To: Scot Kreienkamp
Cc: Tom Lane; Scott Mead; pgsql-general@postgresql.org
Subject: Re: autovacuum question

On Tue, Mar 9, 2010 at 1:47 PM, Scot Kreienkamp  wrote:
> I found a way to do it very easily using LVM snapshots and WAL log
> shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> the database out of recovery every 1-2 hours.  That means I'd have to
> spend 15 minutes, or one-quarter of the time, doing an analyze every
> time I refresh the database.  That's fairly painful.  The LVM snap and
> restart only takes 1-2 minutes right now.

Your snapshot should have the same stats that the server does, so this
doesn't actually seem to explain the discrepancy.

You may be running into performance problems with LVM if the snapshot is
the one paying the price for all the CoW copies. Or it could be that
doing retail block copies as needed results in them being fragmented
and destroying the sequential scan performance. You might be able to
reduce the difference by making sure to do a vacuum and a checkpoint
immediately prior to the snapshot. That would hopefully achieve
setting most hint bits so that read-only queries on the snapshot don't
cause writes to blocks just to set them.

There might be an option in LVM to materialize the entire snapshot
which might be able to bring the performance up to the same level and
hopefully allocate all the blocks sequentially.
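
As a sketch of that suggestion (database, volume group, and logical volume
names here are placeholders, not from the thread):

```shell
# Set hint bits and flush dirty buffers so the snapshot copy doesn't
# pay for those writes on first read, then take the LVM snapshot.
psql -d production -c 'VACUUM;'
psql -d production -c 'CHECKPOINT;'
lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata
```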


-- 
greg

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Update view/table rule order of operations or race condition

2010-03-09 Thread Tom Lane
Dan Fitzpatrick  writes:
> The rule is creating a new value from the sequence a_a_id_seq for  
> "new.options_id" on each UPDATE call. How do I use the variable  
> new.options_id in the three update statements without incrementing the  
> sequence again?

You don't.  This is one of the major disadvantages of rules: they're
macros, not functions, and as such don't have any real local variables.
You should probably look into whether you can do what you want with a
trigger instead of a rule.
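
One way to sidestep the macro expansion is to have the DO INSTEAD rule
invoke a single function, so each NEW expression (including the nextval()
call behind options_id) is evaluated exactly once. A sketch using the
tables from earlier in the thread (the function name is made up):

```sql
CREATE OR REPLACE FUNCTION insert_options_fn(p_id int, p_name text,
                                             p_qty numeric, p_price numeric,
                                             p_disc numeric)
RETURNS void AS $$
BEGIN
    INSERT INTO a (a_id, type_id, name) VALUES (p_id, 6, p_name);
    UPDATE a_item SET val = p_qty   WHERE a_id = p_id AND type_id = 1;
    UPDATE a_item SET val = p_price WHERE a_id = p_id AND type_id = 2;
    UPDATE a_item SET val = p_disc  WHERE a_id = p_id AND type_id = 3;
END;
$$ LANGUAGE plpgsql;

-- NEW's expressions appear once each, so they are evaluated once each.
CREATE OR REPLACE RULE insert_options AS
  ON INSERT TO options DO INSTEAD
  SELECT insert_options_fn(new.options_id, new.options_name,
                           new.quantity, new.price, new.discount);
```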

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum question

2010-03-09 Thread Scott Marlowe
On Tue, Mar 9, 2010 at 6:47 AM, Scot Kreienkamp  wrote:
> Wish I could Tom.  I need a non-production, read-write copy of the
> database that is updated every 1-2 hours from production. I don't set
> this requirement, the business does. I just have to do it if it's
> technically possible.
>
> I found a way to do it very easily using LVM snapshots and WAL log
> shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> the database out of recovery every 1-2 hours.  That means I'd have to
> spend 15 minutes, or one-quarter of the time, doing an analyze every
> time I refresh the database.  That's fairly painful.  The LVM snap and
> restart only takes 1-2 minutes right now.
>
> If you have any other ideas how I can accomplish or improve this I'm all
> ears.

I'm gonna take a scientific wild-assed guess that the real issue here
is caching, or more specifically, lack thereof when you first start up
your copy of the db.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Thomas Kellerer

Raymond O'Donnell wrote on 09.03.2010 18:39:
> This is Postgres you're talking about - of course it's that easy! :-)


:)

The main reason I asked, was that the manual actually claims that '\t' can be used 
("The following special backslash sequences are recognized by COPY FROM")

As this is part of the description for the COPY command, does this maybe mean 
it is only valid for COPY but not for \copy?
(If that is the case, it should be documented somewhere.)

Or is this related to the value of standard_conforming_strings?

Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
 

> ISTM that 9.0's read-only standby feature may be of use to you.  I know
> it doesn't help you *today* but have you looked at it yet?

[Scot Kreienkamp]
I had considered it and it will make my life easier for my reporting
server, but unfortunately in this case I need a read-write copy.



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scott Mead
On Tue, Mar 9, 2010 at 9:56 AM, Scott Marlowe wrote:

> On Tue, Mar 9, 2010 at 6:47 AM, Scot Kreienkamp 
> wrote:
> > Wish I could Tom.  I need a non-production, read-write copy of the
> > database that is updated every 1-2 hours from production. I don't set
> > this requirement, the business does. I just have to do it if it's
> > technically possible.
> >
> > I found a way to do it very easily using LVM snapshots and WAL log
> > shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> > the database out of recovery every 1-2 hours.  That means I'd have to
> > spend 15 minutes, or one-quarter of the time, doing an analyze every
> > time I refresh the database.  That's fairly painful.  The LVM snap and
> > restart only takes 1-2 minutes right now.
> >
> > If you have any other ideas how I can accomplish or improve this I'm all
> > ears.
>
> I'm gonna take a scientific wild-assed guess that the real issue here
> is caching, or more specifically, lack thereof when you first start up
> your copy of the db.
>

ISTM that 9.0's read-only standby feature may be of use to you.  I know it
doesn't help you *today* but have you looked at it yet?

--Scott M


>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] managing tablespaces like files?

2010-03-09 Thread Justin Graf
On 3/9/2010 12:07 AM, Sam Carleton wrote:
> I would like to thank both John and Scott for the help.  It is very 
> clear to me that PostgreSQL isn't the ideal solution for my current 
> model.  The conversation has gotten me thinking of ways the model 
> could be modified to work with PostgrSQL (and other client/server 
> RDBM).  Thus I will return to the drawing board just to double check 
> to see if there might be a better model.
>
> Thank you all, this has been truly valuable!
>
> Sam

Hello Sam:

I've had similar conversations with application developers who are used 
to working with Access databases, i.e. file-based databases. When it comes 
time to back up the database, move it to another computer, or share it 
over a file-share, they look at databases as just files to be copied 
around from machine to machine.  No database server, whether Oracle, 
PostgreSQL, MSSQL, MySQL, or DB2, allows simply copying the database to 
another location/server and having it start right up. None of them work 
that way.

These databases expressly deny direct access to the files and assume 
they are the only process accessing them.  Another problem you may run 
into and need to think about is that anti-virus apps have a tendency to 
create all kinds of problems with database servers.

The application can still work, and even make the user's life easier; the 
approach to backing up, restoring, and moving the databases around just 
has to be different.

Take a look at a couple of examples how others have solved this problem

One company using PostgreSQL, aimed directly at small companies (where 
there is no IT help, let alone money to pay for IT), wrote a very nice and 
easy-to-use front end for their application to back up, restore, and move 
the application to another server.  Go to www.3cx.com and download their 
app (they have a free version) and check out the instructions for backup 
and restore.  Their interface for backing up and restoring the database is 
very clean and easy.

Another company with a very nice and easy-to-use backup and restore of a 
MySQL database is Gordano Messaging Server (www.gordano.com).  Their 
backup, recovery, and moving to another server is very simple: a little 
command-line tool backs up the database, user profiles, and other system 
settings into a file called settings.txt.  They also have a GUI tool that 
automates backing up the database.







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL Conference East, Hotel Discount Deadline

2010-03-09 Thread Joshua D. Drake
March 9th, 2010

PostgreSQL Conference East, the largest PostgreSQL Conference for Users,
Developers, Decision makers and anyone using PostgreSQL arranged for a
hotel discount for attendees from the Radisson Warwick Hotel (the
location of the conference).

The retail price of a double room is ~ 199.00. The discount rate is
132.00.

If you are attending PostgreSQL Conference East and you would like the
discount you must register by the 11th of March. For more information on
PostgreSQL Conference East accommodations:

http://www.postgresqlconference.org/east/2010/accommodations

For a detailed agenda:

http://www.postgresqlconference.org/2010/east/agenda

For a list of talks:

http://www.postgresqlconference.org/2010/east/talks

To register:

https://www.postgresql.us/purchase

Many thanks to our Premiere and Gold Sponsors:

Command Prompt, Inc: http://www.commandprompt.com/
EnterpriseDB: http://www.enterprisedb.com/
OmniTI: http://www.omniti.com/
OTG: http://www.otg-nc.com/
RedHat: http://www.redhat.com/

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use or Mr. or 
Sir.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Tom Lane
Thomas Kellerer  writes:
> The main reason I asked, was that the manual actually claims that '\t' can be 
> used ("The following special backslash sequences are recognized by COPY FROM")

\t is recognized in the copy data, not in the command's parameters.
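
In other words, the SQL-level COPY command gets its delimiter through the
normal string-literal parser, so an escape string works there, while psql's
\copy parses its own arguments and takes them literally. A sketch (file
path is made up):

```sql
-- Server-side COPY: E'\t' is parsed by the backend as a tab.
COPY foo (foo, bar) FROM '/tmp/foobar.txt' DELIMITER E'\t' CSV HEADER;

-- psql's \copy: the argument is literal, so put a real tab character
-- between the quotes (shown here as <TAB>):
--   \copy foo (foo, bar) from foobar.txt delimiter '<TAB>' csv header
```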

regards, tom lane
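Tom's distinction (backslash sequences are recognized in the copy data, not in the command's parameters) also means the data file itself simply separates fields with a real tab character. A quick sketch outside psql, using Python's csv module with hypothetical sample rows, showing that a literal tab, not the two characters backslash-t, is what belongs between fields:

```python
import csv
import io

# Two sample rows destined for a hypothetical two-column table
rows = [["1", "hello world"], ["2", "copy me"]]

buf = io.StringIO()
csv.writer(buf, delimiter="\t").writerows(rows)
data = buf.getvalue()          # fields are separated by a real tab character

# Round-trip: parsing with a literal tab delimiter recovers the rows
parsed = list(csv.reader(io.StringIO(data), delimiter="\t"))
print(parsed)                  # [['1', 'hello world'], ['2', 'copy me']]
```

A file written this way can then be loaded with \copy by giving the delimiter as a quoted literal tab rather than the sequence '\t'.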



Re: [GENERAL] Update view/table rule order of operations or race condition

2010-03-09 Thread Dan Fitzpatrick

On Mar 9, 2010, at 11:00 AM, Tom Lane wrote:


Dan Fitzpatrick  writes:

The rule is creating a new value from the sequence a_a_id_seq for
"new.options_id" on each UPDATE call. How do I use the variable
new.options_id in the three update statements without incrementing the
sequence again?


You don't.  This is one of the major disadvantages of rules: they're
macros, not functions, and as such don't have any real local variables.

You should probably look into whether you can do what you want with a
trigger instead of a rule.

regards, tom lane


Can you have a trigger on a view? I thought you could only use rules with a
view.


As another option, can you pass "NEW" (the incoming data record) as a
variable to a function, or does each field in NEW have to be explicitly
passed to the function?








Re: [GENERAL] \copy command: how to define a tab character as the delimiter

2010-03-09 Thread Adrian Klaver

On 03/09/2010 10:09 AM, Thomas Kellerer wrote:

Raymond O'Donnell wrote on 09.03.2010 18:39:

This is Postgres you're talking about - of course it's that easy! :-)


:)

The main reason I asked, was that the manual actually claims that '\t'
can be used ("The following special backslash sequences are recognized
by COPY FROM")

As this is part of the description for the COPY command, does this maybe
mean it is only valid for COPY but not for \copy? (If that is the case, it
should be documented somewhere.)

Or is this related to the value of standard_conforming_strings?

Thomas




From here:
http://www.postgresql.org/docs/8.4/interactive/app-psql.html
"The syntax of the command is similar to that of the SQL COPY  command. 
Note that, because of this, special parsing rules apply to the \copy 
command. In particular, the variable substitution rules and backslash 
escapes do not apply. "


--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Not all functions in schema pg_catalog are "visible"

2010-03-09 Thread Alvaro Herrera
Thomas Kellerer wrote:
> Hi,
> 
> I just noticed that there are functions defined (such as
> pg_catalog.time(timestamp)) that can only be called when prefixed with
> pg_catalog. However, other functions (that are at first glance defined
> identically to time()) can be called without prefixing them with pg_catalog.

I think it's not that they aren't visible, but rather that their names are
keywords.  So you could also call them by quoting them, e.g.
"time"(timestamp).  IIRC there's a view listing all keywords you can use
to figure this out.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] obsessive-compulsive vacuum behavior

2010-03-09 Thread Ben Chobot
On Mar 8, 2010, at 7:16 AM, Tom Lane wrote:

> After it's done you probably ought to look into your autovacuum
> settings.  You shouldn't have gotten into a state with so many dead
> tuples in the first place, I would think, if autovac were functioning
> properly.

Yeah, it turns out our autovacuums kept getting canceled because this table
gets a lot of churn. Now that we've turned off the cost delay, things are
much happier. Thanks!


Re: [GENERAL] [pgsql-ru-general] Deleting Large Objects

2010-03-09 Thread Serguei Mokhov
Hi John,

pgsql-general@postgresql.org is perhaps a better idea
to post this. You've posted the message to the regional
Russian mailing list where the audience is somewhat of
a narrower scope.

-s

On Tue, Mar 9, 2010 at 10:19 AM, John KEA  wrote:
> Dear PostgreSQL Creators,
>
> I am frequently using the PostgreSQL server to manage my data, but I am now
> stuck with a problem: deleting large objects works too slowly. E.g.,
> deleting 900 large objects of 1 MB each takes around 2.31 minutes. This
> dataset is not the largest one I work with; e.g., deleting 3000 x 1 MB
> objects takes around half an hour. Could you please give me some advice on
> what to do to improve the deletion time? (I've tried increasing the memory
> the server uses, but the results are the same.)
>
> Best regards!

-- 
Serguei Mokhov
http://www.cs.concordia.ca/~mokhov
http://marf.sf.net | http://sf.net/projects/marf



[GENERAL] How to? Timestamp with timezone.

2010-03-09 Thread Andre Lopes
Hi,

I have a database in a US based Server. I need to get a TIMESTAMP with the
PORTUGAL time. How can I do this?

Best Regards,


Re: [GENERAL] kernel version impact on PostgreSQL performance

2010-03-09 Thread Rodger Donaldson
On Tue, Mar 09, 2010 at 11:58:58AM -0500, Greg Smith wrote:
> Rodger Donaldson wrote:
> >Cyril Scetbon wrote:
> >>Does anyone know what can be the differences between linux kernels
> >>2.6.29 and 2.6.30 that can cause this big difference (TPS x 7 !)
> >>http://www.phoronix.com/scan.php?page=article&item=linux_2624_2633&num=2
> >
> >http://www.csamuel.org/2009/04/11/default-ext3-mode-changing-in-2630
> 
> Yeah, I realized I answered the wrong question--Cyril wanted to know 
> "why was 2.6.30 so much faster?", not "why did 2.6.33 get so much 
> slower?", which is what I was focusing on.  There's a good intro to what 
> happened to speed up 2.6.30 at http://lwn.net/Articles/328363/ , with 
> the short version being "the kernel stopped caring about data integrity 
> at all in 2.6.30 by switching to writeback as its default". 
> 
> To give you an idea of how wacky this is, less than a year ago Linus 
> himself was ranting about how terrible that specific implementation 
> was:  http://lkml.org/lkml/2009/3/24/415 
> http://lkml.org/lkml/2009/3/24/460 and making it the default exposes a 
> regression to bad behavior to everyone who upgrades to a newer kernel.

The whole O_PONIES debacle, while possibly gratifying to a small
number of kernel developers, left a pretty foul taste in my mouth, and
the deliberate knobbling of ext3 so it wouldn't be more reliable than
ext4 was... words fail me, really.  I don't know what Linus was
thinking when he accepted that patch.  I'm glad the distributors
pretty much all announced they'd override that behaviour in any
kernels they ship.

Suffice to say none of that mess inspired much confidence in ext4 for
me.

-- 
Rodger Donaldsonrodg...@diaspora.gen.nz
After what Turing did to help win the war, the British government should have
provided him with his own unit of lithe and willing young sailors.
 -- pjdoland



[GENERAL] Discovering Postgres Server from Windows Clients

2010-03-09 Thread imageguy
Howdy;
We have written an application that uses PG as the backend database.
We currently embed the database server's IP address in a text file that the
installer (created using Inno) uses when setting up the client
programs (i.e., it stores this info in a local .ini file).

Is there a recommended way of discovering a PG server on the network?

Or is it really just as simple as testing port 5432 for a range of IP
addresses?
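The port-probing approach described above can be sketched as follows (the subnet prefix and helper names are hypothetical; a successful TCP connect only proves something is listening on 5432, not that it is PostgreSQL):

```python
import socket

def port_open(host: str, port: int = 5432, timeout: float = 0.25) -> bool:
    """Return True if a TCP listener accepts a connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(prefix: str, port: int = 5432):
    """Probe prefix.1 .. prefix.254 and yield hosts with the port open."""
    for i in range(1, 255):
        host = f"{prefix}.{i}"
        if port_open(host, port):
            yield host

# e.g. candidates = list(scan_subnet("192.168.1"))   # hypothetical subnet
```

Any host found this way would still need a follow-up libpq connection attempt to confirm it really is a PostgreSQL server, so advertising the server explicitly (the .ini file you already use, DNS, or the server's Bonjour support) is usually simpler and more reliable.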

g.



[GENERAL] Postgres GIS Question

2010-03-09 Thread akp geek
Hi all -

  we are using postgis. I am running into this scenario:
when the query is executed it displays information as " 780/24 520
(500- ? ) " as opposed to " 780/24 520 (500- 1/4  )". It does this in many
parts. Has anyone run into this kind of issue? Is it related to the encoding
by any chance? Can you please share your thoughts?

Regards


Re: [GENERAL] Discovering Postgres Server from Windows Clients

2010-03-09 Thread Vick Khera
Lookup "bonjour" support in postgres server docs.

On Tue, Mar 9, 2010 at 4:17 PM, imageguy  wrote:
> Is there a recommended way of "discovery" a PG server on the network ?
>



Re: [GENERAL] Postgres GIS Question

2010-03-09 Thread Jeff Davis
On Tue, 2010-03-09 at 16:27 -0500, akp geek wrote:
>   we are using postgis. I am running into this
> scenario: when the query is executed it displays information as
> " 780/24 520 (500- ? ) " as opposed to " 780/24 520 (500- 1/4  )".
> It does this in many parts. Has anyone run into this kind of issue? Is
> it related to the encoding by any chance? Can you please share your
> thoughts?

You haven't really provided enough information to begin. What is the
server_encoding? What is the client_encoding? How are you putting the
data in? What kind of special characters are you using, and what do you
expect the result to be?

Regards,
Jeff Davis




Re: [GENERAL] How to? Timestamp with timezone.

2010-03-09 Thread Tom Lane
Andre Lopes  writes:
> I have a database in a US based Server. I need to get a TIMESTAMP with the
> PORTUGAL time. How can I do this?

Try AT TIME ZONE, or just temporarily change the timezone setting.

regards, tom lane



Re: [GENERAL] How to? Timestamp with timezone.

2010-03-09 Thread Andre Lopes
Is it possible to do this on a shared database server?

Best Regards,



On Tue, Mar 9, 2010 at 9:54 PM, Tom Lane  wrote:

> Andre Lopes  writes:
> > I have a database in a US based Server. I need to get a TIMESTAMP with
> the
> > PORTUGAL time. How can I do this?
>
> Try AT TIME ZONE, or just temporarily change the timezone setting.
>
>regards, tom lane
>


Re: [GENERAL] How to? Timestamp with timezone.

2010-03-09 Thread John R Pierce

Andre Lopes wrote:

Is it possible to do this on a shared database server?


Sure, the client timezone is a per-connection setting.


   SET TIME ZONE 'WET';

or

   SET TIME ZONE 'Europe/Lisbon';





Re: [GENERAL] warm standby possible with 8.1?

2010-03-09 Thread Greg Smith

zhong ming wu wrote:

Is it possible to have a warm standby with 8.1?


No.  You can set that up so that it replays an entire pile of log files 
sitting there when you start the server, which it sounds like you 
haven't managed yet because you're trying to treat it like a 
warm-standby.  But 8.1 isn't capable of applying log files one at a 
time; it applies whatever you've got, and then it's done with recovery 
and transitions to live.  You can't just stop the result and then feed 
it the next file, as you've already discovered through experimentation.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] How to? Timestamp with timezone.

2010-03-09 Thread Alvaro Herrera
Andre Lopes wrote:
> Is it possible to do this on a shared database server?

You can also do

ALTER DATABASE foo SET timezone TO 'someval'
ALTER ROLE bar SET timezone TO 'someval'

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[GENERAL] subscribe

2010-03-09 Thread Andrew Boag


--

===

Andrew Boag
Director
Catalyst IT Australia
Suite 401, 89 York St
Sydney 2000

http://www.catalyst-au.net
andrew.b...@catalyst-au.net
+61 421 528 125




Re: [GENERAL] log_statement and syslog severity

2010-03-09 Thread Bruce Momjian
Greg Sabino Mullane wrote:
> Bruce replied:
> ...
> >> This means that, even using syslog as a destination, it's not possible for
> >> me to filter statements without some sort of log-text parsing, which I'd
> >> prefer to avoid on effort, performance and data-integrity grounds.
> 
> > Our logging system is very flexible, but not work-free on the user end. 
> > I don't see us changing things in that area.
> 
> Bruce, that's a little harsh, I think the original poster has a legitimate 
> request. Personally, I'd love to be able to split the logs on various things, 
> the most important to me being durations and per-database. I looked at the 
> code about a year ago to see how hard this would be and found it non-trivial
> (for me), as we're really assuming hard things go to a single filehandle.
> It's definitely an area for improvement, and should be a TODO if not already.

This issue has been discussed and I think the community conclusion was
that this should not be done by the database but rather by external
tools.  I think I was giving an accurate portrayal of the odds of this
getting added.  I do not think there is enough support for this to be a
TODO item.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do



[GENERAL] Postgresql 8.1

2010-03-09 Thread AI Rumman
Please tell me the link to download Postgresql 8.1


Re: [GENERAL] Postgresql 8.1

2010-03-09 Thread Ben Chobot
All links to download postgres are carefully hidden under 
http://www.postgresql.org/download/

Whether or not there is a link there to the particular platform and package 
format you want depends upon which one you're looking for. But a few minutes of 
looking should let you determine the answer to that yourself.

And I assume you already know this, but if you're going to be installing a new 
database, 8.1 is a poor choice when you could be choosing 8.4.

On Mar 9, 2010, at 9:32 PM, AI Rumman wrote:

> Please tell me the link to download Postgresql 8.1




[GENERAL] error: C preprocessor "/lib/cpp" fails sanity check

2010-03-09 Thread AI Rumman
I am getting the following error during Postgresql 8.1 installation:

error: C preprocessor "/lib/cpp" fails sanity check

Please, any suggestions on how to solve it?


Re: [GENERAL] error: C preprocessor "/lib/cpp" fails sanity check

2010-03-09 Thread John R Pierce

AI Rumman wrote:

I am getting the following error during Postgresql 8.1 installation:
 
error: C preprocessor "/lib/cpp" fails sanity check
 
Please, any suggestions on how to solve it?


What platform (OS, version, etc.) are you building Postgres on?

Which toolchain is that from? /lib/cpp is an unusual path for the C
preprocessor; /usr/bin/cpp is more typical.






[GENERAL] SAS Raid10 vs SATA II Raid10 - many small reads and writes

2010-03-09 Thread Phillip Berry
Hi Everyone,

We're in the market for a new DB server to replace our current one (yes,
it's one of *those* questions) ;).

It'll have quad core Xeons, 36GB RAM and some sort of Raid 10 configuration.

Our provider is pushing us towards 6 x SATA II disks in a Raid 10 configuration 
or 4 x SAS disks in 
Raid 10 (budget constraints).

The application that queries this DB opens up about 100 connections and 
performs millions of 
inherently small reads and writes to process the data over the course of a few 
weeks.

My question: though the maximum throughput may be similar between 6 SATA II
and 4 SAS drives, does anybody know if the SAS drives will *significantly*
outperform the SATA drives in the face of many millions of small concurrent
reads and writes?

My Google-fu is weak today and I can't seem to come up with a definitive
answer one way or the other; does anyone have any experience they can offer?

Kind Regards
Phil Berry




Re: [GENERAL] error: C preprocessor "/lib/cpp" fails sanity check

2010-03-09 Thread Peter Eisentraut
On ons, 2010-03-10 at 12:04 +0600, AI Rumman wrote:
> I am getting the following error during Postgresql 8.1 installation:
>  
> error: C preprocessor "/lib/cpp" fails sanity check
>  
> Please, any suggestions on how to solve it?

Check the config.log file.  I would guess it's somewhat likely that you
don't have a full C compiler suite installed at all, or the installation
is botched.




Re: [GENERAL] SAS Raid10 vs SATA II Raid10 - many small reads and writes

2010-03-09 Thread Scott Marlowe
On Tue, Mar 9, 2010 at 11:49 PM, Phillip Berry
 wrote:
> Hi Everyone,
>
> We're in the market for a new DB server to replace our current one (yes it's 
> one of *those*
> questions) ;).
>
> It'll have quad core Xeons, 36GB RAM and some sort of Raid 10 configuration.
>
> Our provider is pushing us towards 6 x SATA II disks in a Raid 10 
> configuration or 4 x SAS disks in
> Raid 10 (budget constraints).

Are those your only two options?  No 6 SAS drives?  Are you looking at
7200rpm or 10krpm SATA?  15krpm or 10krpm SAS?  What RAID controller?
Battery backed Cache?  Software RAID?



Re: [GENERAL] managing tablespaces like files?

2010-03-09 Thread Sam Carleton
Justin,

It is an image viewing system for events.  Each event could have anywhere
from a few hundred images (each one a 3 to 12 MB file) to as many as one
hundred thousand.  The other files and folders are how all these images are
stored and managed.  The database contains basic customer info and
references to the images they picked (path and image name).  Each event is a
unique entity, such that the customers and their selection of images are
unique to the event in which the images were created.  Because of this,
right now I am using SQLite, which creates a new "event" database for each
and every event.  Depending on my customer, some folks simply delete the
whole event when it is over, while others back it up off their dedicated
event server machine and load it up on their desktop at home to finish
processing the images.

The images are served up via the Apache web server, so currently there is
only one process accessing the SQLite database (well, possibly two or three,
depending on how many instances Apache starts), but it is limited and there
are far more reads than writes.

The reason I am looking at other options is that it is time to develop some
new .NET client modules to access the data in new and different ways.
Initially I was going to have everything access the data through Apache, but
now I am thinking that I can speed up the time to market if I were to move
to a traditional database and let the new .NET clients connect directly to
it.

From the perspective of my end users, most of whom know NOTHING about
databases, keeping a file-based database makes a LOT of sense.  From the
standpoint of eventually taking this to also be hosted online, it makes a
LOT of sense to rework the current model into one central DB that can hold
multiple events.  The only question is how to keep the system user-friendly
for my customers who know nothing about computers.  That is the question.
In the end, my software is all about letting my customers have their cake
and eat it too!

Sam


On Tue, Mar 9, 2010 at 2:15 PM, Justin Graf  wrote:

>  On 3/9/2010 12:45 PM, Sam Carleton wrote:
>
> On Tue, Mar 9, 2010 at 10:06 AM, Justin Graf  wrote:
>
>>
>>
> Justin,
>
>  Thank you for the reply.  My application uses two databases at one time:
>  system db and the "event" db.  The system db is exactly that, used all the
> time, aka system settings.  Each time a user starts the program, the user
> has the option of creating a new event.  The concept is that an event is
> like a MS Word Document, so every time a user creates a new event, a new db
> is created along with a number of other files and folders.
>
>  It is very natural for my customers to want to pick up an event and copy
> it/back it up to another drive and/or computer.  What I am doing now is
> running through the pros and cons of changing the model such that all the
> different events are stored in one database.  There are definitely
> advantages, but there are also drawbacks.  Life as a one-man MicroISV,
> what fun! ;)
>
>  Sam Carleton
> http://www.PhotoParata.com
>
> For more comments you will have to explain what the other files and folders
> are doing.
>
> If the database stays small and does not need to manage simultaneous access
> by different users then Postgresql, MSSQL, Firebird, Mysql will just cause
> more headaches than what you gain.
>
> If the files are shared and accessed by more than 3 users at the same
> time, then it's time to look at an RDBMS, or if the data set is massive.
>
>