Re: [GENERAL] silent install

2007-10-12 Thread longlong
2007/10/12, Richard Huxton <[EMAIL PROTECTED]>:
>
> Please remember to cc: the list.


sorry.

The first result of that search is this page:
> http://pgfoundry.org/pipermail/pginstaller-devel/2005-November/000712.html
> If you read the next couple of messages you get to this page:
> http://pgfoundry.org/pipermail/pginstaller-devel/2005-November/000714.html
> In what way is that "no answer"?
>
> Did you try correcting the spelling mistake? What happened?


Shame on me for my carelessness.

I found the answer now, and it works.

Thank you for your patience.


[GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread tfinneid
Hi

As far as I understand, pg_xlog is the transaction log, i.e. the WAL.

I have a pg 8.2 which has consumed about 113MB in pg_xlog/, while the
cluster is virtually empty. There are a couple of tables in the postgres
schema, but they contain no data at the moment. My disk is filling up and
I want to purge directories that are larger than they need to be.
I performed a VACUUM FULL, because I thought that would help, but it did not.
I searched the net and the forums here and found only one post about
tuning the WAL parameters to control the pg_xlog directory size, but when
looking at the documentation for that I found nothing that could be used
for that.

Also, the base/ directory is 268MB big and still has not many tables; is it
possible to purge that directory as well?

I would appreciate some help on what to do.

regards

thomas



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hi

As far as I understand pg_xlog  is the transaction log i.e. the WAL.


Correct. And it should have files called e.g. 0001000500E9 
each 16MB long.



I have a pg 8.2 which has consumed about 113MB in pg_xlog/, while the
cluster is virtually empty.


OK, so that's 7 files?
What is your checkpoint_segments setting in postgresql.conf? If it's 3 
then you can indeed expect a maximum of 7 WAL files. For a low-activity 
system you can always turn the setting down.
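
For instance, just to check what is actually in effect (trivial, but worth
doing):

  SHOW checkpoint_segments;

and to change it, set e.g. checkpoint_segments = 1 in postgresql.conf and
reload the server.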


> There are a couple of tables in the postgres

schema, but they contain no data at the moment. My disk is filling up and
I want to purge directories that are larger than need to be.
I performed a VACUUM FULL, because I thought that would help, but it did not.
I searched the net and the forums here and found only one post about
tuning the wal parameters to control the pg_xlog directory size, but when
looking at the documentation for that I found nothing that could be used
for that

Also the base/ directory is 268MB big and still not many tables, is ti
possible to purge that directory as well?.


You might find the "pg_xxx_size()" functions useful - they are detailed 
in the functions/system-admin chapter of the manuals.
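
For example, a quick sketch (the table name is just a placeholder):

  SELECT pg_size_pretty(pg_database_size(current_database()));
  SELECT pg_size_pretty(pg_total_relation_size('some_table'));

The first gives the on-disk size of the current database, the second a
single table including its indexes and toast data.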


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pg_dump - schema diff compatibility

2007-10-12 Thread Tomi N/A
Looking at the mailing list archive, this is just one in a rather long
line of questions regarding diffing db schema dumps, but I've been
unable to find what I was looking for in any of the prior
conversations. I know of apgdiff (seems to work very nicely) and of
other specialized pg diff tools (as outdated or proprietary as they
may be), but what I'm interested in is just a plain, basic schema dump
with a database object order usable with diff.
I can't find it now, but I'm fairly certain I've read somewhere (in
the release changes of an 8.x pgsql version?) that pg_dump has been
"upgraded" so that it orders database objects fist by their
dependencies and then by name. I thought that would imply that dumping
the database like so
pg_dump -f out.sql -F p -s a_db
would give me an sql script which I could compare versions of with
plain old diff or svn diff or whatever existing diff tool I care to
use.
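
i.e. roughly this workflow (file names are only illustrative):

  pg_dump -F p -s a_db > schema_before.sql
  ... make schema changes ...
  pg_dump -F p -s a_db > schema_after.sql
  diff -u schema_before.sql schema_after.sql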

I guess my question is: is pg_dump supposed to dump the schema in a
diff-compatible, predictable way but it's not working or is pg_dump
only concerned with satisfying db object dependencies?
I would very much like this functionality because it would make pgsql
much better integrated into the work environment we have set up at the
office (using e.g. svn diff would be very nice). Tools like apgdiff
don't help as much: it's great that it's command line (can be
automated), and it does its job well, but it still only tells me e.g. that
a view is different, rather than showing me _how_ it is different or
allowing me to compare object definitions using a generic diff - which
is what I really want.

Sorry for the confusing trail of thought and thanks for any comments,
t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread tfinneid
> As checkpoints are issued, old WAL files will either be recycled
> (renamed) or deleted. Change the setting, then try (as user postgres)
> issuing a "vacuum full " followed by "SELECT pg_switch_xlog()" that
> should recycle them quicker. That'll only get you down to 3 or 4 files
> though - that's the minimum.

Nothing's changed yet; maybe I need to start using the db again before
something real happens...

>>> You might find the "pg_xxx_size()" functions useful - they are detailed
>>> in the functions/system-admin chapter of the manuals.
>>
>> Can you be a bit more specific? I tried looking for it, but the sections
>> are too big to just browse unknown information.
>
> Try looking again, perhaps?
> Go to the online manuals, go to the section on "functions and operators"
> then to the section on "system administration functions". The first
> ocurrence of the word "size" on that page is a heading for the bit you
> need.

That was not easy to find; your descriptions were not easy to decipher.
When you said sys-admin chapter I started looking for a functions chapter
in section 3 (i.e. chapters 14-28), but you were talking about chapter
9... which is in the section about the SQL language. Quite confusing.

regards

thomas



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Coercing compound types to use generic ROW comparison operators

2007-10-12 Thread Tom Lane
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> AIUI, the biggest problem with the current behavior is that there is
> no way to usefully index composite types, it looks like

> create index bar_idx on bar(f);
> create index bar_idx on bar((f).*);
> create index bar_idx on bar((f).a, (f).b);

The last case works, you just don't have enough parentheses.

regression=# create type mytype as (a int, b float);
CREATE TYPE
regression=# create table foo(f mytype);
CREATE TABLE
regression=# create index fooi on foo(((f).a), ((f).b));
CREATE INDEX

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Query problem

2007-10-12 Thread Rodrigo De León
On 10/12/07, Clemens Schwaighofer <[EMAIL PROTECTED]> wrote:
> I probably need to approach the problem differently. So both are read
> "independently" or something like this.

Also, email_a, email_b, etc. indicate that you need to consider
refactoring your schema.

You can find a lot of tutorials on normalization online.
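
For instance, a minimal sketch of one possible normalised layout (table and
column names are made up):

  CREATE TABLE person (
  person_id serial PRIMARY KEY,
  name  varchar
  );

  CREATE TABLE person_email (
  person_id integer NOT NULL REFERENCES person,
  email varchar NOT NULL,
  PRIMARY KEY (person_id, email)
  );

One row per email address, so you can join on person_email instead of
checking email_a, email_b, ... columns one by one.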

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Richard Huxton

Tom Lane wrote:

Richard Huxton <[EMAIL PROTECTED]> writes:
Hmm - I can't think how that could reach 1.9GB in size, especially since 
it has to be something different between a "raw" connection and how ODBC 
is doing things.


Try turning on log_statement to see what's really being executed.
I seem to recall reading that Access thinks it can mangle your SQL
as it sees fit.


Hmm - Mike said it was a pass-through query which should be safe. Also, 
the odbc log should have accurate SQL in it.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] DB errors

2007-10-12 Thread Filip Rembiałkowski
2007/10/12, Akash Garg <[EMAIL PROTECTED]>:
> We had a crash of one of our db systems last night.  After doing a fsck of
> he file system and getting the db backup, we're getting a lot of these
> messages in our logs.  The DB will also occasionally crash now.
>
> Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDTERROR:  could
> not access status of transaction 2259991368
> Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDTDETAIL:  Could
> not read from file "pg_clog/086B" at offset 73728: Success.

which version are you using?


-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Detailed logging, but only for one user?

2007-10-12 Thread Michael Nolan
Is there any way to enable detailed logging (i.e., at the SQL statement level)
but only for one user?
--
Mike Nolan


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Relyea, Mike
> From: Richard Huxton [mailto:[EMAIL PROTECTED] 
> > MessageContext: 1946198040 total in 258 blocks; 26624 free (43 
> > chunks);
> > 1946171416 used
> 
> Well, I don't have to be a developer to know that if there's 
> a memory problem it's that big number starting 1946... that's 
> the problem. If that's bytes, it's ~ 1.9GB
> 
> Do you see a backend process growing to 2GB+ before failure?

I'm running PG 8.2.4 on WinXP.  I used the task manager (as a quick and
dirty utility) to watch the memory usage of the backend and it seemed to
peak around 1.2GB.  My server only has 1.5GB installed memory, so that
would seem to verify that the process does indeed run out of memory.

> 
> A quick rummage through the source and I find this file, 
> backend/utils/mmgr/README containing:
> 
> MessageContext --- this context holds the current command 
> message from the frontend, as well as any derived storage 
> that need only live as long as the current message (for 
> example, in simple-Query mode the parse and plan trees can 
> live here).  This context will be reset, and any children 
> deleted, at the top of each cycle of the outer loop of PostgresMain. 
> This is kept separate from per-transaction and per-portal 
> contexts because a query string might need to live either a 
> longer or shorter time than any single transaction or portal.
> 
> Hmm - I can't think how that could reach 1.9GB in size, 
> especially since it has to be something different between a 
> "raw" connection and how ODBC is doing things.
> 
> Can you reproduce this immediately (connect, query, crash), 
> or does the system have to run for a while first?

I rebooted my server (thankfully I don't have very many clients at all,
so that helps) and before anybody else connected to it, ran the query
and observed the same result.

This seems to be a problem with the ODBC driver?  How can I narrow that
down further?

Mike

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Richard Huxton

Relyea, Mike wrote:
From: Richard Huxton [mailto:[EMAIL PROTECTED] 


Do you see a backend process growing to 2GB+ before failure?


I'm running PG 8.2.4 on WinXP.  I used the task manager (as a quick and
dirty utility) to watch the memory usage of the backend and it seemed to
peak around 1.2GB.  My server only has 1.5GB installed memory, so that
would seem to verify that the process does indeed run out of memory.


Yep

Can you reproduce this immediately (connect, query, crash), 
or does the system have to run for a while first?


I rebooted my server (thankfully I don't have very many clients at all,
so that helps) and before anybody else connected to it, ran the query
and observed the same result.

This seems to be a problem with the ODBC driver?  How can I narrow that
down further?


Well, first make 100% certain the query being executed is the one you 
see being sent from Access.


If so, the next obvious thing would be to set up an empty database with 
the same structure as your live one and try the query on that. If *that* 
crashes too then you can rule out any data processing problems.


It will also let you turn statement logging on in that database (ALTER 
DATABASE SET ...) and capture everything the ODBC driver sends. There 
might be something that leaps out at you. Take all of those statements 
and put them into a text-file and run them using psql -f . That 
should cause the same crash.
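
For example (the database name here is the one from your connection string,
and the file name is made up):

  ALTER DATABASE iqdb SET log_statement = 'all';

then reconnect from Access, reproduce the problem, and replay the captured
statements with something like:

  psql -d iqdb -f captured_statements.sql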


If it does, it means you have a self-contained test-case that someone 
else can look at for you. We can also test it on Linux/BSD etc.


Then, if you still can't see where the problem is, replace the ODBC 
driver with a different (in your case older) version and see if you 
still get the problem. Might be worth going back a few versions too, to 
see if this is something introduced recently.


http://www.postgresql.org/ftp/odbc/versions/msi/

Another option would be to try the odbc-ng project from Command Prompt 
and see if that does the same thing. I believe that's a completely 
separate code-base.


https://projects.commandprompt.com/public/odbcng/wiki/Downloads


Remember, *something* in the sequence of commands that get executed from 
Access must be different than when you execute them through pgAdmin.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] DB errors

2007-10-12 Thread Alan Hodgson
On Friday 12 October 2007, "Akash Garg" <[EMAIL PROTECTED]> wrote:
> We had a crash of one of our db systems last night.  After doing a fsck
> of he file system and getting the db backup, we're getting a lot of these
> messages in our logs.  The DB will also occasionally crash now.
>
> Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDTERROR: 
> could not access status of transaction 2259991368
> Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDTDETAIL: 
> Could not read from file "pg_clog/086B" at offset 73728: Success.
>
> Any ideas?
>

Restore from backup. And find out why your server doesn't respect fsync.

-- 
The global consumer economy can best be described as the most efficient way 
to convert natural resources into waste.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] DB errors

2007-10-12 Thread Akash Garg
8.2

On 10/12/07, Filip Rembiałkowski <[EMAIL PROTECTED]> wrote:
>
> 2007/10/12, Akash Garg <[EMAIL PROTECTED]>:
> > We had a crash of one of our db systems last night.  After doing a fsck
> of
> > he file system and getting the db backup, we're getting a lot of these
> > messages in our logs.  The DB will also occasionally crash now.
> >
> > Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16
> PDTERROR:  could
> > not access status of transaction 2259991368
> > Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16
> PDTDETAIL:  Could
> > not read from file "pg_clog/086B" at offset 73728: Success.
>
> which version are you using?
>
>
> --
> Filip Rembiałkowski
>


[GENERAL] max_*, shared_buffers, and shared memory

2007-10-12 Thread Erik Jones

Hey, everyone, a quick question about shared memory.  So,

max_connections are ~400b per connection
max_prepared_transactions are ~600b per prepared transaction
for lock management the cost is in table slots in units of ~220b per
table slot, with total table slots = max_locks_per_transaction *
(max_connections + max_prepared_transactions)
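
To make that concrete, a back-of-the-envelope example with made-up settings
(max_locks_per_transaction = 64, max_connections = 100,
max_prepared_transactions = 5):

  lock table slots  = 64 * (100 + 5) = 6720
  lock table memory ~ 6720 * 220b   ~ 1.4MB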


So, of these settings, which are accounted for as part of  
shared_buffers and which result in a system shared memory request on  
top of the shared_buffers setting at server start?  I ask because I  
thought that they were all "on top of" shared_buffers, but the
comment in the config for max_connections mentions raising  
shared_buffers to accommodate more.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Hmm - I can't think how that could reach 1.9GB in size, especially since 
> it has to be something different between a "raw" connection and how ODBC 
> is doing things.

Try turning on log_statement to see what's really being executed.
I seem to recall reading that Access thinks it can mangle your SQL
as it sees fit.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Need advice on keeping backup database up to date

2007-10-12 Thread Richard Huxton

Matthew Wilson wrote:

I am trying to figure out how I can keep the postgresql database in the
backup data center as up to date as possible.

The ideal solution would keep the backup database updated in real time
as the primary database changes.


What you are after is called "replication". There are a couple of paid 
solutions out there and a few free options.


The "default" open-source replication system for PG is called Slony and 
is trigger-based.

  http://www.slony.info/

Skype do their own replication; it uses Python, and I don't know much more
about it than that:

  http://pgfoundry.org/projects/skytools/

If you don't need to be quite so up-to-date and don't mind replicating 
an entire cluster look at WAL archiving. See Ch 23 of the manuals (and 
some other chapters). Note that this is only viable if both servers are 
identical (so, both 32-bit Intel running Debian for example).
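
The archiving side of that is just a line or two in postgresql.conf,
something like (the path is only an example):

  archive_command = 'cp %p /mnt/server/archivedir/%f'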


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Richard Huxton

Relyea, Mike wrote:
From: Richard Huxton [mailto:[EMAIL PROTECTED] 


Relyea, Mike wrote:

If I execute this query in PGAdmin III it runs without any errors and

returns no records.

Hmm. So what's different about the queries?


Nothing.  The SQL is identical.  I copied out of the log file and pasted
into PGAdmin.


Must be context then.


[145.188]ERROR from backend during send_query: 'SERROR'
[145.188]ERROR from backend during send_query: 'C53200'
[145.188]ERROR from backend during send_query: 'Mout of memory'
[145.188]ERROR from backend during send_query: 'DFailed on 
request of size 16.'

[145.188]ERROR from backend during send_query: 'Faset.c'
[145.188]ERROR from backend during send_query: 'L712'
[145.188]ERROR from backend during send_query: 'RAllocSetAlloc'

OK, so this seems to be a server-side error, which means 
something should be in the server logs. Is there anything?


I've pasted below what I found immediately before the error.


Thanks

Oh, and I'd probably split that query into about a dozen 
smaller ones - one per statement.


What do you mean one per statement?  One per transaction?


Send one query for each sql statement. That way if you get an error you 
know which failed without having to work through the SQL.



TopMemoryContext: 475184 total in 11 blocks; 12016 free (27 chunks);
463168 used
Local Buffer Lookup Table: 8192 total in 1 blocks; 1776 free (0 chunks);
6416 used
TopTransactionContext: 122880 total in 4 blocks; 14064 free (5 chunks);
108816 used
Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks);
6392 used
MessageContext: 1946198040 total in 258 blocks; 26624 free (43 chunks);
1946171416 used


Well, I don't have to be a developer to know that if there's a memory 
problem it's that big number starting 1946... that's the problem. If 
that's bytes, it's ~ 1.9GB


Do you see a backend process growing to 2GB+ before failure?

A quick rummage through the source and I find this file, 
backend/utils/mmgr/README containing:


MessageContext --- this context holds the current command message from 
the frontend, as well as any derived storage that need only live as long 
as the current message (for example, in simple-Query mode the parse and 
plan trees can live here).  This context will be reset, and any children
deleted, at the top of each cycle of the outer loop of PostgresMain. 
This is kept separate from per-transaction and per-portal contexts 
because a query string might need to live either a longer or shorter 
time than any single transaction or portal.


Hmm - I can't think how that could reach 1.9GB in size, especially since 
it has to be something different between a "raw" connection and how ODBC 
is doing things.


Can you reproduce this immediately (connect, query, crash), or does the 
system have to run for a while first?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Missing files under pg_data following power failure

2007-10-12 Thread Brad Nicholson
On Fri, 2007-10-12 at 10:54 -0500, Scott Marlowe wrote:
> On 10/12/07, Jack Orenstein <[EMAIL PROTECTED]> wrote:
> > Our testing involves cutting power to machines running postgres 7.4,
> > while under load (lots of reads and writes). When we do this, we often
> > lose some files under pg_data storing table content. I.e., the file
> > named for a table's pg_class.oid value is simply gone. This can affect
> > many tables following a power outage. We know this problem has
> > occurred when we are unable to access a table, e.g.
> >
> >  ERROR: relation "xxx" does not exist
> >
> > The table is present in the system tables, but the file storing table
> > content under pg_data is missing.
> >
> > Can anyone provide insight on this phenomenon? Why are these files
> > lost? Are they really lost, or have they simply moved somewhere? What
> > happens to the disk blocks formerly occupied by the files?
> >
> > Getting back in service following this file loss is not a problem; I'm
> > just trying to understand how postgres gets into this state.
> 
> First of all, this should not happen on a machine with proper
> fsyncing.  The possible causes are generally either fsync is off in
> postgresql.conf or the drive array <--> OS layer is lying about fsync
> operations.

What filesystem are you using?  I've seen similar problems on JFS2, it
was something with logredo or fsck being busted on a Big Blue OS.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] not work in IE

2007-10-12 Thread SCassidy
Try checking the web server log.  For example, if using apache, check 
/etc/httpd/logs/error_log, or wherever your logs directory is
located.  Very often web-based applications send STDERR to the web
server log.

Susan Cassidy




manju arumugam <[EMAIL PROTECTED]> 
Sent by: [EMAIL PROTECTED]
10/11/2007 09:09 PM

To
pgsql-general@postgresql.org
cc

Subject
[GENERAL] not work in IE






Hi,


On my site, when I update the edit user info page, it
works properly in Firefox but not in IE. The update
takes place in the database, but the page is not
displayed. What's the reason?



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match








Re: [GENERAL] Coercing compound types to use generic ROW comparison operators

2007-10-12 Thread Merlin Moncure
On 10/12/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > AIUI, the biggest problem with the current behavior is that there is
> > no way to usefully index composite types, it looks like
>
> > create index bar_idx on bar(f);
> > create index bar_idx on bar((f).*);
> > create index bar_idx on bar((f).a, (f).b);
>
> The last case works, you just don't have enough parentheses.
>
> regression=# create type mytype as (a int, b float);
> CREATE TYPE
> regression=# create table foo(f mytype);
> CREATE TABLE
> regression=# create index fooi on foo(((f).a), ((f).b));
> CREATE INDEX

wow, thats pretty neat! (although:
create index fooi on foo(((f).a));
feels awfully weird).

for the record, creating indexes this way works fully with row
comparison strategies (in 8.2+):
select * from foo where ((f).a, (f).b) > (5, 0.6) order by (f).a, (f).b limit 1;
will use the 'fooi' index above.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Missing files under pg_data following power failure

2007-10-12 Thread Martijn van Oosterhout
On Fri, Oct 12, 2007 at 11:28:04AM -0400, Jack Orenstein wrote:
> Our testing involves cutting power to machines running postgres 7.4,
> while under load (lots of reads and writes). When we do this, we often
> lose some files under pg_data storing table content. I.e., the file
> named for a table's pg_class.oid value is simply gone. This can affect
> many tables following a power outage. We know this problem has
> occurred when we are unable to access a table, e.g.

Really, the files should be under management of the operating system.
What OS are you using and is the filesystems you're using crash safe?

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] not work in IE

2007-10-12 Thread Martin Gainty
if you give us the URL we can try from here

M--
- Original Message - 
From: "Scott Marlowe" <[EMAIL PROTECTED]>
To: "manju arumugam" <[EMAIL PROTECTED]>
Cc: 
Sent: Friday, October 12, 2007 10:40 AM
Subject: Re: [GENERAL] not work in IE


> On 10/11/07, manju arumugam <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> >
> > In my site when am update the edit user info page its
> > prperly works in firefox .But not works in IE...But
> > the updation takes place in the database but the page
> > is not displayed...Wats the reason?
> 
> You should go to a web site dedicated to the language that your web
> app is written in.
> 
> If you're writing in php, I'd suggest phpbuilder.com
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
> 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] DB errors

2007-10-12 Thread Akash Garg
We had a crash of one of our db systems last night.  After doing an fsck of
the file system and getting the db back up, we're getting a lot of these
messages in our logs.  The DB will also occasionally crash now.

Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDTERROR:  could
not access status of transaction 2259991368
Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDTDETAIL:  Could
not read from file "pg_clog/086B" at offset 73728: Success.

Any ideas?

Thanks,
Akash


Re: [GENERAL] not work in IE

2007-10-12 Thread Scott Marlowe
On 10/11/07, manju arumugam <[EMAIL PROTECTED]> wrote:
> Hi,
>
>
> In my site when am update the edit user info page its
> prperly works in firefox .But not works in IE...But
> the updation takes place in the database but the page
> is not displayed...Wats the reason?

You should go to a web site dedicated to the language that your web
app is written in.

If you're writing in php, I'd suggest phpbuilder.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] different date-time in base and in system

2007-10-12 Thread Richard Huxton

Alexander Kuprijanov wrote:


$ date; echo 'select CURRENT_TIME;' | psql MyBase
Fri Oct 12 14:51:10 MSD 2007
       timetz
--------------------
 10:51:11.031388+00


I'm currently at a +1 hour offset from UTC, so:

=> set time zone 'UTC';
SET
=> SELECT current_timestamp;
              now
-------------------------------
 2007-10-12 14:33:28.258005+00
(1 row)

=> reset time zone;
RESET
=> SELECT current_timestamp;
              now
------------------------------
 2007-10-12 15:33:34.17609+01
(1 row)

> Can you please help, what I must do to correct it
>
> As I understand this is tomezone issue...

See the manuals - the A-Z index has lots of entries for time-zones 
including Ch 8.5.3, 43.48, 43.49


In particular this may be useful:

=> SELECT * FROM pg_timezone_names WHERE abbrev='MSD';
     name      | abbrev | utc_offset | is_dst
---------------+--------+------------+--------
 Europe/Moscow | MSD    | 04:00:00   | t
 W-SU          | MSD    | 04:00:00   | t
(2 rows)

=> set timezone = 'Europe/Moscow';
SET
=> SELECT current_timestamp;
              now
-------------------------------
 2007-10-12 18:37:54.774167+04
(1 row)

HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] different date-time in base and in system

2007-10-12 Thread Alexander Kuprijanov
Hello

I see this problem:

$ date; echo 'select CURRENT_TIME;' | psql MyBase
Fri Oct 12 14:51:10 MSD 2007
       timetz
--------------------
 10:51:11.031388+00
(1 row)



Can you please help; what must I do to correct it?

As I understand, this is a timezone issue...

Where can I read about my issue?


Thanks

-- 
Sidi kiel muso sub balailo

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

As checkpoints are issued, old WAL files will either be recycled
(renamed) or deleted. Change the setting, then try (as user postgres)
issuing a "vacuum full " followed by "SELECT pg_switch_xlog()" that
should recycle them quicker. That'll only get you down to 3 or 4 files
though - that's the minimum.


Nothings changed yet, maybe I need to start using the db again before I
something real happens...


The timestamps on the WAL files should be updated if they get recycled. 
Check those.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Need advice on keeping backup database up to date

2007-10-12 Thread Bill Moran
In response to Matthew Wilson <[EMAIL PROTECTED]>:

> 
> I have been using postgresql for my web application for a while now and
> it has been great.
> 
> I want to set up a separate emergency failover server and database in a
> different data center.
> 
> In the event that my primary data center becomes inaccessible, I want to
> update a DNS record and then redirect visitors to the backup data
> center.
> 
> I am trying to figure out how I can keep the postgresql database in the
> backup data center as up to date as possible.
> 
> The ideal solution would keep the backup database updated in real time
> as the primary database changes.
> 
> I need advice about the best way to accomplish this.

This sounds like a textbook case for Slony.

Slony will not guarantee that your database is up to the second copy,
but that's part of the beauty of it.  The _only_ way you can guarantee
that two databases in different datacenters are perfectly synchronized
at all times is not to let an application move forward until it has
received confirmation from both databases that a transaction has
completed -- and doing that will absolutely kill performance.

Slony will make a best effort.  If traffic is low, it will keep the
two within a few fractions of a second of each other.  If traffic
gets busy, the backup will get behind, but when things slow down
again, Slony will get them caught up.  As long as your average
database traffic does not exceed the available bandwidth, all will
be well.

Slony will also allow you pick/choose which tables you want to
duplicate.  This can optimize things, as it's not normally worthwhile
to replicate things like session tables, and they usually eat up a
lot of bandwidth.

http://www.slony.info

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Need advice on keeping backup database up to date

2007-10-12 Thread Matthew Wilson

I have been using postgresql for my web application for a while now and
it has been great.

I want to set up a separate emergency failover server and database in a
different data center.

In the event that my primary data center becomes inaccessible, I want to
update a DNS record and then redirect visitors to the backup data
center.

I am trying to figure out how I can keep the postgresql database in the
backup data center as up to date as possible.

The ideal solution would keep the backup database updated in real time
as the primary database changes.

I need advice about the best way to accomplish this.

Thanks in advance.


Matt


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ORDER BY - problem with NULL values

2007-10-12 Thread Tommy Gildseth

Stefan Schwarzer wrote:

Hi there,

if I order a given year in DESCending ORDER, so that the highest 
values (of a given variable) for the countries are displayed at the 
top of the list, then actually the NULL values appear as first. Only 
below, I find the values ordered correctly.


Is there any way to 


   a) make the countries with NULL values appear at the bottom of the list


... ORDER BY y_2000 IS NULL [DESC], y_2000;
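
e.g. a small sketch (table and column names are guesses from your
description):

  SELECT country, y_2000
  FROM your_table
  ORDER BY y_2000 IS NULL, y_2000 DESC;

The NULLs sort last here because false sorts before true.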

   b) neglect the NULL values by still allowing the countries to be 
displayed


Not quite sure what you mean by this.


--
Tommy Gildseth


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:

OK, so that's 7 files?
What is your checkpoint_segments setting in postgresql.conf? If it's 3
then you can indeed expect a maximum of 7 WAL files. For a low-activity
system you can always turn the setting down.


The setting was not set, so I expect that it used the default value of 3.
I tried setting it to 1, and restarted the server, but nothing happened
with the directories. Is there something that triggers the cleaning of it?

Another thing, I dont understand how a setting of 3 leads to 7 files since
one segment is 16MB, 3 segments should be be 3 files i.e. 48MB not 113MB.
Am I misunderstanding something?


This parameter actually controls when a checkpoint is forced. 
Indirectly, that affects the number of WAL files kept around. It's 
actually (2*checkpoint_segments + 1). For those that are interested it's

"XLOGfileslop" in backend/access/transam/xlog.c.

As checkpoints are issued, old WAL files will either be recycled 
(renamed) or deleted. Change the setting, then try (as user postgres) 
issuing a "vacuum full " followed by "SELECT pg_switch_xlog()" that 
should recycle them quicker. That'll only get you down to 3 or 4 files 
though - that's the minimum.
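
i.e. as a rough sketch:

  VACUUM FULL;
  SELECT pg_switch_xlog();
  CHECKPOINT;

pg_switch_xlog() forces a switch to a new WAL segment, and the explicit
CHECKPOINT gives the old segments a chance to be recycled or removed.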



Also the base/ directory is 268MB big and still not many tables, is ti
possible to purge that directory as well?.

You might find the "pg_xxx_size()" functions useful - they are detailed
in the functions/system-admin chapter of the manuals.


Can you be a bit more specific? I tried looking for it, but the sections
are too big to just browse unknown information.


Try looking again, perhaps?
Go to the online manuals, go to the section on "functions and operators" 
then to the section on "system administration functions". The first 
occurrence of the word "size" on that page is a heading for the bit you need.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] how to truncate/purge the pg_xlog directory?

2007-10-12 Thread tfinneid
> [EMAIL PROTECTED] wrote:
>
> OK, so that's 7 files?
> What is your checkpoint_segments setting in postgresql.conf? If it's 3
> then you can indeed expect a maximum of 7 WAL files. For a low-activity
> system you can always turn the setting down.

The setting was not set, so I expect that it used the default value of 3.
I tried setting it to 1, and restarted the server, but nothing happened
with the directories. Is there something that triggers the cleaning of it?

Another thing: I don't understand how a setting of 3 leads to 7 files, since
one segment is 16MB; 3 segments should be 3 files, i.e. 48MB, not 113MB.
Am I misunderstanding something?

>> Also the base/ directory is 268MB big and still not many tables, is ti
>> possible to purge that directory as well?.
>
> You might find the "pg_xxx_size()" functions useful - they are detailed
> in the functions/system-admin chapter of the manuals.

Can you be a bit more specific? I tried looking for it, but the sections
are too big to just browse unknown information.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] [OT] xkcd - A webcomic of romance, sarcasm, math, and language

2007-10-12 Thread John D. Burger

DB-related humor:

  http://xkcd.com/327/

- John D. Burger
  MITRE

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] replicating to a stopped server

2007-10-12 Thread Richard Huxton

Joao Miguel Ferreira wrote:

Hello,

I have a 'strange' situation:

I need to make a replica copy of my database to a reduntant

spare
computer.

The reduntant computer is not running postgres, but postgres is

installed. The redundant computer is running in a special
run-level (I'm
talking Linux here) in which Pg is _not_ running.

When the primary computer crashes the redundant one will be

rebooted in
'normal' mode and Postgres must be started with the databases
from the
replica.


You could use WAL archiving, but you'll want a fairly regular 
full-backup of PG's files otherwise the recovery could take a long time. 
See Ch 23 of the manuals for details.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Missing files under pg_data following power failure

2007-10-12 Thread Scott Marlowe
On 10/12/07, Jack Orenstein <[EMAIL PROTECTED]> wrote:
> Our testing involves cutting power to machines running postgres 7.4,
> while under load (lots of reads and writes). When we do this, we often
> lose some files under pg_data storing table content. I.e., the file
> named for a table's pg_class.oid value is simply gone. This can affect
> many tables following a power outage. We know this problem has
> occurred when we are unable to access a table, e.g.
>
>  ERROR: relation "xxx" does not exist
>
> The table is present in the system tables, but the file storing table
> content under pg_data is missing.
>
> Can anyone provide insight on this phenomenon? Why are these files
> lost? Are they really lost, or have they simply moved somewhere? What
> happens to the disk blocks formerly occupied by the files?
>
> Getting back in service following this file loss is not a problem; I'm
> just trying to understand how postgres gets into this state.

First of all, this should not happen on a machine with proper
fsyncing.  The possible causes are generally either fsync is off in
postgresql.conf or the drive array <--> OS layer is lying about fsync
operations.

The most common hardware cause is IDE / SATA drives / controllers that
do not enforce fsync, but instead fsync when the data is written to
drive / controller buffer memory and continue on.  On IDE / SATA
drives you can often fix this by turning off the cache.
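
On Linux with IDE disks that is typically something like (device name is
only an example):

  hdparm -W 0 /dev/hda

SATA/SCSI disks and RAID controllers may need their own tools to do the
same thing.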

The best way to work with this is to get a known reliable battery
backed caching RAID controller which will make the system fast as well
as reliable.  LSI, Areca, and Escalade are all known to make reliable
controllers.  Late model Adaptecs have gotten some good reports also,
but their earlier controllers were terrible.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Missing files under pg_data following power failure

2007-10-12 Thread Jack Orenstein

Our testing involves cutting power to machines running postgres 7.4,
while under load (lots of reads and writes). When we do this, we often
lose some files under pg_data storing table content. I.e., the file
named for a table's pg_class.oid value is simply gone. This can affect
many tables following a power outage. We know this problem has
occurred when we are unable to access a table, e.g.

ERROR: relation "xxx" does not exist

The table is present in the system tables, but the file storing table
content under pg_data is missing.

Can anyone provide insight on this phenomenon? Why are these files
lost? Are they really lost, or have they simply moved somewhere? What
happens to the disk blocks formerly occupied by the files?

Getting back in service following this file loss is not a problem; I'm
just trying to understand how postgres gets into this state.

Jack Orenstein



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] ORDER BY - problem with NULL values

2007-10-12 Thread Nico Sabbi

Stefan Schwarzer wrote:

From 8.3 beta release notes:
- ORDER BY ... NULLS FIRST/LAST

I think this is what you want right?


Yes, indeed. Sounds great. unfortunately I am on 8.1. And wouldn't 
really want to migrate to 8.3 and beta for the moment




order by 1 ?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Query problem

2007-10-12 Thread Pavel Stehule
2007/10/12, Clemens Schwaighofer <[EMAIL PROTECTED]>:
> hi,
>
> thats the first time I am a bit confused by a query not working.
>
> I have this table:
>
> gullevek=# \d test
>  Table "public.test"
>  Column  |   Type|   Modifiers
> -+---+
>  test_id | integer   | not null default
> nextval('test_test_id_seq'::regclass)
>  email_a | character varying |
>  email_b | character varying |
> Indexes:
> "test_pkey" PRIMARY KEY, btree (test_id)
>
> with this content:
>
> gullevek=# select * from test;
>  test_id |email_a|   email_b
> -+---+-
>2 | [EMAIL PROTECTED] | [EMAIL PROTECTED]
>1 | [EMAIL PROTECTED]   |
> (2 rows)
>
> if I do this select:
>
> select * from (select test_id, email_a FROM test WHERE email_a =
> '[EMAIL PROTECTED]') as s, (select test_id, email_b from test where email_b =
> '[EMAIL PROTECTED]') as t;
>

you get 0 rows. [EMAIL PROTECTED] isn't anywhere, and [EMAIL PROTECTED] cannot
pair with anything, so 0 * 1 = 0.
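
if you really want one row back whichever address matches, one way (just a
sketch, reusing the addresses from your example, and assuming each address
appears at most once) is to use scalar subqueries:

select (select test_id from test where email_a = '[EMAIL PROTECTED]') as id_a,
       (select test_id from test where email_b = '[EMAIL PROTECTED]') as id_b;

each subquery returns NULL when there is no match, so you always get
exactly one row.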

Pavel

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Richard Huxton

Relyea, Mike wrote:

I tried posting this to the OBC list but didn't get any responses, so
I'm trying my luck here.
 
I'm receiving an out of memory error when trying to execute a long, 
complex query involving many temp tables.  I'm using psqlODBC ver 
8.02.0500 and PG 8.2.4 on WinXP.  I'm trying to execute this query via
a pass-through query in MS Access 2003.  If I execute this query in 
PGAdmin III it runs without any errors and returns no records.


Hmm. So what's different about the queries?


My connection string in Access is:
ODBC; DRIVER={PostgreSQL Unicode}; SERVER=myServer; DATABASE=iqdb; 
PORT=5432; UID=myUserName; PWD=xxx; B9=0; B7=1; B3=1; C8=1


I've attached the log file which contains the SQL I'm trying to 
execute and the error details.  I've tried setting UseDeclareFetch = 1



but it didn't resolve the error.  Any suggestions?


[145.188]ERROR from backend during send_query: 'SERROR'
[145.188]ERROR from backend during send_query: 'C53200'
[145.188]ERROR from backend during send_query: 'Mout of memory'
[145.188]ERROR from backend during send_query: 'DFailed on request of 
size 16.'

[145.188]ERROR from backend during send_query: 'Faset.c'
[145.188]ERROR from backend during send_query: 'L712'
[145.188]ERROR from backend during send_query: 'RAllocSetAlloc'

OK, so this seems to be a server-side error, which means something 
should be in the server logs. Is there anything?


Oh, and I'd probably split that query into about a dozen smaller ones - 
one per statement.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Query problem

2007-10-12 Thread Clemens Schwaighofer
hi,

that's the first time I am a bit confused by a query not working.

I have this table:

gullevek=# \d test
 Table "public.test"
 Column  |   Type|   Modifiers
-+---+
 test_id | integer   | not null default
nextval('test_test_id_seq'::regclass)
 email_a | character varying |
 email_b | character varying |
Indexes:
"test_pkey" PRIMARY KEY, btree (test_id)

with this content:

gullevek=# select * from test;
 test_id |email_a|   email_b
-+---+-
   2 | [EMAIL PROTECTED] | [EMAIL PROTECTED]
   1 | [EMAIL PROTECTED]   |
(2 rows)

if I do this select:

select * from (select test_id, email_a FROM test WHERE email_a =
'[EMAIL PROTECTED]') as s, (select test_id, email_b from test where email_b =
'[EMAIL PROTECTED]') as t;

I get no rows back. Should it return something? Subquery a doesn't
return anything, because the email does not exist; b returns something.

How do I have to make the query so that I get a result back, and in one row?

-- 
[ Clemens Schwaighofer  -=:~ ]
[ TEQUILA\ Japan IT Group]
[6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN ]
[ Tel: +81-(0)3-3545-7703Fax: +81-(0)3-3545-7343 ]
[ http://www.tequila.co.jp   ]


signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] replicating to a stopped server

2007-10-12 Thread Erik Jones

On Oct 12, 2007, at 1:59 PM, Richard Huxton wrote:


Joao Miguel Ferreira wrote:

Hello,
I have a 'strange' situation:
I need to make a replica copy of my database to a  
reduntant

spare
computer.
The reduntant computer is not running postgres,  
but postgres is

installed. The redundant computer is running in a special
run-level (I'm
talking Linux here) in which Pg is _not_ running.
When the primary computer crashes the redundant  
one will be

rebooted in
'normal' mode and Postgres must be started with the databases
from the
replica.


You could use WAL archiving, but you'll want a fairly regular full- 
backup of PG's files otherwise the recovery could take a long time.  
See Ch 23 of the manuals for details.


Are you restricted to keep that second server in that special run- 
level?  If not, I'd consider using pg_standby with WAL archiving to  
keep your failover server at most a handful of minutes behind.
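
In recovery.conf that is roughly (path is only an example; check the
pg_standby docs for the exact options):

  restore_command = 'pg_standby /mnt/server/archivedir %f %p'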


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] disjoint union types

2007-10-12 Thread Sam Mason
On Wed, Oct 10, 2007 at 12:10:10PM -0500, Erik Jones wrote:
> On Oct 10, 2007, at 11:42 AM, Sam Mason wrote:
>> On Wed, Oct 10, 2007 at 05:02:36PM +0100, Ian Barber wrote:
>>> CREATE TABLE shapes (
>>>  shape_id serial PRIMARY KEY,
>>>  area real not null
>>> );
>>>
>>> CREATE TABLE circle (
>>>  radius real not null
>>> ) INHERITS (shapes);
>>
>> Postgres implements inheritance in a strange way (the way it is at the
>> moment makes sense from an implementation, but not users', point of
>> view), you can end up with a circle and square both with shape_id=1 if I
>> don't take a lot of care.
>
> It doesn't take much care at all to avoid that:  don't use SERIAL for the 
> primary key of the parent.  Instead use an explicity "id integer NOT NULL 
> DEFAULT nextval('some_seq'), that way all of the child tables will use the 
> same sequence as the parent.  That being said, I'm not convinced that table 
> inheritance is what's needed here.  I'll wait until you post the example 
> you mention below before commenting further.

The main problem I have with inheritance is that the unique constraints
are only checked per-table.  If I have:

  CREATE TABLE a ( id INTEGER NOT NULL PRIMARY KEY );
  CREATE TABLE b () INHERITS (a);
  INSERT INTO a VALUES (1);

I can run this without anything having any complaints.

  INSERT INTO b
SELECT * FROM a;

If I'm careful about getting data into the database then this isn't a
problem, but, to me, this is exactly the thing I shouldn't have to be
careful about because it's the database's job to keep track of this.  As
I said before, I understand, from an implementation point of view, why
it has this behaviour; it just isn't very nice from a users'.


The shape example I used before was me attempting to simplify the
problem, I think I took things too far.  My original motivation for
wanting to do this sort of thing was a (sort of) stock control problem.
I've got various types of items that I want to store information about.
I want to treat the union of these different types in a uniform manner,
for example I keep track of whether they are in their canonical location
or are "checked out" and in use.

To do this I had something like:

  CREATE TABLE books ( id SERIAL PRIMARY KEY, name TEXT );
  CREATE TABLE computer ( id SERIAL PRIMARY KEY, name TEXT );

  CREATE TABLE stock (
id SERIAL PRIMARY KEY,
tag INTEGER NOT NULL,
bookid INTEGER REFERENCES books
  CHECK ((tag = 1) = (bookid IS NOT NULL)),
computerid INTEGER REFERENCES computer
  CHECK ((tag = 2) = (computerid IS NOT NULL)),
barcode TEXT UNIQUE
  );

This, for example, allows me to rely on the database to check that the
barcode uniquely identifies each piece of equipment.  It also doesn't
require touching more than is needed when scanning for the actual item's
detail as the tag is there to specify where to look.  We've recently
added barcodes to non-"stock" items and it took me a while to realise
that a similar rotation of things allows me to store a single barcode
in a similar way.  For example, I could drop the barcode from the stock
table and add a table like:

  CREATE TABLE barcodes (
id SERIAL PRIMARY KEY,
tag INTEGER NOT NULL,
stockid INTEGER REFERENCES stock
  CHECK ((tag = 1) = (stockid IS NOT NULL)),
plateid INTEGER REFERENCES testplates
  CHECK ((tag = 2) = (plateid IS NOT NULL)),
barcode TEXT UNIQUE
  );

and I can get back to wherever I want to.  The annoyance is that it's
a bit of a fiddle, schema wise, to do this translation.  So that's my
reason for asking if there were other ways of doing this sort of thing.


  Sam

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Coercing compound types to use generic ROW comparison operators

2007-10-12 Thread Merlin Moncure
On 10/11/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Randall Lucas <[EMAIL PROTECTED]> writes:
> > Still, this would fail in a nested situation because it wouldn't
> > recurse (if col1 of the compound type were another compound type,
> > ferinstance), as would your suggestion above.  It might be worthwhile
> > to allow choosing to use the default ROW comparison operator at
> > composite type creation (which would provide a more elegant solution to
> > nested situations).
>
> You are incorrectly supposing that there *is* such an animal as a
> default row comparison operator --- actually, ROW() = ROW() is expanded
> at parse time into field-by-field comparisons.  This is usually a good
> thing since it gives the planner more flexibility.

AIUI, the biggest problem with the current behavior is that there is
no way to usefully index composite types, it looks like

create index bar_idx on bar(f);
create index bar_idx on bar((f).*);
create index bar_idx on bar((f).a, (f).b);

are all invalid. the only way to do it that i can see is to create a
separate function for each field of the composite you want to index.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Relyea, Mike
> From: Richard Huxton [mailto:[EMAIL PROTECTED] 
> 
> Relyea, Mike wrote:
>> If I execute this query in PGAdmin III it runs without any errors and
returns no records.
> 
> Hmm. So what's different about the queries?
> 


Nothing.  The SQL is identical.  I copied out of the log file and pasted
into PGAdmin.



> [145.188]ERROR from backend during send_query: 'SERROR'
> [145.188]ERROR from backend during send_query: 'C53200'
> [145.188]ERROR from backend during send_query: 'Mout of memory'
> [145.188]ERROR from backend during send_query: 'DFailed on 
> request of size 16.'
> [145.188]ERROR from backend during send_query: 'Faset.c'
> [145.188]ERROR from backend during send_query: 'L712'
> [145.188]ERROR from backend during send_query: 'RAllocSetAlloc'
> 
> OK, so this seems to be a server-side error, which means 
> something should be in the server logs. Is there anything?

I've pasted below what I found immediately before the error.

> 
> Oh, and I'd probably split that query into about a dozen 
> smaller ones - one per statement.

What do you mean one per statement?  One per transaction?






TopMemoryContext: 475184 total in 11 blocks; 12016 free (27 chunks);
463168 used
Local Buffer Lookup Table: 8192 total in 1 blocks; 1776 free (0 chunks);
6416 used
TopTransactionContext: 122880 total in 4 blocks; 14064 free (5 chunks);
108816 used
Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks);
6392 used
MessageContext: 1946198040 total in 258 blocks; 26624 free (43 chunks);
1946171416 used
JoinRelHashTable: 516096 total in 6 blocks; 169496 free (9 chunks);
346600 used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
JoinRelHashTable: 8192 total in 1 blocks; 4864 free (0 chunks); 3328
used
Operator class cache: 8192 total in 1 blocks; 4872 free (0 chunks); 3320
used
smgr relation table: 24576 total in 2 blocks; 16080 free (4 chunks);
8496 used
TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks);
16 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used
Relcache by OID: 8192 total in 1 blocks; 1816 free (0 chunks); 6376 used
CacheMemoryContext: 1183288 total in 20 blocks; 378352 free (1 chunks);
804936 used
MMCommonMeasurementsID_idx: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
MMColorID_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
tblManualMeasurements_pkey: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
tblTestResultsComments_pkey: 1024 total in 1 blocks; 392 free (0
chunks); 632 used
tblTestTypes_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
tblZones_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
PrintersModelID: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
tblPrinters_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
tblSuppliers_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
CIMachineID_idx: 1024 total in 1 blocks; 352 free (0 chunks); 672 used
CIMachineIDColorID_idx: 1024 total in 1 blocks; 328 free (0 chunks); 696
used
CIColorID_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
tblCartridgeInfo_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632
used
CMMachineID_idx: 1024 total in 1 blocks; 352 free (0 chunks); 672 used
CMMachineIDPrintCopyID_idx: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
tblCommonMeasurements_pkey: 1024 total in 1 blocks; 392 free (0 chunks);
632 used
pg_toast_101745681_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745676_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745671_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745666_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745661_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745654_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745649_index: 1024 total in 1 blocks; 328 free (0 chunks);
696 used
pg_toast_101745644_index: 1024 total in 1 blocks; 328 free (

Re: [GENERAL] Out of memory with ODBC

2007-10-12 Thread Relyea, Mike
> From: Richard Huxton [mailto:[EMAIL PROTECTED] 
> Relyea, Mike wrote:
> > This seems to be a problem with the ODBC driver?  How can I narrow 
> > that down further?
> 
> Well, first make 100% certain the query being executed is the 
> one you see being sent from Access.
> 
> If so, the next obvious thing would be to set up an empty 
> database with the same structure as your live one and try the 
> query on that. If *that* crashes too then you can rule out 
> any data processing problems.
> 
> It will also let you turn statement logging on in that 
> database (ALTER DATABASE SET ...) and capture everything the 
> ODBC driver sends. There might be something that leaps out at 
> you. Take all of those statements and put them into a 
> text-file and run them using psql -f <file>. That should
> cause the same crash.
> 
> If it does, it means you have a self-contained test-case that 
> someone else can look at for you. We can also test it on 
> Linux/BSD etc.
> 
> Then, if you still can't see where the problem is, replace 
> the ODBC driver with a different (in your case older) version 
> and see if you still get the problem. Might be worth going 
> back a few versions too, to see if this is something 
> introduced recently.
> 
> http://www.postgresql.org/ftp/odbc/versions/msi/
> 
> Another option would be to try the odbc-ng project from 
> Command Prompt and see if that does the same thing. I believe 
> that's a completely separate code-base.
> 
> https://projects.commandprompt.com/public/odbcng/wiki/Downloads
> 
> 
> Remember, *something* in the sequence of commands that get 
> executed from Access must be different than when you execute 
> them through pgAdmin.

Thanks.  I'll try that and see what happens.
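
As a concrete illustration of the statement-logging step suggested above,
the per-database setting might look roughly like this (the database name
is hypothetical):

ALTER DATABASE testdb SET log_statement = 'all';
-- and once the capture is done:
ALTER DATABASE testdb RESET log_statement;

Every statement from new sessions on that database then shows up in the
server log, ready to be copied into a text file and replayed with psql -f.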

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] DB errors

2007-10-12 Thread Scott Marlowe
On 10/12/07, Akash Garg <[EMAIL PROTECTED]> wrote:
> We had a crash of one of our db systems last night.  After doing an fsck of
> the file system and getting the db back up, we're getting a lot of these
> messages in our logs.  The DB will also occasionally crash now.
>
> Oct 12 07:40:16 postgres[30770]: [3-1] 2007-10-12 07:40:16 PDTERROR:  could
> not access status of transaction 2259991368
> Oct 12 07:40:16 postgres[30770]: [3-2] 2007-10-12 07:40:16 PDTDETAIL:  Could
> not read from file "pg_clog/086B" at offset 73728: Success.

Got a recent backup?

It sounds like your db is corrupted, and while recovering some of the
data in it might be possible, it would be better to start over with a
clean backup.

Note that if your database acts like this after a crash, you are
likely running with unreliable fsyncs on your system (either fsync was
turned off in postgresql.conf or your hardware lies to the OS).  That's
bad.  A properly built postgresql server should be capable of
recovering from this with no corruption.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Query problem

2007-10-12 Thread Clemens Schwaighofer


On 2007/10/12, at 23:22, Pavel Stehule wrote:





you get 0 rows. [EMAIL PROTECTED] isn't anywhere, and [EMAIL PROTECTED] cannot
pair with anything, so 0 * 1 = 0.


Thanks, got it now. Basically very simple.

I probably need to approach the problem differently, so both are read
"independently" or something like this.


--
[ Clemens Schwaighofer  -=:~ ]
[ TEQUILA\ Japan IT Engineer ]
[6-17-2 Ginza Chuo-ku, Tokyo 104-0061, JAPAN ]
[ Tel: +81-(0)3-3545-7703Fax: +81-(0)3-3545-7343 ]
[ http://www.tequila.co.jphttp://www.tbwajapan.co.jp ]



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] replicating to a stopped server

2007-10-12 Thread Joao Miguel Ferreira
Hello,

I have a 'strange' situation:

I need to make a replica copy of my database to a redundant spare
computer.

The redundant computer is not running postgres, but postgres is
installed. The redundant computer is running in a special run-level
(I'm talking Linux here) in which Pg is _not_ running.

When the primary computer crashes, the redundant one will be rebooted
in 'normal' mode and Postgres must be started with the databases from
the replica.

a) So... how do I replicate a database to a stopped postgres?

b) Is it safe just to copy the /var/lib/pg/* directories to the right
place and let Pg boot on that?

c) I know the right tool for this should be 'pg_dump', but it needs a
live postgres daemon running in order to install the replica. Is this
correct?

d) Is it viable to start postgres directly from the dump, by
specifying the dump-file on the command line?


thx a lot
joao






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Detailed logging, but only for one user?

2007-10-12 Thread Richard Huxton

Michael Nolan wrote:

Is there any way to enable detailed logging (ie, at the SQL statement level)
but only for one user?


ALTER ROLE <role-name> SET <parameter> = <value>;
ALTER USER <user-name> SET <parameter> = <value>;
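
For example, enabling statement-level logging for just one role might look
like this (the role name is hypothetical, and the ALTER has to be issued by
a superuser because log_statement is a superuser-only setting):

ALTER ROLE webapp SET log_statement = 'all';
-- later, to switch it off again:
ALTER ROLE webapp RESET log_statement;

The setting takes effect for new sessions started by that role.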

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] [OT] xkcd - A webcomic of romance, sarcasm, math, and language

2007-10-12 Thread Jack Orenstein

Erik Jones wrote:

On Oct 12, 2007, at 2:40 PM, John D. Burger wrote:


DB-related humor:

  http://xkcd.com/327/


Sanitize database inputs, by all means, but also use prepared statements.

Jack Orenstein

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [OT] xkcd - A webcomic of romance, sarcasm, math, and language

2007-10-12 Thread Erik Jones

On Oct 12, 2007, at 2:40 PM, John D. Burger wrote:


DB-related humor:

  http://xkcd.com/327/


Yes, there have been many great xkcd comics, but that one should go  
down in history.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Guideline on use of temporary tables

2007-10-12 Thread Jimmy Choi
I'm looking for general guidelines on the use of temporary tables.

I would like to use a temporary table as a caching mechanism to speed up
queries within the same session. Specifically, a temporary table is
created to store a subset of data from a possibly large table, and
subsequent queries select from the temporary table instead of
re-applying the same complex filters on the actual table again and
again.
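
For illustration, the pattern being described might look roughly like this
(the table names and the filter are hypothetical):

-- cache the expensive subset once, early in the session
CREATE TEMPORARY TABLE recent_orders AS
    SELECT *
    FROM orders
    WHERE order_date > now() - interval '30 days';

-- optionally index and analyze the cache so later queries plan well
CREATE INDEX recent_orders_customer_idx ON recent_orders (customer_id);
ANALYZE recent_orders;

-- subsequent queries in the same session read the small cached table
SELECT customer_id, count(*)
FROM recent_orders
GROUP BY customer_id;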

Is this what temporary tables are designed for? Are there caveats that I
should be aware of? Can you think of any better alternatives?

Thank you very much.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] can I define own variables?

2007-10-12 Thread Andreas

Hi,

can I define connection-global variables within an ODBC connection?

Like this:
connect to the DB
set my_user_id = 42
Later a trigger would set a timestamp and the user-id when a record gets
updated.


Obviously different connections would need different variable values.

The variable should only live until the connection gets terminated.

This user-id represents users of my application, and I can't use postgres'
internal user-ids because I have the data sitting on an operational
server and another one for developing and testing, so the postgres ids
aren't necessarily consistent between the 2 server systems.
My application has its own user management and those keys are used for
referential integrity.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] can I define own variables?

2007-10-12 Thread brian

Andreas wrote:

Hi,

can I define connection-global variables within an ODBC connection?

Like this:
connect to the DB
set my_user_id = 42
Later a trigger would set a timestamp and the user-id when a record gets
updated.


Obviously different connections would need different variable values.

The variable should only live until the connection gets terminated.

This user-id represents users of my application, and I can't use postgres'
internal user-ids because I have the data sitting on an operational
server and another one for developing and testing, so the postgres ids
aren't necessarily consistent between the 2 server systems.
My application has its own user management and those keys are used for
referential integrity.




I can't remember where I got this. It was probably this list or the
General Bits newsletter [1].


CREATE OR REPLACE FUNCTION set_id(name text, val INT) RETURNS text AS $$
# store the value in plperl's per-session %_SHARED hash, keyed by name
if ($_SHARED{$_[0]} = $_[1])
{
    return 'ok';
}
else
{
    return "can't set shared variable $_[0] to $_[1]";
}
$$ LANGUAGE plperl;

-- STABLE rather than IMMUTABLE: the stored value can change within a session
CREATE OR REPLACE FUNCTION get_id(name text) RETURNS INT STABLE AS $$
# read the value back from %_SHARED for the current session
return $_SHARED{$_[0]};
$$ LANGUAGE plperl;

SELECT set_id('my_user_id', 42);

SELECT CAST(get_id('my_user_id') AS INT);

[1] http://www.varlena.com/GeneralBits/

brian

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] can I define own variables?

2007-10-12 Thread Sam Mason
On Sat, Oct 13, 2007 at 12:18:45AM +0200, Andreas wrote:
> can I define connection-global variables within an ODBC connection ?

You could use a temporary table, keyed on the variable name.  Not
particularly nice to use, but it works.  You can create accessor
functions reasonably easily if you want.
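
A rough sketch of that approach, with purely illustrative names:

-- created once at the start of each session
CREATE TEMPORARY TABLE session_vars (name text PRIMARY KEY, value text);

INSERT INTO session_vars VALUES ('my_user_id', '42');

-- later, e.g. from inside a trigger:
SELECT value FROM session_vars WHERE name = 'my_user_id';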

A real server-side programming language would be great though!


  Sam

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] can I define own variables?

2007-10-12 Thread Tom Lane
Sam Mason <[EMAIL PROTECTED]> writes:
> On Sat, Oct 13, 2007 at 12:18:45AM +0200, Andreas wrote:
>> can I define connection-global variables within an ODBC connection ?

> You could use a temporary table, keyed on the variable name.  Not
> particularly nice to use, but it works.  You can create accessor
> functions reasonably easily if you want.

Another possibility, if you only need to store and retrieve values
and not do anything especially interesting with them, is to abuse
the "custom GUC variable" facility.  This is meant to provide
placeholder parameter settings for dynamically-loaded extension
modules, but there's nothing stopping you from setting and reading
a variable that in fact will never be used by any extension module.

To do this, you need to set custom_variable_classes in postgresql.conf,
perhaps
custom_variable_classes = user_vars

and then you can do things like

SET user_vars.foo = whatever;
SHOW user_vars.bar;
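
For the trigger-stamping use case the original poster described, the value
can be read back with current_setting(); a minimal sketch, with hypothetical
table and column names, assuming SET user_vars.my_user_id was issued earlier
in the session:

CREATE OR REPLACE FUNCTION stamp_app_user() RETURNS trigger AS $$
BEGIN
    NEW.modified_at := now();
    NEW.modified_by := current_setting('user_vars.my_user_id');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widgets_stamp
    BEFORE INSERT OR UPDATE ON widgets
    FOR EACH ROW EXECUTE PROCEDURE stamp_app_user();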

Of course, this is a hack of the first water, and you should expect
that it might break sometime in the future.  But I don't think we'd
break it without providing some alternative solution.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] convert binary string to datum

2007-10-12 Thread Ron Peterson
How does one convert an octet string (e.g. something like a varlena
structure) to a Datum?  I want to create datums for use w/
heap_form_tuple in a function returning a tuple containing bytea
representations of very large integers.

TIA

-- 
Ron Peterson
https://www.yellowbank.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] can I define own variables?

2007-10-12 Thread Guy Rouillier

Tom Lane wrote:

Sam Mason <[EMAIL PROTECTED]> writes:

On Sat, Oct 13, 2007 at 12:18:45AM +0200, Andreas wrote:

can I define connection-global variables within an ODBC connection ?



You could use a temporary table, keyed on the variable name.  Not
particularly nice to use, but it works.  You can create accessor
functions reasonably easily if you want.


Another possibility, if you only need to store and retrieve values
and not do anything especially interesting with them, is to abuse
the "custom GUC variable" facility.  This is meant to provide
placeholder parameter settings for dynamically-loaded extension
modules, but there's nothing stopping you from setting and reading
a variable that in fact will never be used by any extension module.

To do this, you need to set custom_variable_classes in postgresql.conf,
perhaps
custom_variable_classes = user_vars

and then you can do things like

SET user_vars.foo = whatever;
SHOW user_vars.bar;

Of course, this is a hack of the first water, and you should expect
that it might break sometime in the future.  But I don't think we'd
break it without providing some alternative solution.


I've had several occasions when a user-defined variable would have come 
in handy.  What is the scope of user_vars as you've defined them above? 
 Are they unique to a connection?  A user_id?  Or defined globally? 
Ideally, they would be connection-unique.  One place they would come in 
really handy is with web-apps using connection pooling.  I could stuff 
the userid of the person who connected to my secure website into a 
user_var, then the database could access that to track user actions. 
Right now, that is cumbersome to do, since the current_user is always 
the same: the userid for the connection pool.


I'm actually surprised that such a common usage scenario has not forced 
connection-unique user-defined variables into the spec.


--
Guy Rouillier

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] can I define own variables?

2007-10-12 Thread Tom Lane
Guy Rouillier <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Another possibility, if you only need to store and retrieve values
>> and not do anything especially interesting with them, is to abuse
>> the "custom GUC variable" facility.

> I've had several occasions when a user-defined variable would have come 
> in handy.  What is the scope of user_vars as you've defined them above? 

They'd act like any other GUC variable.  The scope of a SET would be the
current connection, but there are other ways to set them with larger scope.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] convert binary string to datum

2007-10-12 Thread Ron Peterson
2007-10-12_22:22:32-0400 Ron Peterson <[EMAIL PROTECTED]>:
> How does one convert an octet string (e.g. something like a varlena
> structure) to a Datum?  I want to create datums for use w/
> heap_form_tuple in a function returning a tuple containing bytea
> representations of very large integers.

Is this a legitimate/blessed way to go about it?

aval = (bytea *)palloc( len + VARHDRSZ );
VARATT_SIZEP(aval) = len + VARHDRSZ;
memcpy( VARDATA(aval), myrawdata, len );

values[0] = PointerGetDatum(aval);

...etc

tuple = heap_formtuple( tupdesc, values, &isNull );

-- 
Ron Peterson
https://www.yellowbank.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] contrib / fuzzystr documentation

2007-10-12 Thread brien colwell
hi all,

Is there documentation for the contrib / fuzzystr module? I haven't been
able to find it ...

Thanks!


Re: [GENERAL] convert binary string to datum

2007-10-12 Thread Gregory Stark
"Ron Peterson" <[EMAIL PROTECTED]> writes:

> Is this a legitimate/blessed way to go about it?
>
> aval = (bytea *)palloc( len + VARHDRSZ );
> VARATT_SIZEP(aval) = len + VARHDRSZ;
> memcpy( VARDATA(aval), myrawdata, len );
> values[0] = PointerGetDatum(aval);
> ...etc
> tuple = heap_formtuple( tupdesc, values, &isNull );

Yes, assuming that your tuple descriptor there does in fact have a varlena
data type in column 1. And normally you would define your own datatype and not
use bytea. Personally I'm not entirely clear why we don't just use void* for
text and bytea though.

Postgres 8.3 has a different macro API here though. If you want to
future-proof your code you could do the following (put the macro definition
somewhere in your private header file after including postgres.h):

#ifndef SET_VARSIZE
#define SET_VARSIZE(v,l) (VARATT_SIZEP(v) = (l))
#endif

aval = (bytea *)palloc( len + VARHDRSZ );
SET_VARSIZE(aval, len + VARHDRSZ);
memcpy( VARDATA(aval), myrawdata, len );
values[0] = PointerGetDatum(aval);
...etc
tuple = heap_formtuple( tupdesc, values, &isNull );

Also, make sure you use VARSIZE to refer to the size header at all times;
don't refer to it directly. And unless you mark it with storage PLAIN, always
detoast it before working with an argument or anything from heap_deform_tuple.
In postgres we normally put pg_detoast_datum() directly into the DatumGetFoo()
and PG_GETARG_FOO_P() macros.


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly