Re: [GENERAL] Exception 0xC0000005

2009-09-25 Thread Craig Ringer

On 25/09/2009 12:45 AM, Grzegorz Jaśkiewicz wrote:



On Thu, Sep 24, 2009 at 5:34 PM, Ms swati chande <swat...@yahoo.com> wrote:

 >SIGSEGV in micro$oft world.
Thanks for your response.
How can this be resolved?

Just like others said, upgrade to 8.4.1 first.


If you still have the problem once you're running 8.4.1, *then* one of 
the options you have is to obtain some more information about how the 
crash occurs, though it's not trivial to do so. See:


http://wiki.postgresql.org/index.php?title=Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows

Alternately, and probably more easily, you can produce a self-contained 
test case - in this case, probably a .sql file that when run on a newly 
created blank database will result in the crash.
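
For example, such a script might look roughly like this (a sketch only; the
table and data below are placeholders, not taken from the original report).
It should run against a freshly created, blank database and end with the
statement that triggers the crash:

-- hypothetical skeleton of a self-contained crash test case
CREATE TABLE crash_demo (id integer PRIMARY KEY, payload text);
INSERT INTO crash_demo
    SELECT g, repeat('x', 100) FROM generate_series(1, 1000) AS g;
-- ... the query or command that reproduces the crash goes here ...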


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Oliver Kohll - Mailing Lists

On 25 Sep 2009, at 07:22, InterRob  wrote:


I guess it IS quite overengineered indeed...

What I'm trying to do is to facilitate different fieldwork  
methodologies for archaeological research (on project basis); there  
is no final agreement on data structure and semantics; however, on a  
meta-level all choices are rational and can be modelled... In fact,  
all models can be related to each other: that's where the "hybrid"  
part comes in: I wish to implement the common denominator (90%) and  
then further extend this, enabling specific data model  
implementations -- including checks for data integrity.



Hi Rob,

Just wondering if you've considered rapid prototyping of the core of  
it to try and gain consensus by giving people something they can see  
and talk about, as an alternative to doing a lot of design work up  
front?


Regards
Oliver Kohll
www.gtwm.co.uk - company
www.gtportalbase.com - product



Re: [GENERAL] Looking for way to replicate master db to multiple mobile databases

2009-09-25 Thread Cédric Villemain
On Friday, 25 September 2009, Selena Deckelmann wrote:
> On Thu, Sep 24, 2009 at 8:09 AM, Cédric Villemain wrote:
> > On Thursday, 24 September 2009, Selena Deckelmann wrote:
> >> Hi!
> >>
> >> On Wed, Sep 23, 2009 at 10:11 AM, Bryan Montgomery wrote:
> >> > Hi,
> >> > I'm looking for a way to replicate a master database to multiple
> >> > (100+) databases that are taken into the field. Currently, for each
> >> > laptop we dump and load the tables. However, there is only a small
> >> > percentage of data that changes on a frequent basis.
> >> >
> >> > I've been looking around and come across pyerplica, londiste and
> >> > bucardo - the documentation on most of these is fairly sparse. It
> >> > seems that Bucardo may be the best bet - at least initially.
> >>
> >> Bucardo is a good choice for this usage model because it was
> >> originally designed to work over lossy network connections.
> >
> > yes, but isn't bucardo designed for 2 nodes only?
> 
> No, definitely not!  You can replicate to any number of systems. And
> you can group them in whatever groups you'd like.  Multi-master (as
> Joshua said) only works between two nodes, but master->slave can be
> from a master, to any number of slaves.
> 

Ah! thank you for clarifying that.


Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org




[GENERAL] Delphi connection ?

2009-09-25 Thread Nico Callewaert
Hi List,

I tried already in the ODBC list, but without success...

I have to develop a client/server application, heavily transaction oriented, 
that will serve around 100 users, and database traffic will be intense (lots of 
reads, updates and inserts).
The development environment is Delphi 2007.  I know there are a few commercial 
components available, but I was wondering if ODBC could do the job?
So, my question is: is ODBC intended to be used for that?  Many simultaneous 
connections and lots of inserts and updates?
The thing you always hear about ODBC is that it is very slow?

Many thanks in advance,
Best regards,

Nico Callewaert

Re: [GENERAL] About logging

2009-09-25 Thread S Arvind
OK, thanks Magnus for saving my time in googling.

-Arvind S

"Many of life's failures are people who did not realize how close they were to
success when they gave up."
-Thomas Edison


On Fri, Sep 25, 2009 at 12:39 AM, Magnus Hagander wrote:

> On Thu, Sep 24, 2009 at 21:06,   wrote:
> > Is it possible to log two different kinds of information to two different files?
> > Basically, I need to log all the mod statements in one CSV log file and all
> > the queries running more than 2 minutes in another CSV log file. As I enabled
> > both, it will be doing both in a single file, right? Is there any way to split
> > both?
>
> This is not currently possible inside PostgreSQL; you will need to do
> external post-processing of the logfile to get that.
>
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/
>


[GENERAL] ERROR: invalid input syntax for integer: ""

2009-09-25 Thread Rakotomandimby Mihamina

Hi all,
I get invalid input syntax when UPDATEing my radacct table:

freedsl=# \d radacct
Table "public.radacct"
        Column        |           Type           |                         Modifiers
----------------------+--------------------------+--------------------------------------------------------------
 radacctid            | bigint                   | not null default nextval('radacct_radacctid_seq'::regclass)
 acctsessionid        | character varying(32)    | not null
 acctuniqueid         | character varying(32)    | not null
 username             | character varying(253)   |
 groupname            | character varying(253)   |
 realm                | character varying(64)    |
 nasipaddress         | inet                     | not null
 nasportid            | character varying(15)    |
 nasporttype          | character varying(32)    |
 acctstarttime        | timestamp with time zone |
 acctstoptime         | timestamp with time zone |
 acctsessiontime      | bigint                   |
 acctauthentic        | character varying(32)    |
 connectinfo_start    | character varying(50)    |
 connectinfo_stop     | character varying(50)    |
 acctinputoctets      | bigint                   |
 acctoutputoctets     | bigint                   |
 calledstationid      | character varying(50)    |
 callingstationid     | character varying(50)    |
 acctterminatecause   | character varying(32)    |
 servicetype          | character varying(32)    |
 xascendsessionsvrkey | character varying(10)    |
 framedprotocol       | character varying(32)    |
 framedipaddress      | inet                     |
 acctstartdelay       | integer                  |
 acctstopdelay        | integer                  |
Indexes:
"radacct_pkey" PRIMARY KEY, btree (radacctid)

The query is:


UPDATE radacct
 SET
  AcctStopTime = ('2009-09-25 10:35:44'::timestamp - '0'::interval),
  AcctSessionTime = CASE WHEN '' = '' THEN
      (EXTRACT(EPOCH FROM ('2009-09-25 10:35:44'::TIMESTAMP WITH TIME ZONE
        - AcctStartTime::TIMESTAMP WITH TIME ZONE - '0'::INTERVAL)))::BIGINT
    ELSE '' END,
  AcctInputOctets = (('0'::bigint << 32) + '0'::bigint),
  AcctOutputOctets = (('0'::bigint << 32) + '0'::bigint),
  AcctTerminateCause = '',
  AcctStopDelay = 0,
  FramedIPAddress = NULLIF('', '')::inet,
  ConnectInfo_stop = ''
 WHERE AcctSessionId = ''
  AND UserName = 'mihamina'
  AND NASIPAddress = '0.0.0.0'
  AND AcctStopTime IS NULL




I cannot find the problem
Any hints?
Thank you.

--
  Architecte Informatique chez Blueline/Gulfsat:
   Administration Systeme, Recherche & Developpement
   +261 34 29 155 34

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer: ""

2009-09-25 Thread Grzegorz Jaśkiewicz
On Fri, Sep 25, 2009 at 11:49 AM, Rakotomandimby Mihamina <
miham...@gulfsat.mg> wrote:

>
> I cannot find the problem
> Any hints?
>
and what's the error message ?

-- 
GJ


Re: [GENERAL] ERROR: invalid input syntax for integer: ""

2009-09-25 Thread Rakotomandimby Mihamina

09/25/2009 01:55 PM, Grzegorz Jaśkiewicz:

and what's the error message ?


  ERROR: invalid input syntax for integer: ""

(I put it in the email subject, not very clever of me)

--
  Architecte Informatique chez Blueline/Gulfsat:
   Administration Systeme, Recherche & Developpement
   +261 34 29 155 34

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer: ""

2009-09-25 Thread A. Kretschmer
In response to Rakotomandimby Mihamina :
> Hi all,
> I get invalid input syntax when UPDATEing my radacct table:
> 
> freedsl=# \d radacct
> Table "public.radacct"
>         Column        |           Type           |   Modifiers
> ----------------------+--------------------------+----------------------------------
>  radacctid            | bigint                   | not null default ...
> ...
>  acctsessiontime      | bigint                   |
> ...
>   AcctSessionTime = CASE WHEN '' = '' THEN   (EXTRACT(EPOCH FROM 
>   ('2009-09-25 10:35:44'::TIMESTAMP WITH TIME ZONE - 
>   AcctStartTime::TIMESTAMP WITH TIME ZONE   - '0'::INTERVAL)))::BIGINT ELSE 
>   '' END,

test=*# select ''::int;
ERROR:  invalid input syntax for integer: ""


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Johan Nel

Hi Rob,

In a database I wish to implement a GENERIC datamodel, thus on a 
meta-level. All RELATIONS (part of a MODEL) will be a view on some base 
(being a table) JOINed with (an) extra column(s). Thus, this view 
consists of a number of FIELDS. I wish to make this view editable 
(INSERT, UPDATE) using the RULE system. Some constraints will apply; 
enforcing these is the problem I am trying to solve by modeling these 
rules with a table "RELATION_CONSTRAINTS" (see below).


Although many people will shoot it down, I follow a very similar approach: 
a data-driven application framework with exactly what you have described. 
In short, my application reads the "metadata" at runtime and "builds" 
the application, so to speak, on the fly.


For this I use the following two table approach:

meta_master:
  master_no SERIAL NOT NULL PRIMARY KEY,
  master_type VARCHAR(30) NOT NULL REFERENCES master_type(master_type),
  master_id VARCHAR(30) NOT NULL,
  master_property TEXT,
  UNIQUE (master_type, master_id)

meta_link:
  link_no SERIAL NOT NULL PRIMARY KEY,
  master_no INTEGER REFERENCES meta_master(master_no),
  link_type VARCHAR(30) NOT NULL REFERENCES master_type(master_type),
  member_no INTEGER NOT NULL REFERENCES meta_master(master_no),
  member_property TEXT,
  UNIQUE (master_no, member_no)

Some explanation:
master_type and link_type have values like database, table, column etc.

Thus, at runtime a method FormLoad(nID) uses a recursive query to load 
everything that is needed to build the "Form" on the fly and 
associate it with the correct database, table, etc.


It is necessary to recurse over all members via meta_master.master_no = 
meta_link.master_no and meta_link.member_no = meta_master.master_no (use 
connectby() from the tablefunc contrib module pre-8.4, or WITH RECURSIVE queries from 8.4 on).
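
On 8.4, for example, such a lookup might be sketched as follows (assuming the
two tables above; the starting master_no of 1 is purely illustrative):

-- recursively collect every member reachable from one metadata master row
WITH RECURSIVE members AS (
    SELECT m.master_no, m.master_type, m.master_id, m.master_property
    FROM   meta_master m
    WHERE  m.master_no = 1
  UNION ALL
    SELECT c.master_no, c.master_type, c.master_id, c.master_property
    FROM   members p
           JOIN meta_link   l ON l.master_no = p.master_no
           JOIN meta_master c ON c.master_no = l.member_no
)
SELECT * FROM members;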


Where applicable the %_property columns are used to define additional 
information in the format 'name=value;nextname=value;' etc.


I was thinking of implementing this using a FUNCTION that takes a 
polymorphic record parameter (and the relation name); then checking this 
record against the applicable constraint expression.
This whole idea may sound like a DBMS-in-a-DBMS kind of thing... What I 
am trying to do is write as little table/view-specific code as would be 
necessary, while still collecting all base data in one central table...


I take the %_property column even further: in my business data I have a 
property column again, in tables where additional columns can be defined on 
the fly based on the metadata, until such time as users have a clear 
picture of what exactly they need.  This is also used in tables where 
multi-company differences would otherwise make it almost impossible to keep 
the share of redundant columns below 50%.


If you need more info regarding this approach, feel free to contact me in 
private.


Johan Nel
Pretoria, South Africa.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] lazy vacuum and AccessExclusiveLock

2009-09-25 Thread Jaromír Talíř
Alvaro Herrera wrote on Thu, 24 Sep 2009 at 14:41 -0400:
> Jaromír Talíř wrote:
> 
> > we are facing a strange situation with an exclusively locked table during
> > a normal lazy vacuum. There is one big table (66GB) that is heavily
> > inserted into and updated in our database. Suddenly (after a backup and delete
> > of almost all records) we are not able to run VACUUM over this table,
> > because after 50 minutes of work it acquires an AccessExclusiveLock on this
> > table and all other connections start to time out.
> 
> What version are you running?

We are running 8.3.5 on Ubuntu LTS 8.04.

Here is confirmation of the lock from SQL:

"SELECT a.datname, c.relname, l.mode, l.granted, a.usename, age(now(),
a.query_start) as "age", l.pid, a.current_query FROM pg_stat_activity a
JOIN pg_locks l ON (l.pid = a.procpid) LEFT OUTER JOIN pg_class c ON
(l.relation = c.oid) WHERE l.pid!=pg_backend_pid() ORDER BY
a.query_start"

 datname |  relname   |           mode           | granted | usename  |       age       |  pid  |            current_query
---------+------------+--------------------------+---------+----------+-----------------+-------+-------------------------------------
 fred    | action_xml | ShareUpdateExclusiveLock | t       | postgres | 00:50:49.708796 | 15799 | VACUUM ANALYZE VERBOSE action_xml ;
 fred    | action_xml | AccessExclusiveLock      | t       | postgres | 00:50:49.708796 | 15799 | VACUUM ANALYZE VERBOSE action_xml ;
 fred    | action_xml | RowExclusiveLock         | f       | rifd     | 00:00:54.987454 | 28815 | INSERT INTO Action_XML   VALUES (
...

Here is the log of VACUUM VERBOSE. At the end we had to kill it because we
cannot afford to block normal connections:

fred=# VACUUM ANALYZE VERBOSE action_xml ;
INFO:  vacuuming "public.action_xml"
INFO:  scanned index "action_xml_pkey" to remove 4722451 row versions
DETAIL:  CPU 2.62s/3.41u sec elapsed 41.56 sec.
INFO:  "action_xml": removed 4722451 row versions in 4722024 pages
DETAIL:  CPU 113.50s/40.13u sec elapsed 1162.88 sec.
INFO:  index "action_xml_pkey" now contains 5993747 row versions in 250663 pages
DETAIL:  4722451 index row versions were removed.
234178 index pages have been deleted, 221276 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "action_xml": found 8091937 removable, 6006252 nonremovable row versions 
in 8397120 pages
DETAIL:  12739 dead row versions cannot be removed yet.
There were 80712079 unused item pointers.
8397120 pages contain useful free space.
0 pages are entirely empty.
CPU 284.46s/109.26u sec elapsed 2994.64 sec.
Cancel request sent



-- 
Jaromir Talir
technicky reditel / Chief Technical Officer
---
CZ.NIC, z.s.p.o.  --.cz domain registry
Americka 23, 120 00 Praha 2, Czech Republic
mailto:jaromir.ta...@nic.cz  http://nic.cz/
sip:jaromir.ta...@nic.cz tel:+420.222745107
mob:+420.739632712   fax:+420.222745112
---




[GENERAL] close inactive client connection

2009-09-25 Thread Luiz Bernardi


Hello,  

 I am developing a system, using the ZeosLib, which maintains a 
permanent connection to the database.  


 When the client loses the connection, the server does not recognize 
this and keeps the connection as if it were active. When the client 
attempts to reconnect, the server creates a new connection and the 
previous remains active.


 This has caused an excessive increase in the number of active 
connections and hampered the use of the system.  


 Is there any way or setting to make postgres close idle connections?
--  
Luiz Agnaldo Bernardi
Fone 41 36755756
 41 99979849

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer: ""

2009-09-25 Thread Tom Lane
Rakotomandimby Mihamina  writes:
> I get invalid input syntax when UPDATEing my radacct table:

It's the ELSE '' here:

>AcctSessionTime = CASE WHEN '' = '' THEN   (EXTRACT(EPOCH FROM 
> ('2009-09-25 10:35:44'::TIMESTAMP WITH TIME ZONE - AcctStartTime::TIMESTAMP 
> WITH TIME ZONE   - '0'::INTERVAL)))::BIGINT ELSE '' END,

FWIW, 8.4 will give you a syntax error pointer in cases like this.
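
A minimal, self-contained sketch of a type-consistent version of that
expression (with now() standing in for the original column and literal
timestamps) would be:

SELECT CASE
           WHEN '' = '' THEN
               (EXTRACT(EPOCH FROM (now() - now() - '0'::interval)))::bigint
           ELSE NULL   -- NULL (or 0) is bigint-compatible; an empty string is not
       END AS acctsessiontime;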

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] repeatedly process termination in PG 8.2

2009-09-25 Thread Vick Khera
On Thu, Sep 24, 2009 at 10:31 AM, Michael Molz
 wrote:
> one of our systems on a PG 8.2 database crashes nearly daily; others are 
> running fine. The host OS of this system is Windows; if the db crashes, nothing is 
> written in the OS event logs. The database log shows the following 
> messages every time:
>        2009-09-23 10:21:30 LOG:  server process (PID 1240) exited with exit 
> code -1073741819
>        2009-09-23 10:21:30 LOG:  terminating any other active server processes
>        2009-09-23 10:21:30 WARNING:  terminating connection because of crash 
> of another server process

Your symptoms point to one of two possibilities:  1) you have hardware
faults or 2) your DB on this one machine has a corruption that causes
postgres to try to read some absurd value of a pointer, causing a
panic error.  However, since nothing is logged in the postgres log
complaining about  bad pointers or something similar to that, I'd vote
you have a hardware fault.

One thing you can do to test your DB is to run vacuum full on it.  If
this produces no errors, try reindexing your tables.  This causes every
single row to be read which would throw an error if the table was
corrupted.  Of course, on reindex you could just clear your problem if
an index itself was corrupted.  If you do get an error during either
of these steps, you should reboot, then re-run the same command to see
if the error is in the exact same place every time.  If not, then that
would be a very strong indicator of hardware fault.  Run your machine
hardware diagnostics. If still nothing comes up, replace the machine.
:-(
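
A sketch of the checks described above ("mytable" and "mydb" are placeholders
for your own object names):

VACUUM FULL VERBOSE;        -- forces every row of every table to be read
REINDEX TABLE mytable;      -- rebuilds the indexes on a single table
REINDEX DATABASE mydb;      -- or rebuild every index in the current database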

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't find SRPMs for PG 8.1.18 on RHEL4

2009-09-25 Thread Justin Pasher

Devrim GÜNDÜZ wrote:

On Thu, 2009-09-24 at 15:43 -0500, Justin Pasher wrote:
  
I'm having trouble finding the source RPMs for PostgreSQL 8.1.18 on 
RHEL4. I've tried looking in the following places with no luck (I can 
only find the regular RPMs).


http://yum.pgsqlrpms.org/8.1/redhat/rhel-4-i386/



...because that URL is for binary packages. Please visit here:

http://yum.pgsqlrpms.org/srpms/8.1/redhat/rhel-4-i386/repoview/postgresql.htm


Thanks. That's what I needed.

Out of curiosity, how do you actually get to those links from the web 
site? I always seem to have trouble finding the link to the SRPMs tree.



--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UPDATE statement with syntax error doesn't raise a warning?

2009-09-25 Thread Mirko Pace
I've run an update statement like this (obviously wrong, I know!):

update my_table
  set boolean_field = true AND
  my_notes = 'something'
where id in
   (select id from my_table order by random() limit 4000);

in my psql client and I had a "UPDATE 4000" result but, correctly, anything
was changed in my_table.

So... why didn't pg raise a warning about a syntax issue?

Thank you!

Mirko

P.S.
PostgreSQL 8.3.8 on i686-pc-linux-gnu, compiled by GCC gcc (Debian
4.3.2-1.1) 4.3.2
psql 8.3.8 (Ubuntu package)


Re: [GENERAL] Delphi connection ?

2009-09-25 Thread Andy Colson

Nico Callewaert wrote:

Hi List,
 
I tried already in the ODBC list, but without success...
 
I have to develop a client/server application, heavily transaction 
oriented, that will serve around 100 users, and database traffic will be 
intense (lots of reads, updates and inserts).
The development environment is Delphi 2007.  I know there are a few 
commercial components available, but I was wondering if ODBC could do 
the job?
So, my question is: is ODBC intended to be used for that?  Many 
simultaneous connections and lots of inserts and updates?

The thing you always hear about ODBC is that it is very slow?
 
Many thanks in advance,

Best regards,
 
Nico Callewaert


I don't know about odbc, never used it to hit PG.  I use Delphi and 
wrote a simple object on top of the libpq api.  The api is simple to 
use.  If odbc doesn't work out for you I'd recommend using libpq directly.


-Andy


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE statement with syntax error doesn't raise a warning?

2009-09-25 Thread Andy Colson

Mirko Pace wrote:

I've ran an update statement like this (obviously wrong, I know!):

update my_table
  set boolean_field = true AND
  my_notes = 'something'
where id in
   (select id from my_table order by random() limit 4000);

in my psql client and I had a "UPDATE 4000" result but, correctly, 
anything was changed in my_table.


So... why pg didn't raise a warning about syntax issue?

Thank you!

Mirko

P.S.
PostgreSQL 8.3.8 on i686-pc-linux-gnu, compiled by GCC gcc (Debian 
4.3.2-1.1) 4.3.2

psql 8.3.8 (Ubuntu package)


Are you sure it's wrong?

Maybe pg looked at it like:
boolean_field = (true AND my_notes = 'something')


-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE statement with syntax error doesn't raise a warning?

2009-09-25 Thread Tim Landscheidt
Mirko Pace  wrote:

> I've ran an update statement like this (obviously wrong, I know!):

> update my_table
>   set boolean_field = true AND
>   my_notes = 'something'
> where id in
>(select id from my_table order by random() limit 4000);

> in my psql client and I had a "UPDATE 4000" result but, correctly, anything
> was changed in my_table.

> So... why pg didn't raise a warning about syntax issue?

Because there is no syntax error? "TRUE AND my_notes =
'something'" is a valid expression (and equivalent to
"my_notes = 'something'").

Tim


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE statement with syntax error doesn't raise a warning?

2009-09-25 Thread David W Noon
On Fri, 25 Sep 2009 17:05:13 +0200, Mirko Pace wrote about [GENERAL]
UPDATE statement with syntax error doesn't raise a warning?:

>I've ran an update statement like this (obviously wrong, I know!):

Not so obvious.

>update my_table
>  set boolean_field = true AND
>  my_notes = 'something'
>where id in
>   (select id from my_table order by random() limit 4000);
>
>in my psql client and I had a "UPDATE 4000" result but, correctly,
>anything was changed in my_table.

I presume you meant "nothing" rather than "anything".

>So... why pg didn't raise a warning about syntax issue?

If we add some redundant parentheses, the SET clause becomes:

SET boolean_field = (true AND my_notes = 'something')

As you can see, this is a valid Boolean expression. There is no syntax
error. It just doesn't mean what you wanted.
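
Presumably the statement you intended needed a comma, not AND, between the two
assignments:

UPDATE my_table
   SET boolean_field = true,
       my_notes = 'something'
 WHERE id IN
       (SELECT id FROM my_table ORDER BY random() LIMIT 4000);
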
-- 
Regards,

Dave  [RLU #314465]
===
david.w.n...@ntlworld.com (David W Noon)
===

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL]

2009-09-25 Thread Jerzy Bialas
Hi,
 
 I'm coming from Sybase ASE and Transact SQL.
Apart from a long-time desire to see true stored procedures in PL/pgSQL,
with an integer status and many arbitrary result sets, I have one specific question:
is it possible to rename columns in the rowset returned from a function declared
as "returns table(...)"?
 
As far as I know, the column names are always the same as in the table(...)
declaration, even if 'AS "name"' is used in the internal query.
 
I work with a billing system in a telecommunications company.
We have a system to publish business events about various changes to
MOM middleware (WebSphere MQ).
There are triggers in various tables, and on changes they insert records
into a special table EVENTS:
 
ID identity,
EVENT numeric(8,0),
STATUS char(1), -- R - new, S - sent

 
There is a stored procedure, called periodically by an external application,
which hits this table for STATUS=R (new) records and, depending on the EVENT field,
selects various data from EVENTS and other tables and returns a rowset which
is transformed by the application into an XML message.
To make it simple, the names and values of the XML elements are encoded
as column names and column values, using "column name" = value,
which is the Sybase equivalent of AS "column name".
 
The whole logic is in this procedure and the calling application is extremely
simple - it has been basically unmodified (except for minor bug fixes) for almost 10 years.
Of course, rowsets returned for different EVENT values are different - various
column numbers and value types, but this is perfectly OK in TSQL.
 
I'd like to do the same in PostgreSQL, but:
- the resulting rowset must be fixed and declared in the 'returns table(...)' declaration.
 I can declare all fields "text" and convert them if needed. Ugly, but it should work.
- the number of columns in "table()" must be the highest needed, and redundant columns
 may be empty. Even more ugly, but it should work.
 
Is it possible to rename columns in returned rowset ?
If not, perhaps I need to return 2 rowsets - the first one with
element names, and the 2nd one with values.
 
Thanks in advance,
Jerzy




[GENERAL] stored procedure: RETURNS record

2009-09-25 Thread InterRob
Dear list,
I am trying to find out whether I can use the "record" type as a polymorphic
return type to return multiple columns, to be determined at runtime. In
fact, I'm trying to write a function that provides a generic implementation
of some deserialization of a certain field.

The prototype of the function I came up with, is as follows:
>> CREATE FUNCTION deserialize(the_table t1) RETURNS record << etc. etc. >>
;

It is intended to return multiple fields in an anonymous row: only at the time
of invocation is it known what fields this row will consist of...

The function is used in the following statement:

>> SELECT * FROM (SELECT deserialize( t1 ) FROM t1) ss;

Unfortunately, this results in ONE row, with ONE column. E.g.:

MYDB=# select * from (SELECT deserialize(kvp) FROM kvp) ss;
 deserialize
---
 (1,2)
(1 row)

I guess I am seeking to prototype the anonymous row layout in the above SQL
statement?

Hope you have any idea,
regards,


Rob


Re: [GENERAL] Delphi connection ?

2009-09-25 Thread John R Pierce

Nico Callewaert wrote:

Hi List,
 
I tried already in the ODBC list, but without success...
 
I have to develop a client/server application, heavily transaction 
oriented, that will serve around 100 users, and database traffic will 
be intense (lots of reads, updates and inserts).
The development environment is Delphi 2007.  I know there are a few 
commercial components available, but I was wondering if ODBC could do 
the job?
So, my question is: is ODBC intended to be used for that?  Many 
simultaneous connections and lots of inserts and updates?

The thing you always hear about ODBC is that it is very slow?
 



ADO is significantly faster than ODBC, so the preferred stack would be  
delphi -> ado -> postgres ole db -> libpq -> postgres


I believe there exists a delphi->ado wrapper (at least my brief googling 
popped one up)




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL]

2009-09-25 Thread Pavel Stehule
Hello

this isn't possible now. Everything you want is limited by the SELECT
statement in PostgreSQL. Simply put, a PL function is invoked through a SELECT
statement, and the structure of the result has to be strictly static - because
it is based on a static execution plan, determined before the query starts.

The CALL statement isn't implemented yet.
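
The column names can only be changed from the calling query, where they have
to be known in advance (which may not help when they depend on EVENT). A
sketch, with a hypothetical function fn() declared as RETURNS TABLE(name text,
val text):

SELECT name AS "ElementName", val AS "Value"
  FROM fn();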

Regards
Pavel Stehule

2009/9/25 Jerzy Bialas :
> Hi,
>
>  I'm coming from Sybase ASE and Transact SQL.
> Apart from long time desire to see true stored procedures in pl/pgsql,
> with integer status and many arbitrary resultsets, I have one specific
> question:
> is it possible to rename columns in rowset returned from function declared
> as "returns table(...)" ?
>
> As far as I know, the column names are always the same as in table(...)
> declaration, even if 'AS "name"' is used in internal query.
>
> I work with billing system in telecommunication company.
> We have a system to publish business events about various changes to
> MOM middleware (Websphere MQ).
> There are triggers in various tables, and on changes, they insert records
> into special table EVENTS:
>
> ID identity,
> EVENT numeric(8,0),
> STATUS char(1), -- R - new, S - sent
> 
>
> There is a stored procedure, called periodically by external application,
> which hits this table for STATUS=R (new) records, and, depending on EVENT
> field,
> selects various data from EVENTS and other tables and returns a rowset which
> is transformed by application into XML message.
> To make it simple, names and values of XML elements are encoded
> as column names and column values, using "column name" = value,
> which is Sybase equivalent of AS "column name".
>
> The whole logic is in this procedure and calling application is extremely
> simple - it was basically unmodified (except minor bug fixing) by almost 10
> years.
> Of course, rowsets returned for different EVENT values are different -
> various
> column numbers and value types, but this is perfectly OK in TSQL.
>
> I'd like to do the same in PostgreSQL, but:
> - resulting rowset must be fixed and declared in 'returns table(...)'
> declaration.
>  I can declare all fields "text" and convert them if needed. Ugly, but it
> should work.
> - Number of columns in "table()" must be the highest needed and redundant
> columns
>  may be empty. Even more ugly, but it should work.
>
> Is it possible to rename columns in returned rowset ?
> If not, perhaps I need to return 2 rowsets - the first one with
> element names, and the 2nd one with values.
>
> Thanks in advance,
> Jerzy
>
>
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] close inactive client connection

2009-09-25 Thread John R Pierce

Luiz Bernardi wrote:
I am developing a system, using the ZeosLib, which maintains a 
permanent connection to the database.


When the client loses the connection, the server does not recognize 
this and keeps the connection as if it were active. When the client 
attempts to reconnect, the server creates a new connection and the 
previous remains active.


how does this happen ?   TCP connections don't just wander off and get lost.

This has caused an excessive increase in the number of active 
connections and hampered the use of the system.


Is there any way or setting to make postgres close idle connections?


first, you'd have to identify that they were in fact 'lost' and not just 
idle.   we have apps that open a socket to postgres, issue some 
commands, then sometimes sit for hours before more commands are issued.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] close inactive client connection

2009-09-25 Thread Scott Marlowe
On Fri, Sep 25, 2009 at 6:26 AM, Luiz Bernardi  wrote:
> Hello,
>
> I am developing a system, using the ZeosLib, which maintains a permanent
> connection to the database.
>
> When the client loses the connection, the server does not recognize this and
> keeps the connection as if it were active. When the client attempts to
> reconnect, the server creates a new connection and the previous remains
> active.

Eventually the server's network stack will check to see if the
connection is alive or not by sending a tcp keepalive ping.  If it
receives no answers after x number of tries, it will close the socket
and the pg backend will be terminated.

To adjust this time, look at the tcp keepalive parameters in either
the postgresql.conf file or system wide on your server.  The default
timeout is 2 hours.  It's not unusual to drop it to 5 or 10 minutes on
busy systems.
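
A sketch of the relevant settings (the parameter names exist as of 8.2; the
values below are illustrative only):

SHOW tcp_keepalives_idle;           -- 0 means "use the operating system default", often 2 hours
-- in postgresql.conf (or per session with SET):
--   tcp_keepalives_idle = 300      -- seconds of idle time before probing starts
--   tcp_keepalives_interval = 30   -- seconds between probes
--   tcp_keepalives_count = 3       -- unanswered probes before the connection is considered dead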

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Sam Mason
On Thu, Sep 24, 2009 at 11:07:31PM +0200, InterRob wrote:
> What I'm trying to do is to facilitate different fieldwork methodologies for
> archaeological research (on project basis); there is no final agreement on
> data structure and semantics; however, on a meta-level all choices are
> rational and can be modelled... In fact, all models can be related to each
> other: that's where the "hybrid" part comes in: I wish to implement the
> common denominator (90%) and then further extend this, enabling specific data
> model implementations -- including checks for data integrity.

In my experience it depends on how technically competent your users
are.  Most of mine are fine working with "their own" data files/formats
and only want the data in the database to keep track of the larger
structural stuff.

I therefore tend to just leave their data as opaque blobs (stored in
large objects, as they're reasonably small) of data and only pull out
the parts of it that are needed to keep the other parts of the projects
happy.  That way I can make sure the bits the database takes care of can
be appropriately designed and the users get to keep their data exactly as
they want.

To support this I've written various bits of code that get automatically
run when users insert their data files to pull them out into the
appropriate tables.  The detailed bits of the structure are of course
missed, but most of the time this data isn't needed and when it is they
want the rest of the original (normally proprietary binary file formats
that I've had to reverse engineer) file so that their program can figure
out what's going on.

It all depends on the problem domain of course, but this seems to work
OK for us!  I really want to hack Samba around so that the users can
view the files directly from inside the database, but I'm not sure how
good an idea this really is.

The bigger datasets (1GB+) tend to be nicely structured, so they get
handled specially.

> As soon as that works, it becomes possible to record changes at row-level --
> providing access to data-snapshots in time.

I think these are what my blobs are...

> Furthermore, it becomes possible
> to build upon this central database automated tools for management and
> filing of information and different modes of data entry (including
> webbased)...

...and this is what I'd call my structural bits.

> The thing is: altering table structures (changes should be limited to adding
> columns) is required on an ad hoc basis and the end user should be able to do
> so...

I generally find it's easier if I'm involved in that.  Maybe it's just
my users!

> I guess that requires some over engineering... ?

By "over engineering" I meant that you seem to be trying to solve
a more complicated problem than is necessary.  There will be some
essential complexity inherent in any problem, but it's the job of every
engineer (software or hardware) to ensure that only minimal amounts of
incidental complexity are introduced.


In my case the "important" thing is to make sure that we know the
state of what's going on in the projects.  I can do this by getting a
combination of data from the user (through traditional means) and by
pulling apart their data files.  The "incidental complexity" I've added,
that of writing fiddly little programs to interpret their files, seems
to be better than getting the users to input the data twice; once in
their programs and once into the database.

In your case you've introduced this strange new EAV style design and the
constraint system on top of it.  The benefits of this design may well be
better than the costs of developing it, but I have a feeling it may be
easier to "side-step" the problem somehow.

That all got a bit longer than I was expecting, but I hope it's useful!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Delphi connection ?

2009-09-25 Thread Merlin Moncure
On Fri, Sep 25, 2009 at 5:16 AM, Nico Callewaert
 wrote:
> Hi List,
>
> I tried already in the ODBC list, but without success...
>
> I have to develop a client/server application, heavily transaction
> oriented, that will serve around 100 users, and database traffic will be
> intense (lots of reads, updates and inserts).
> The development environment is Delphi 2007.  I know there are a few commercial
> components available, but I was wondering if ODBC could do the job?
> So, my question is: is ODBC intended to be used for that?  Many
> simultaneous connections and lots of inserts and updates?
> The thing you always hear about ODBC is that it is very slow?

It's been a few years, but historically the best Delphi components for
database access have been ZeosLib... native connections to most popular
databases...

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [HACKERS] libpq port number handling

2009-09-25 Thread Sam Mason
On Fri, Sep 25, 2009 at 09:29:24AM +0300, Peter Eisentraut wrote:
> On Thu, 2009-09-24 at 20:36 -0400, Tom Lane wrote:
> > BTW, are port numbers still limited to 16 bits in IPv6?
> 
> Port numbers are in TCP, not in IP.

I'd checked that it should work with IPv6, but I hadn't realized that
it was because ports were at a different level of abstraction.  This
mailing list is good for otherwise obscure details like that!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored procedure: RETURNS record

2009-09-25 Thread Alban Hertroys

On 25 Sep 2009, at 18:34, InterRob wrote:


Unfortunately, this results in ONE row, with ONE column. E.g.:

MYDB=# select * from (SELECT deserialize(kvp) FROM kvp) ss;
 deserialize
---
 (1,2)
(1 row)

I guess I am seeking to prototype the anonymous row layout in the  
above SQL statement?



I'm not entirely sure about the syntax in your case, but I think  
you're looking for:


MYDB=# select * from (SELECT deserialize(kvp) FROM kvp) ss (a int, b  
int);


If that doesn't work, it's based on how you normally select from a  
record-returning function, namely:

MYDB=# SELECT * FROM deserialize('some string') AS ss (a int, b int);

You may need to call it like this though:
MYDB=# select * from (SELECT (deserialize(kvp)).a, (deserialize 
(kvp)).b FROM kvp) ss (a int, b int);


In that case your function better not be volatile or it will be  
evaluated twice.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Ron Mayer
Sam Mason wrote:
> It all depends on the problem domain of course, but this seems to work
> OK for us!  I really want to hack Samba around so that the users can
> view the files directly from inside the database, but I'm not sure how
> good an idea this really is.

"hack Samba"?   Wouldn't it be easier to use one of the database-as-a
filesystem FUSE bindings and run stock samba over that?

The perl Fuse::DBI module's example sounds pretty similar to the
system you described, where the "file" seems to be a column in a table.
http://www.rot13.org/~dpavlin/fuse_dbi.html

If that doesn't suit your needs there are more elaborate ones (libferris)
that seem more complex and more flexible, and simpler ones (dumbofs) that
seem to be meant more as example code you could hack for your purposes
http://lwn.net/Articles/306860/
http://yiannnos.com/dumbofs


And then you could use unmodified samba out-of-the-box exporting
that to whatever the heck speaks SMB/CIFS these days.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] close inactive client connection

2009-09-25 Thread Luiz Bernardi


Thanks, Scott.

 I will change the setting and then find out why they were losing the 
client connection.

--
Luiz Agnaldo Bernardi
Fone 41 36755756
 41 99979849



On Fri, 25 Sep 2009 11:11:52 -0600
 Scott Marlowe  wrote:
On Fri, Sep 25, 2009 at 6:26 AM, Luiz Bernardi 
 wrote:

Hello,

I am developing a system, using the ZeosLib, which maintains a 
permanent

connection to the database.

When the client loses the connection, the server does not recognize 
this and
keeps the connection as if it were active. When the client attempts 
to
reconnect, the server creates a new connection and the previous 
remains

active.


Eventually the server's network stack will check to see if the
connection is alive or not by sending a tcp keepalive ping.  If it
receives no answers after x number of tries, it will close the 
socket

and the pg backend will be terminated.

To adjust this time, look at the tcp keepalive parameters in either
the postgresql.conf file or system wide on your server.  The default
timeout is 2 hours.  It's not unusual to drop it to 5 or 10 minutes 
on

busy systems.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] close inactive client connection

2009-09-25 Thread Luiz Bernardi

hi John


I have no idea what may be happening. The system opens a connection 
and communicates normally. But after an idle time, it loses the 
connection and new transactions return with an error.
 
16/09/2009 13:39:14 - SQL Error: no connection to the server

16/09/2009 13:39:14 - SQL Error: connection not open


At this point it reconnects; the server creates a new connection that 
behaves exactly like the previous one.




--
Luiz Agnaldo Bernardi
Fone 41 36755756
 41 99979849



On Fri, 25 Sep 2009 10:03:20 -0700
 John R Pierce  wrote:

Luiz Bernardi wrote:
I am developing a system, using the ZeosLib, which maintains a 
permanent connection to the database.


When the client loses the connection, the server does not recognize 
this and keeps the connection as if it were active. When the client 
attempts to reconnect, the server creates a new connection and the 
previous remains active.


how does this happen ?   TCP connections don't just wander off and 
get lost.


This has caused an excessive increase in the number of active 
connections and hampered the use of the system.


Is there any way or setting to make postgres close idle connections?


first, you'd have to identify that they were in fact 'lost' and not 
just idle.   we have apps that open a socket to postgres, issue some 
commands, then sometimes sit for hours before more commands are 
issued.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] close inactive client connection

2009-09-25 Thread Scott Marlowe
Are these machines communicating through a firewall?  Often firewalls
time out idle tcp/ip connections.

On Fri, Sep 25, 2009 at 12:56 PM, Luiz Bernardi  wrote:
> hi John
>
>
> I have no idea what may be happening. The system opens a connection and
> communicates normally. But after an idle time, it loses the connection and
> new transactions return with an error.
>  16/09/2009 13:39:14 - SQL Error: no connection to the server
> 16/09/2009 13:39:14 - SQL Error: connection not open
>
>
> At this point it reconnects; the server creates a new connection that
> behaves exactly like the previous one.
>
>
>
> --
> Luiz Agnaldo Bernardi
> Fone 41 36755756
>  41 99979849
>
>
>
> On Fri, 25 Sep 2009 10:03:20 -0700
>  John R Pierce  wrote:
>>
>> Luiz Bernardi wrote:
>>>
>>> I am developing a system, using the ZeosLib, which maintains a permanent
>>> connection to the database.
>>>
>>> When the client loses the connection, the server does not recognize this
>>> and keeps the connection as if it were active. When the client attempts to
>>> reconnect, the server creates a new connection and the previous remains
>>> active.
>>
>> how does this happen ?   TCP connections don't just wander off and get
>> lost.
>>
>>> This has caused an excessive increase in the number of active connections
>>> and hampered the use of the system.
>>>
>>> Is there any way or setting to make postgres close idle connections?
>>
>> first, you'd have to identify that they were in fact 'lost' and not just
>> idle.   we have apps that open a socket to postgres, issue some commands,
>> then sometimes sit for hours before more commands are issued.
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
When fascism comes to America, it will be intolerance sold as diversity.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] About logging

2009-09-25 Thread Scott Marlowe
On Thu, Sep 24, 2009 at 1:09 PM, Magnus Hagander  wrote:
> On Thu, Sep 24, 2009 at 21:06,   wrote:
>> Is it possible to log two different kinds of information to two different files?
>> Basically, I need to log all the mod statements in one CSV log file and all
>> the queries running more than 2 minutes in another CSV log file. As I enabled
>> both, it will be doing both in a single file, right? Is there any way to split
>> both?
>
> This is not currently possible inside PostgreSQL; you will need to do
> external post-processing of the logfile to get that.

I bet it wouldn't be hard to mangle apache's log rotator to do
something like putting all entries with DB1 at the start into one file and
all the others into another file.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Sam Mason
On Fri, Sep 25, 2009 at 11:01:02AM -0700, Ron Mayer wrote:
> Sam Mason wrote:
> > It all depends on the problem domain of course, but this seems to work
> > OK for us!  I really want to hack Samba around so that the users can
> > view the files directly from inside the database, but I'm not sure how
> > good an idea this really is.
> 
> "hack Samba"?   Wouldn't it be easier to use one of the database-as-a
> filesystem FUSE bindings and run stock samba over that?

Huh, that would indeed be much easier.  I hadn't thought about this for
a while and Rob's post reminded me.  I don't think FUSE existed when I
started thinking about it and as all our clients are Windows boxes it
didn't matter at the time.

> The perl Fuse::DBI module's example  sounds pretty similar to the
> system you described where he "file" seems to be a column in a table.
> http://www.rot13.org/~dpavlin/fuse_dbi.html

FUSE looks pretty easy to get going and I think I'd want more control
over how files were presented than this gives so I'd probably end up
rolling my own code.  Thanks for pointing out FUSE though; not sure
why I'd not thought of it before.  I'll probably still never get around
to it, but maybe I will!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] FUNCTION taking a record; returning a record?

2009-09-25 Thread InterRob
Dear list,
I wrote a function that takes one parameter of a certain 'table type':

CREATE FUNCTION deserialize(serTable table1) RETURNS record AS ...

I whish to use this function in a query such as:

SELECT deser.* FROM deserialize(table1) deser(col1 integer, col2 integer),
table1 WHERE table1.field > [some_value]

Thus: have the function perform its operation on a SUBSET of table1,
RETURNING multiple fields.

I can't quite figure out how to put this together (other than passing the
table name and where-clause as TEXT)...

Any ideas? They are very much appreciated.

Regards,


Rob


Re: [GENERAL] lazy vacuum and AccessExclusiveLock

2009-09-25 Thread Alvaro Herrera
Jaromír Talíř wrote:
> Alvaro Herrera wrote on Thu, 24 Sep 2009 at 14:41 -0400:
> > Jaromír Talíř wrote:
> > 
> > > we are facing a strange situation with an exclusively locked table during
> > > a normal lazy vacuum. There is one big table (66GB) that is heavily
> > > inserted into and updated in our database. Suddenly (after a backup and delete
> > > of almost all records) we are not able to run VACUUM over this table,
> > > because after 50 minutes of work it acquires an AccessExclusiveLock on this
> > > table and all other connections start to time out.
> > 
> > What version are you running?
> 
> We are running 8.3.5 on Ubuntu LTS 8.04.

Okay, I was thinking of a fix that was applied before 8.3, so you're not being
bitten by that bug.  Yes, lazy vacuum does hold an exclusive lock: it does so
to be able to truncate the empty pages at the end of the table.  This lock is
acquired only if available (vacuum will skip truncating if the table cannot be
locked immediately), and it is supposed to last a short amount of time; namely
the time it takes to scan the table backwards to find out how many pages to
truncate.

I guess in your case the amount of time is not as short as all that :-(

The bug fixed was this one:

revision 1.92
date: 2007-09-10 13:58:45 -0400;  author: alvherre;  state: Exp;  lines: +6 -2;
Remove the vacuum_delay_point call in count_nondeletable_pages, because we hold
an exclusive lock on the table at this point, which we want to release as soon
as possible.  This is called in the phase of lazy vacuum where we truncate the
empty pages at the end of the table.

An alternative solution would be to lower the vacuum delay settings before
starting the truncating phase, but this doesn't work very well in autovacuum
due to the autobalancing code (which can cause other processes to change our
cost delay settings).  This case could be considered in the balancing code, but
it is simpler this way.
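
For a manual VACUUM the delay settings can be changed for just that session
(vacuum_cost_delay defaults to 0, so this only matters if it has been raised);
a sketch:

SET vacuum_cost_delay = 0;            -- disable cost-based delays for this session
VACUUM ANALYZE VERBOSE action_xml;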


-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Low values for cached size

2009-09-25 Thread Carlos Henrique Reimer
Hi,

We're facing performance problems on a Linux box running CentOS release 5
(Final) and PostgreSQL 8.2.4. I've done some basic checks on the
configuration but everything looks fine to me. One weird behaviour I've
found is the cached size shown by the
"top" and "free" Linux commands:


top - 08:32:17 up 3 days, 19:04,  1 user,  load average: 1.09, 1.07, 1.10
Tasks: 173 total,   2 running, 170 sleeping,   0 stopped,   1 zombie
Cpu(s):  9.5%us,  0.5%sy,  0.0%ni, 88.2%id,  1.7%wa,  0.0%hi,  0.0%si,
0.0%st
Mem:   3631900k total,  3378056k used,   253844k free,25488k buffers
Swap:  4192956k total,  100k used,  4192856k free,  2356588k cached

[postg...@server01 etc]$ free
             total       used       free     shared    buffers     cached
Mem:       3631900    3174804     457096          0      14280    2086184
-/+ buffers/cache:     1074340    2557560
Swap:      4192956        108    4192848
[postg...@server01 etc]$
Both commands show values ranging from 2GB to 2.3GB for the cached size, and
the server has 3.5GB of RAM. I usually see cached values nearing the size of
the RAM on other servers. It seems that something is consuming the RAM and not
letting it be used as cache for Linux files, right?
The shared_buffers setting (256MB) is not high and I cannot see a reason for this.
Initially I thought the problem was that the system was running at runlevel 5,
but now it's running at runlevel 3 and even so the cached size does not change.

Any suggestions or directions I could follow to discover the reason?

Reimer

-- 
Reimer
47-3457-0881 47-9183-0547 msn: carlosrei...@hotmail.com
skype: carlosreimer


Re: [GENERAL] lazy vacuum and AccessExclusiveLock

2009-09-25 Thread Tom Lane
Alvaro Herrera  writes:
> An alternative solution would be to lower the vacuum delay settings before
> starting the truncating phase, but this doesn't work very well in autovacuum
> due to the autobalancing code (which can cause other processes to change our
> cost delay settings).  This case could be considered in the balancing code, 
> but
> it is simpler this way.

I don't think autovacuum has a problem --- if someone requests a
conflicting lock, autovac will get kicked off, no?  The OP's problem
comes from doing a manual vacuum.  Perhaps "don't do that" is a good
enough answer.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored procedure: RETURNS record

2009-09-25 Thread Rob Marjot
Still no luck... To clarify a bit, take this example:

CREATE OR REPLACE FUNCTION transpose()
  RETURNS record AS
$BODY$ DECLARE
   output RECORD;
 BEGIN
   SELECT * INTO output FROM (VALUES(1,2)) as tbl(first,second);
   RETURN output;

 END;$BODY$
  LANGUAGE 'plpgsql' STABLE
  COST 100;

Now, I expect to have 2 columns, named "first" and "second". However, as
posted before, the following query:
SELECT * FROM deserialize();
produces only ONE column (in one row, as one would clearly expect from the
function's definition):
 deserialize
---
 (1,2)
(1 row)


Any thoughts on how to make sure multiple columns are returned; without
specifying this in the function's prototype return clause?

Thanks,


Rob

2009/9/25 Alban Hertroys 

> On 25 Sep 2009, at 18:34, InterRob wrote:
>
>  Unfortunately, this results in ONE row, with ONE column. E.g.:
>>
>> MYDB=# select * from (SELECT deserialize(kvp) FROM kvp) ss;
>>  deserialize
>> ---
>>  (1,2)
>> (1 row)
>>
>> I guess I am seeking to prototype the anonymous row layout in the above
>> SQL statement?
>>
>
>
> I'm not entirely sure about the syntax in your case, but I think you're
> looking for:
>
> MYDB=# select * from (SELECT deserialize(kvp) FROM kvp) ss (a int, b int);
>
> If that doesn't work, it's based on how you normally select from a
> record-returning function, namely:
> MYDB=# SELECT * FROM deserialize('some string') AS ss (a int, b int);
>
> You may need to call it like this though:
> MYDB=# select * from (SELECT (deserialize(kvp)).a, (deserialize(kvp)).b
> FROM kvp) ss (a int, b int);
>
> In that case your function better not be volatile or it will be evaluated
> twice.
>
> Alban Hertroys
>
> --
> If you can't see the forest for the trees,
> cut the trees and you'll see there is no forest.
>


[GENERAL] pg_restore ordering questions

2009-09-25 Thread Steve Crawford

Background:
I am preparing to collapse two databases in a cluster into a single 
database (current step in a roadmap preparing to update a system from 
7.4.6 up to 8.4.1).


In testing, I have a step that dumps the schema of one of the databases, 
creates an object list, removes the unwanted objects, then restores the 
schema into an empty database. This step is failing due to attempts to 
restore views/indexes prior to creation of the referenced table. I have 
stripped it down to the minimum and it still fails:


pg_dump -U postgres --schema-only -Fc thedb > thedb.dump
pg_restore -U postgres -d thedb thedb.dump

My overall testing has raised several questions:

1. Is this a known issue in 7.4.6? I've Googled and searched the docs 
but haven't yet located it if so.


2. What is the default order of the --list option? (I noted that the 
--rearrange and --list options can be simultaneously applied to 
pg_restore but the generated list is identical with/without --rearrange 
so is the default list order the same as --rearrange would create?) If 
used unchanged as the input to --use-list should the restore succeed?


3. It appears that if --use-list and --rearrange are both used, the 
order will be determined by --rearrange rather than the order of the 
list. Is this correct?


4. The recommended upgrade procedure is to use pg_dump from the newer 
PostgreSQL version. But is it safe to use an up-to-date 
pg_dump/pg_restore on an old version of PostgreSQL?


5. If this is a known issue in 7.4.6, is there a good workaround?

Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored procedure: RETURNS record

2009-09-25 Thread Tom Lane
Rob Marjot  writes:
> Any thoughts on how to make sure multiple columns are returned; without
> specifying this in the function's prototype return clause?

If you want "SELECT * FROM" to expand to multiple columns, the names
and types of those columns *must* be available at parse time.  You
can either declare them in the function prototype, or you can supply
them in the function call, a la

select * from my_func(...) as x(a int, b int);

It will not work to hope that the parser can predict what the function
will do when executed.
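
For instance, applied to the deserialize() function shown earlier in this
thread (a sketch; the column names and types are assumed from that function's
body), the call would be:

SELECT * FROM deserialize() AS t(first int, second int);

which should produce something like:

 first | second
-------+--------
     1 |      2
(1 row)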

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore ordering questions

2009-09-25 Thread Tom Lane
Steve Crawford  writes:
> I am preparing to collapse two databases in a cluster into a single 
> database (current step in a roadmap preparing to update a system from 
> 7.4.6 up to 8.4.1).

> In testing, I have a step that dumps the schema of one of the databases, 
> creates an object list, removes the unwanted objects, then restores the 
> schema into an empty database. This step is failing due to attempts to 
> restore views/indexes prior to creation of the referenced table. I have 
> stripped it down to the minimum and it still fails:

I think that we didn't teach pg_dump about dependency ordering until 8.0
or so.  If you're unlucky enough to hit this in 7.4, you have to fix it
via manual reordering of the dump items.
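
In practice the reordering can be done through the TOC list (a sketch; the
file names here are placeholders):

pg_restore -l thedb.dump > thedb.list
  (edit thedb.list so tables come before the views/indexes that depend on them)
pg_restore -U postgres -d thedb -L thedb.list thedb.dump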

> 4. The recommended upgrade procedure is to use pg_dump from the newer 
> PostgreSQL version. But is it safe to use an up-to-date 
> pg_dump/pg_restore on an old version of PostgreSQL?

No, the dump typically won't load into an older server, at least not
without some manual editing to correct uses of newer syntax.

Why don't you update to 8.4 first and then do your other housekeeping?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] lazy vacuum and AccessExclusiveLock

2009-09-25 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera  writes:
> > An alternative solution would be to lower the vacuum delay settings before
> > starting the truncating phase, but this doesn't work very well in autovacuum
> > due to the autobalancing code (which can cause other processes to change our
> > cost delay settings).  This case could be considered in the balancing code, 
> > but
> > it is simpler this way.
> 
> I don't think autovacuum has a problem --- if someone requests a
> conflicting lock, autovac will get kicked off, no?  The OP's problem
> comes from doing a manual vacuum.  Perhaps "don't do that" is a good
> enough answer.

Hah, that was part of the commit message, which predates autovacuum
getting kicked out in case of conflicting locks IIRC.

I think the process being described is unusual enough that a manual
vacuum at just the right time is warranted ...
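
For a one-off manual run, a sketch of that idea (assuming the session has a
non-zero vacuum_cost_delay configured; the table name is a placeholder):

SET vacuum_cost_delay = 0;
VACUUM VERBOSE my_big_table;

so that the truncation phase holds its AccessExclusiveLock as briefly as
possible.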

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Low values for cached size

2009-09-25 Thread Scott Marlowe
On Fri, Sep 25, 2009 at 3:28 PM, Carlos Henrique Reimer
 wrote:
> Hi,
>
> We're facing performance problems in a Linux box running CentOS release 5
> (Final) and PostgreSQL 8.2.4. I've done some basic checks in the
> configuration but everything looks fine to me. One weird behaviour I've
> found is the cached size showed by the
> "top" and "free" Linux commands:
>
> top - 08:32:17 up 3 days, 19:04,  1 user,  load average: 1.09, 1.07, 1.10
> Tasks: 173 total,   2 running, 170 sleeping,   0 stopped,   1 zombie
> Cpu(s):  9.5%us,  0.5%sy,  0.0%ni, 88.2%id,  1.7%wa,  0.0%hi,  0.0%si,
> 0.0%st
> Mem:   3631900k total,  3378056k used,   253844k free,    25488k buffers
> Swap:  4192956k total,  100k used,  4192856k free,  2356588k cached
>
> [postg...@server01 etc]$ free
>              total       used       free     shared    buffers     cached
> Mem:       3631900    3174804     457096          0      14280    2086184
> -/+ buffers/cache:    1074340    2557560
> Swap:      4192956        108    4192848
> [postg...@server01 etc]$
>
> Both commands show values ranging from 2GB to 2.3GB for the cached size and
> the server has 3.5GB RAM. I usually see cached values close to the size of
> the RAM on other servers. It seems that something is consuming the RAM and
> not letting it be used as cache for Linux files, right? The shared_buffers
> setting (256MB) is not high and I cannot see a reason for this. Initially I
> thought the problem was because the system was running at runlevel 5, but
> now it's running at runlevel 3 and even so the cached size does not change.
>
> Any suggestions or directions I could follow to discover the reason?

If you run top, then hit M, and post the first 20 or so rows after
what you have here I can take a guess.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Ron Mayer
Drifting off topic so I'm no longer ccing the lists.

Sam Mason wrote:
> 
>> The perl Fuse::DBI module's example  sounds pretty similar to the
>> system you described where he "file" seems to be a column in a table.
>> http://www.rot13.org/~dpavlin/fuse_dbi.html
> 
> FUSE looks pretty easy to get going and I think I'd want more control
> over how files were presented than this gives so I'd probably end up
> rolling my own code.  Thanks for pointing out that FUSE though, not sure

FUSE is really more the framework that wraps around your code.

There are applications using fuse that expose gmail as a filesystem.

Here's a simple example that uses FUSE to expose a
perl HASH and a few hello-world-like perl functions.
http://cpansearch.perl.org/src/NOSEYNICK/Fuse-Simple-1.00/README

> why I'd not thought of it before.  I'll probably still never get around
> to it, but maybe I will!

It's actually easy enough that I wouldn't be surprised if you try it and
get it working just for fun, even if no one uses it.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Low values for cached size

2009-09-25 Thread Carlos Henrique Reimer
Hi Scott,

The top output with the M option:

top - 20:37:52 up  8:19,  2 users,  load average: 0.00, 0.00, 0.00
Tasks:  96 total,   1 running,  95 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.5%id,  0.3%wa,  0.0%hi,  0.0%si,
0.0%st
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4011 postgres  18   0  298m 276m 251m S    0  7.8   0:40.16 postgres
 2950 postgres  15   0  271m 232m 231m S    0  6.6   0:01.06 postgres
 2938 postgres  18   0  271m 7120 6748 S    0  0.2   0:00.31 postgres
 3012 root      18   0 10852 5644 1588 S    0  0.2   0:00.01 miniserv.pl
 2660 root      15   0 13448 4616  968 S    0  0.1   0:00.01 python
 2732 ntp       15   0  4316 4316 3312 S    0  0.1   0:00.04 ntpd
 2397 root      12  -3 10100 3860 2272 S    0  0.1   0:00.08 python
 2870 haldaemo  18   0  5700 3724 1608 S    0  0.1   0:01.51 hald
 5917 root      18   0 10424 2804 1388 S    0  0.1   0:00.00 httpd
 5992 root      15   0  9000 2724 2204 S    0  0.1   0:00.03 sshd
 5140 root      16   0  8996 2720 2204 S    0  0.1   0:00.04 sshd
 5918 apache    23   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5920 apache    23   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5921 apache    23   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5922 apache    23   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5923 apache    25   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5924 apache    25   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5925 apache    25   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 5926 apache    25   0 10424 2084  632 S    0  0.1   0:00.00 httpd
 2696 root      18   0  9676 1968 1360 S    0  0.1   0:00.00 cupsd
 2757 root      15   0  9028 1860  776 S    0  0.1   0:00.00 sendmail

Thank you!

2009/9/25 Scott Marlowe 

>  On Fri, Sep 25, 2009 at 3:28 PM, Carlos Henrique Reimer
>  wrote:
> > Hi,
> >
> > We're facing performance problems in a Linux box running CentOS release 5
> > (Final) and PostgreSQL 8.2.4. I've done some basic checks in the
> > configuration but everything looks fine to me. One weird behaviour I've
> > found is the cached size showed by the
> > "top" and "free" Linux commands:
> >
> > top - 08:32:17 up 3 days, 19:04,  1 user,  load average: 1.09, 1.07, 1.10
> > Tasks: 173 total,   2 running, 170 sleeping,   0 stopped,   1 zombie
> > Cpu(s):  9.5%us,  0.5%sy,  0.0%ni, 88.2%id,  1.7%wa,  0.0%hi,  0.0%si,
> > 0.0%st
> > Mem:   3631900k total,  3378056k used,   253844k free,    25488k buffers
> > Swap:  4192956k total,  100k used,  4192856k free,  2356588k cached
> >
> > [postg...@server01 etc]$ free
> >              total       used       free     shared    buffers     cached
> > Mem:       3631900    3174804     457096          0      14280    2086184
> > -/+ buffers/cache:    1074340    2557560
> > Swap:      4192956        108    4192848
> > [postg...@server01 etc]$
> >
> > Both commands show values ranging from 2GB to 2.3GB for the cached size and
> > the server has 3.5GB RAM. I usually see cached values close to the size of
> > the RAM on other servers. It seems that something is consuming the RAM and
> > not letting it be used as cache for Linux files, right? The shared_buffers
> > setting (256MB) is not high and I cannot see a reason for this. Initially I
> > thought the problem was because the system was running at runlevel 5, but
> > now it's running at runlevel 3 and even so the cached size does not change.
> >
> > Any suggestions or directions I could follow to discover the reason?
>
> If you run top, then hit M, and post the first 20 or so rows after
> what you have here I can take a guess.
>



-- 
Reimer
47-3457-0881 47-9183-0547 msn: carlosrei...@hotmail.com
skype: carlosreimer


Re: [GENERAL] stored procedure: RETURNS record

2009-09-25 Thread Merlin Moncure
On Fri, Sep 25, 2009 at 2:40 PM, Rob Marjot  wrote:
> Still no luck... To clarify a bit, take this example:
> CREATE OR REPLACE FUNCTION deserialize()
>   RETURNS record AS
> $BODY$ DECLARE
>    output RECORD;
>  BEGIN
>    SELECT * INTO output FROM (VALUES(1,2)) as tbl(first,second);
>    RETURN output;
>  END;$BODY$
>   LANGUAGE 'plpgsql' STABLE
>   COST 100;
> Now, I expect to have 2 columns, named "first" and "second". However, as
> posted before, the following query:
> SELECT * FROM deserialize();
> produces only ONE column (in one row, as one would clearly expect from the
> function's definition):
>  deserialize
> ---
>  (1,2)
> (1 row)
>
> Any thoughts on how to make sure multiple columns are returned; without
> specifying this in the function's prototype return clause?

In a sense, what you are asking is impossible. Having a function (even
a C one) return 'record' does not get you out of having to define the
output columns...either in the function definition with 'out' or as
part of the calling query.  Depending on what you are trying to do,
this could either matter a little or a lot.  If it matters, why don't
you post some more details of the context of your problem and see if a
better solution can be found?
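
For what it's worth, a minimal sketch of the 'out' parameter route (the
function name here is made up so it does not collide with the existing one):

CREATE OR REPLACE FUNCTION deserialize2(OUT first integer, OUT second integer)
AS $$
BEGIN
    first  := 1;
    second := 2;
END;
$$ LANGUAGE plpgsql STABLE;

SELECT * FROM deserialize2();  -- expands to the two named columns declared above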

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't find SRPMs for PG 8.1.18 on RHEL4

2009-09-25 Thread Devrim GÜNDÜZ
On Fri, 2009-09-25 at 09:40 -0500, Justin Pasher wrote:
> 
> Out of curiosity, how do you actually get to those links from the web
> site? I always seem to have trouble finding the link to the SRPMs
> tree.
> 

Tom complained about this a few days ago, too -- there is no direct link
from the website. However, if you have installed our repo RPMs, the yum repo
conf file has the URL to the SRPMs.

Still, I will add links tomorrow.
-- 
Devrim GÜNDÜZ, RHCE
Command Prompt - http://www.CommandPrompt.com 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
   http://www.gunduz.org


signature.asc
Description: This is a digitally signed message part


[GENERAL] The password specified does not meet the local or domain policy during install.

2009-09-25 Thread David Chell
I am unable to install postgresql on a Windows XP machine and am wondering if 
anyone has struck this problem or might have an idea how to solve it.

This is a clean install; I've never installed PostgreSQL on this machine
before. I have successfully installed it on another machine, which is a Windows
Server 2008 x64, but whenever I try to install it on the Windows XP (32-bit) laptop
I get the following error after entering the superuser/service password:

The password specified does not meet the local or domain policy. Check the 
minimum length, password complexity and password history requirements.

I've tried many combinations of passwords such as p05gr35ql.

My password policy is as follows:

Enforce history:3 passwords
Maximum age:  42 days
Minimum age:   0 days
Minimum length: 6 characters
Password must meet complexity requirements: disabled
Store password using reversible encryption: disabled


Thanks for your help.
David



Re: [GENERAL] generic modelling of data models; enforcing constraints dynamically...

2009-09-25 Thread Erik Jones


On Sep 24, 2009, at 2:07 PM, InterRob wrote:


I guess it IS quite overengineered indeed...

What I'm trying to do is to facilitate different fieldwork  
methodologies for archaeological research (on project basis); there  
is no final agreement on data structure and semantics; however, on a  
meta-level all choices are rational and can be modelled... Infact,  
all models can be related to each other: that's where the "hybrid"  
part comes in: I wish to implement the common denominator (90%) and  
then further extend this, enabing specific data model  
implementations -- including checks for data integrity.


Have you considered a non-relational, "schema-less" database such as  
MongoDB or Cassandra?  You're pretty much throwing out the relational  
features of this database anyways so it seems that it would make sense  
to use something more geared to that kind of work.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PQgetvalue() question

2009-09-25 Thread ayen . krida
Hi friends,
Can anyone help me with this?

In what encoding does the function PQgetvalue() return data?
ASCII? UTF8?

Thank you very very much.

Rgds,
Ayen Yang
Sent from my AXIS Worry Free BlackBerry® smartphone
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general