Re: [GENERAL] Psql Internal Variable question

2011-06-01 Thread Pavel Stehule
Hello

\set removes the outer quotes.

If you use PostgreSQL 9.0 or later, you can use

\set xxx 'Pavel ''Stěhule'

postgres=# \set
AUTOCOMMIT = 'on'
PROMPT1 = '%/%R%# '
PROMPT2 = '%/%R%# '
PROMPT3 = '>> '
VERBOSITY = 'default'
VERSION = 'PostgreSQL 9.1beta1 on x86_64-unknown-linux-gnu, compiled
by gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4), 64-bit'
DBNAME = 'postgres'
USER = 'pavel'
PORT = '5432'
ENCODING = 'UTF8'
EDITOR_LINENUMBER_SWITCH = '+'
HISTSIZE = '6000'
HISTFILE = '~/.psql_history-5432'
xxx = 'Pavel 'Stěhule'

postgres=# \echo :'xxx'

'Pavel ''Stěhule'
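
For your cur_db example, two possible approaches (just a sketch, not tested
here; inside a quoted \set value, '' gives one literal quote, and on 9.0+
:'var' quotes the value when it is expanded):

\set cur_db '''pgdb'''
\echo :cur_db
'pgdb'

\set cur_db pgdb
\echo :'cur_db'
'pgdb'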

Regards

Pavel

2011/6/1 Prafulla Tekawade :
> Hi,
>
> I am trying to use psql \set and \echo commands to set some internal
> variables.
> My variable's value starts with a single quote and ends with a single quote.
> That is the actual value of the variable.
>
> Eg.
> \set cur_db 'pgdb'
> \echo :cur_db
> -- I am expecting 'pgdb' here, but psql shows pgdb
>
> As mentioned above, I am expecting \echo to return 'pgdb' to me, but it
> returns pgdb.
> I tried using an escape character, but it did not work.
> Can somebody help me?
>
>
>
> --
> Thanks
> Prafulla
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need suggestion

2011-06-01 Thread Carl von Clausewitz
Hello Everyone,

I got a new project, with 100 users in Europe. In this case, I need to handle
production and sales processes and their documentation in PostgreSQL with PHP.
The load of the sales process is negligible, but every user produces 2
transactions in the production process, with 10-30 scanned documents (each
400 kB - 800 kB) and 30-50 high-resolution pictures (each 3-8 MB),
and they want to upload them to 'somewhere'. 'Somewhere' could be the server's
file system, with a link in the PostgreSQL database to the location of the
files (plus some metadata), or it could be the PostgreSQL database itself.
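
For the file-system variant, I have something like this in mind (just a
sketch with made-up names; the binary data itself would stay on the file
server and only its location and metadata go into the database):

CREATE TABLE document (
    -- hypothetical schema; adjust names/columns to the real metadata
    id           serial PRIMARY KEY,
    produced_at  timestamptz NOT NULL DEFAULT now(),
    doc_type     text NOT NULL,      -- e.g. 'scan' or 'photo'
    file_path    text NOT NULL,      -- location on the file server
    file_size    bigint,
    keywords     tsvector            -- searchable metadata / extracted text
);
CREATE INDEX document_produced_at_idx ON document (produced_at);
CREATE INDEX document_keywords_idx ON document USING gin (keywords);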

My question is: what is your opinion about storing the scanned
documentation and the pictures in the database? This is a huge amount of
data (between 188 MB and 800 MB daily, roughly 1 TB per year on average),
but it must be searchable, and any document must be retrievable within 1 hour.
All documentation must be stored for up to 5 years... That means the
database could be about 6-7 TB after 5 years, and then we can start to
archive documents. Any other data size is negligible.

If you suggest storing all of the data in PostgreSQL, what is your
recommendation about table and index structure, clustering, and archiving?

Thank you in advance!
Regards,
Carl


[GENERAL] Returning from insert on view

2011-06-01 Thread Aleksey Chirkin
Hello!

I need your advice.
My problem is ensuring the right RETURNING result from an INSERT on a view.

For example, I have two tables:

CREATE TABLE country (id serial, nm text);
CREATE TABLE city (id serial, country_id integer, nm text);

And one view on table "city", which joins table "country" and adds a
country_nm column.

CREATE VIEW city_view AS
  SELECT city.id, city.nm, city.country_id, country.nm AS country_nm
    FROM city
    JOIN country ON city.country_id = country.id;

I have two options for producing the RETURNING result of an INSERT on the view:

1) Create rule:

CREATE RULE ins AS ON INSERT
  TO city_view
  DO INSTEAD
  INSERT INTO city (nm, country_id) VALUES (NEW.nm, NEW.country_id)
  RETURNING id, nm, country_id,
    (SELECT nm FROM country WHERE id = city.country_id) AS country_nm;

2) Create trigger on view (for PostgreSQL 9.1):

CREATE FUNCTION city_view_insert()
  RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO city (nm, country_id)
    VALUES (NEW.nm, NEW.country_id)
    RETURNING id INTO NEW.id;

  SELECT * INTO NEW FROM city_view WHERE id = NEW.id;

  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

CREATE TRIGGER on_insert INSTEAD OF INSERT  ON city_view
  FOR EACH ROW
  EXECUTE PROCEDURE city_view_insert();

It looks like a trick and slows down the insert, but it reads better and
avoids the complexity in the RETURNING list.

Perhaps there is another way (maybe a trick) to produce the RETURNING
result of an insert on the view, without manually assembling the
returned columns?

Regards,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Consistency of distributed transactions

2011-06-01 Thread Pete Chown

Craig Ringer wrote:


Distributed transactions will give you atomicity if done right - with
two-phase commit (2PC) - but AFAIK will *NOT* give you consistency
across the databases in question.


That's useful to know -- thanks.  At least now I know my idea won't 
work, so I can forget about it and try to think of something else. :-)


Pete


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Mixed up protocol packets in server response?

2011-06-01 Thread Michal Politowski
I have an instance of what looks like a weird mixup of data returned by the 
server.
This is observed in a Java application using the 9.0-801.jdbc4 JDBC driver 
connecting to an 8.3.4 server (yes, it's old) running on Solaris.

The application hung waiting for the result of one
select * from a_table where a_field = parameter;

Looking at it with a debugger I see that:
1. it waits in:
SocketInputStream.socketRead0(FileDescriptor, byte[], int, int, int) 
line: not available [native method]
SocketInputStream.read(byte[], int, int) line: 129  
VisibleBufferedInputStream.read(byte[], int, int) line: 219 
PGStream.Receive(byte[], int, int) line: 460
PGStream.ReceiveTupleV3() line: 365 
QueryExecutorImpl.processResults(ResultHandler, int) line: 1814 
QueryExecutorImpl.execute(Query, ParameterList, ResultHandler, int, 
int, int) line: 257 
Jdbc4PreparedStatement(AbstractJdbc2Statement).execute(Query, 
ParameterList, int) line: 500 
Jdbc4PreparedStatement(AbstractJdbc2Statement).executeWithFlags(int) 
line: 388  
Jdbc4PreparedStatement(AbstractJdbc2Statement).executeQuery() line: 273 
because it expects a field of size = 825767394 (l_size in ReceiveTupleV3)
which is much more than the whole result should be.

2. then looking at the answer array in ReceiveTupleV3:
The first few fields have the expected values.
Then answer[3] has the expected
size, but somewhere in the middle it changes from field 3 of the first row
of the result to the middle of field 3 of the last row of the result.
Then, of course, there are 4 bytes of this field that, read as a 32-bit integer,
have the value 825767394.
Then answer[4] obviously has 825767394 elements. And it continues with
field 3 of the last row, and then it contains the 4-byte length and the
value of field 4 (the last) of the last row of the result -- so the end of a
DataRow message.
Then comes a CommandComplete message with the tag "SELECT",
then a ReadyForQuery message with the status indicator "T".
And then there is the rest of field 3 of the first row, then the length and
value of field 4 of the first row, and then the DataRow messages for
the following rows of the result -- the last one cut off at the place where
it got mixed up before.

So it looks like the server wanted to send
D row 1, D row 2, D row 3, D row 4, C SELECT, Z T
but the application sees
D ro, ow 4, C SELECT, Z T, w 1, D row 2, D row 3, D r

What may be the cause of this weird problem? Is it some known or unknown bug in
8.3.4 or is the application/Java side more suspected?

-- 
Michał Politowski
Talking has been known to lead to communication if practiced carelessly.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] troubles with initdb

2011-06-01 Thread jlhgis
Great eyes!

It was copied & pasted out of some running notes & screenshots of the
process I was keeping in an OO file. You're a lot more "helpful" than
it was :)  Thanks so much for taking the time to look at this.

Now, on to the next hurdle - getting pg_ctl to run as an automated service...


> Aaaah... your typeface just gave me a clue.
> It's impossible for me to tell if the issue above
> is just your mail client being "helpful" or if it was
> present in the original command line, but look at
> the hyphens before "locale=C".
>
> I bet you wrote your original command line in a word
> processor like MS Word, or copied and pasted it via
> there at some point. Maybe even Outlook does it.
> Either way, that program was "helping" you.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mixed up protocol packets in server response?

2011-06-01 Thread Tom Lane
Michal Politowski  writes:
> 2. then looking at the answer array in ReceiveTupleV3:
> ...
> So it looks like the server wanted to send
> D row 1, D row 2, D row 3, D row 4, C SELECT, Z T
> but the application sees
> D ro, ow 4, C SELECT, Z T, w 1, D row 2, D row 3, D r

> What may be the cause of this weird problem? Is it some known or unknown bug 
> in
> 8.3.4 or is the application/Java side more suspected?

It's just about impossible to believe that the server sent that, because
(1) it generates and sends those messages one at a time, and (2) nothing
of the sort has ever been reported before.  My money is on some foulup
in buffer-wrangling on the client side.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] currval = currval+1

2011-06-01 Thread salah jubeh
I have the following SQL statements

BEGIN;
-- account_id is a sequence
INSERT INTO account (name) VALUES ('test customer'||random()::text);
-- account_id is a foreign key
INSERT INTO account_detail (account_id,..)  VALUES ((SELECT * from 
currval('account_acccount_id_seq')), );
COMMIT;

ERROR:  insert or update on table "account_detail" violates foreign key 
constraint ...



I have executed the above without a transaction but in the same session, and the
issue was that currval pointed to the inserted account's id + 1.

I have worked around it by subtracting 1 in the second insert statement. Why am I
having this problem? I have used currval many times and this is the
first time I have seen this behavior.

Regards

Re: [GENERAL] currval = currval+1

2011-06-01 Thread Merlin Moncure
On Wed, Jun 1, 2011 at 9:27 AM, salah jubeh  wrote:
> I have the following SQL statements
>
> BEGIN;
> -- account_id is a sequence
> INSERT INTO account (name) VALUES ('test customer'||random()::text);
> -- account_id is a foreign key
> INSERT INTO account_detail (account_id,..)  VALUES ((SELECT * from
> currval('account_acccount_id_seq')), );
> COMMIT;
>
> ERROR:  insert or update on table "account_detail" violates foreign key
> constraint ...
>
>
>
> I have executed the above without a transaction but in the same session, and
> the issue was that the current value of the account points to the curval + 1
>
> I have solved the above by subtracting 1 in the second insert statement. Why
> I am having this problem, I have used current value many time and this is
> the first time I have this behavior.


works for me -- are you sure you don't have a trigger or something
else going on behind the scenes?

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] currval = currval+1

2011-06-01 Thread salah jubeh
I have some rules on the table; I dropped them and everything went fine.
The rule is as follows:

CREATE OR REPLACE RULE status_change_ins AS
    ON INSERT TO account DO
    INSERT INTO account_status_change_log (account_id, account_status_id, status_change_date)
    VALUES (new.account_id, new.account_status_id, now());

I do not know what is happening here, but this is strange behavior.

Regards

From: Merlin Moncure 
To: salah jubeh 
Cc: pgsql-general@postgresql.org
Sent: Wed, June 1, 2011 4:54:36 PM
Subject: Re: [GENERAL] currval = currval+1

On Wed, Jun 1, 2011 at 9:27 AM, salah jubeh  wrote:
> I have the following SQL statements
>
> BEGIN;
> -- account_id is a sequence
> INSERT INTO account (name) VALUES ('test customer'||random()::text);
> -- account_id is a foreign key
> INSERT INTO account_detail (account_id,..)  VALUES ((SELECT * from
> currval('account_acccount_id_seq')), );
> COMMIT;
>
> ERROR:  insert or update on table "account_detail" violates foreign key
> constraint ...
>
>
>
> I have executed the above without a transaction but in the same session, and
> the issue was that the current value of the account points to the curval + 1
>
> I have solved the above by subtracting 1 in the second insert statement. Why
> I am having this problem, I have used current value many time and this is
> the first time I have this behavior.


works for me -- are you sure you don't have a trigger or something
else going on behind the scenes?

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] currval = currval+1

2011-06-01 Thread Merlin Moncure
On Wed, Jun 1, 2011 at 10:22 AM, salah jubeh  wrote:
> I have some rules on the table and I have dropped them and everything went
> fine. the rule is as follow
>
> CREATE OR REPLACE RULE status_change_ins AS
>     ON INSERT TO account  DO  INSERT INTO account_status_change_log
> (account_id, account_status_id, status_change_date)
>   VALUES (new.account_id, new.account_status_id, now());
>
> I do not know what is happing here, but this is a strange behavior.

'rules suck' is the problem :(.  uncontrollable re-execution of
volatile functions is just *one* issue with them. good news: 9.1
completely fixes this with view update triggers. In the meantime,
consider dropping the rule and making a function which does the
insert and updates the log, or doing it in the client.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] currval = currval+1

2011-06-01 Thread Tom Lane
salah jubeh  writes:
> I have some rules on the table and I have dropped them and everything went 
> fine. 

Rules are macros, and have the usual issues with multiple evaluations of
multiply-referenced arguments.

> CREATE OR REPLACE RULE status_change_ins AS
> ON INSERT TO account  DO  INSERT INTO account_status_change_log 
> (account_id, 
> account_status_id, status_change_date) 
>   VALUES (new.account_id, new.account_status_id, now());

You'd be far better off doing that with a trigger.
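
Roughly like this, using the names from your rule (a sketch, untested, and
only after dropping the status_change_ins rule so the log is not written twice):

-- trigger-based equivalent of the status_change_ins rule
CREATE OR REPLACE FUNCTION log_account_status_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO account_status_change_log (account_id, account_status_id, status_change_date)
    VALUES (new.account_id, new.account_status_id, now());
    RETURN new;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_status_change_log_trg
    AFTER INSERT ON account
    FOR EACH ROW EXECUTE PROCEDURE log_account_status_change();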

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Some clarification about TIMESTAMP

2011-06-01 Thread Steve Crawford

On 05/31/2011 09:45 AM, hernan gonzalez wrote:

I'm doing some tests with date-time related fields to design my web
application.
I was already dissatisfied with PostgreSQL's handling of timezone
concepts (an issue
already discussed here -- not entirely PG's fault, rather a SQL thing),
and I vehemently
reject the idea of a global server-side timezone configuration having
any influence on
my DB layer, so I am planning to always use the plain TIMESTAMP data type
(with no TIME ZONE).

What I want is for a TIMESTAMP field to be conceptually equivalent to
a plain {YEAR, MONTH, DAY, HH, MM, SS}
data tuple, i.e. a "local time". To clarify, for me "local time" =
"timezone unknown", which is NOT the same
as assuming some default (OS or database) timezone. It might very well
happen that I store in a -- say -- ALARM_TIME column two datetimes
that correspond to users that have different ("local") timezones. So
I want '2011-05-31 10:00:00' in this field
to mean 10:00 AM in some UNKNOWN timezone (that of a particular user).

In this scenario, I assumed the natural convention is: store just a
UTC time, using a TIMESTAMP. I believe that's the idea
of a plain TIMESTAMP.

However, I'm not sure if I can get totally timezone-independent behaviour:
All is well - everything is under your control. It just takes a bit of 
time to understand how time calculations work.

CREATE TABLE t1 (  ts timestamp without time zone);
db=# insert into t1 values('1970-01-01 00:00:00');
INSERT 0 1
db=# select ts,extract(epoch from ts) from t1;
         ts          | date_part
---------------------+-----------
 1970-01-01 00:00:00 |     21600

I was dismayed to see this; I assumed that my insert had stored a unix
timestamp of 0.
It seems not?


Understand that you have basically requested, whether you realized it or
not, "at what instant, displayed as a UNIX epoch, will it be midnight
January 1, 1970 in this time zone". You, as the programmer, have
complete control over what time zone is used for such conversions, and
PostgreSQL will do the work for you.


If you reverse the calculation, you will see that indeed it is midnight 
January 1, 1970 in your location:


select abstime(21600);
         abstime
-------------------------
 1970-01-01 00:00:00-06

Were you in England:

set timezone to 'UTC';
SET
select ts,extract(epoch from ts) from t1;
         ts          | date_part
---------------------+-----------
 1970-01-01 00:00:00 |         0

Note: Most calculations that ask for a timestamp *without* time zone at 
a specific time zone return a timestamp *with* time zone - you are 
essentially adding timezone information to get a point in time. 
Conversely, most calculations that ask for a timestamp *with* time zone 
(point in time) at a specific zone return a timestamp *without* time 
zone - you told it the time zone so you must only need the time stamp.
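
For example (a quick sketch, with the results shown assuming the same -06
session time zone as above):

select timestamp '1970-01-01 00:00:00' at time zone 'UTC';
-- 1969-12-31 18:00:00-06  (a point in time, displayed in the session zone)

select timestamptz '1970-01-01 00:00:00+00' at time zone 'UTC';
-- 1970-01-01 00:00:00     (the wall-clock time in UTC, no zone attached)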



But on the other hand, if I modify the server timezone, what gets
changed is the epoch calculation!

asdas=# SET TIMEZONE TO 'XXX11';
SET
asdas=# select ts,extract(epoch from ts) from t1;
         ts          | date_part
---------------------+-----------
 1970-01-01 00:00:00 |     39600

Why? What is happening here?



See above re: the calculation but note that you are not modifying the 
server, you are telling the server the time zone to use for date/time 
calculations for this client - no other client is affected.


So choose the data type that is appropriate for your app. If you need to 
represent specific points in time (the shuttle launched at..., the 
teleconference starts at...) use a timestamp with time zone. For data 
that is more "time of day" relevant (lunch is served at 11:30) use a 
timestamp without time zone.


Before condemning PostgreSQL's date/time handling, read and reread the 
section on date and time data-types and calculations. They are very 
powerful, useful and logical once understood.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] proper regex_replace() syntax

2011-06-01 Thread Geoffrey Myers

I want to use regex_replace() to replace characters in multiple records.

What I would like to do is this:

select regex_replace((select fname from table), 'z', 'Z'));


The problem is, the subquery returns more than one row.

So, is there a way to do what I'm trying to do?  That is, replace the 
same character in multiple records using regex_replace() ?


In reality, we are trying to change characters like the 1/2 character to 
the three characters '1/2'.


Thanks for any assistance.


--
Until later, Geoffrey

"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them."
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How do I repair a corrupted system table in PostgreSQL?

2011-06-01 Thread David Hamilton
I just attempted to do a pg_upgrade on a fairly large PostgreSQL database
cluster from version 8.3.0 to version 9.0.4. Everything looked like it was
going to work just fine until the new schema was being created on the target
cluster. It died trying to create a group role twice for some reason.

After looking at all the scripts, it was quite obvious that it was
duplicating a group role 4 times. I brought the 8.3.0 database back up and
it was very apparent that there was a row repeated in the pg_authid table.

I tried bringing the database up in single-user mode in order to try to REINDEX
TABLE pg_authid. This failed when trying to create the new index with
duplicated values. I tried deleting the offending group role. This removed
one of the 4 rows in pg_authid, but just seemed to confuse things further.

I saw mention that running a full vacuum on the table may repair such
corruption, but I have little hope of that working. So, while the data
restores, I'll fish for ideas.
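
One idea I may still try, with backups in hand: since dropping the role only
removed one of the duplicate rows, the remaining ones can be told apart by
their physical row ids and removed individually in single-user mode before
reindexing. A sketch, with a hypothetical role name and ctid:

-- 'dup_role' and the ctid value '(0,7)' are placeholders
SELECT ctid, xmin, rolname FROM pg_authid WHERE rolname = 'dup_role';
DELETE FROM pg_authid WHERE rolname = 'dup_role' AND ctid = '(0,7)';
REINDEX TABLE pg_authid;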


Re: [GENERAL] proper regex_replace() syntax

2011-06-01 Thread Ken Tanzer
I think this is the syntax you want:

SELECT regexp_replace(fname,'z','Z') FROM table;

On Wed, Jun 1, 2011 at 10:22 AM, Geoffrey Myers  wrote:

> I want to use regex_replace() to replace characters in multiple records.
>
> What I would like to do is this:
>
> select regex_replace((select fname from table), 'z', 'Z'));
>
>
> The problem is, the subquery returns more then one row.
>
> So, is there a way to do what I'm trying to do?  That is, replace the same
> character in multiple records using regex_replace() ?
>
> In reality, we are trying to change characters like the 1/2 character to
> the three characters '1/2'.
>
> Thanks for any assistance.
>
>
> --
> Until later, Geoffrey
>
> "I predict future happiness for America if they can prevent
> the government from wasting the labors of the people under
> the pretense of taking care of them."
> - Thomas Jefferson
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] proper regex_replace() syntax

2011-06-01 Thread Rick Genter
On Wed, Jun 1, 2011 at 10:22 AM, Geoffrey Myers  wrote:

> I want to use regex_replace() to replace characters in multiple records.
>
> What I would like to do is this:
>
> select regex_replace((select fname from table), 'z', 'Z'));
>
>
> The problem is, the subquery returns more then one row.
>
> So, is there a way to do what I'm trying to do?  That is, replace the same
> character in multiple records using regex_replace() ?
>

I think what you want is:

SELECT regexp_replace(fname, 'z', 'Z') FROM table;

This should return a recordset where each row has one column which is the
result of regexp_replace() on the corresponding row of table.
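
If the goal is to actually change the stored values (such as the '½'
character mentioned earlier), an UPDATE along these lines should do it --
just a sketch, with "mytable" standing in for your real table name and
the 'g' flag making it replace every occurrence:

-- mytable/fname are placeholders for the real table and column
UPDATE mytable
   SET fname = regexp_replace(fname, '½', '1/2', 'g')
 WHERE fname LIKE '%½%';
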
-- 
Rick Genter
rick.gen...@gmail.com


[GENERAL] SELECT to_timestamp crash PostgreSQL 9.1beta1

2011-06-01 Thread Paolo Saudin
Trying to convert unix time to date time format, I encountered a database
crash.

Environment  : WINDOWS 7 Professional - Service Pack1

--
-- PostgreSQL 8.4.7, compiled by Visual C++ build 1400, 32-bit 
--
SELECT to_timestamp(1306760400);
2011-05-30 15:00:00+02

--
-- PostgreSQL 9.1beta1, compiled by Visual C++ build 1500, 32-bit
--
SELECT to_timestamp(1306760400);
** Errore **
--
SELECT version();
** Errore **
no connection to the server
--
Trying to re-click on the connection in pgAdmin III:
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.


Here are the log files :

FILE : postgresql-2011-06-01_183350.log
2011-06-01 18:33:51 CEST LOG:  database system was shut down at 2011-06-01
06:24:09 CEST
2011-06-01 18:33:51 CEST FATAL:  the database system is starting up
2011-06-01 18:33:51 CEST LOG:  database system is ready to accept
connections
2011-06-01 18:33:51 CEST LOG:  autovacuum launcher started
2011-06-01 21:01:01 CEST LOG:  server process (PID 3552) was terminated by
exception 0xC0000005
2011-06-01 21:01:01 CEST HINT:  See C include file "ntstatus.h" for a
description of the hexadecimal value.
2011-06-01 21:01:01 CEST LOG:  terminating any other active server processes
2011-06-01 21:01:01 CEST WARNING:  terminating connection because of crash
of another server process
2011-06-01 21:01:01 CEST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-06-01 21:01:01 CEST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-06-01 21:01:01 CEST WARNING:  terminating connection because of crash
of another server process
2011-06-01 21:01:01 CEST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-06-01 21:01:01 CEST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-06-01 21:01:01 CEST LOG:  all server processes terminated;
reinitializing
2011-06-01 21:01:11 CEST FATAL:  pre-existing shared memory block is still
in use
2011-06-01 21:01:11 CEST HINT:  Check if there are any old server processes
still running, and terminate them.


FILE : postgresql-2011-06-01_210902.log
2011-06-01 21:09:02 CEST FATAL:  the database system is starting up
2011-06-01 21:09:03 CEST LOG:  database system was interrupted; last known
up at 2011-06-01 18:33:51 CEST
2011-06-01 21:09:03 CEST LOG:  database system was not properly shut down;
automatic recovery in progress
2011-06-01 21:09:03 CEST LOG:  consistent recovery state reached at
0/16E2DA0
2011-06-01 21:09:03 CEST LOG:  record with zero length at 0/16E2DA0
2011-06-01 21:09:03 CEST LOG:  redo is not required
2011-06-01 21:09:04 CEST FATAL:  the database system is starting up
2011-06-01 21:09:04 CEST LOG:  database system is ready to accept
connections
2011-06-01 21:09:04 CEST LOG:  autovacuum launcher started

Thanks,
paolo saudin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Column aliases in having or where for Postgres 8.3

2011-06-01 Thread Prafulla Tekawade
Hi,

Does anyone know if I can use column aliases in a HAVING condition?
Something like this:

pgdb=# select sum(c_acctbal) as p from customer having p > 1000;
ERROR:  column "p" does not exist
LINE 1: select sum(c_acctbal) as p from customer having p > 1000;
                                                         ^
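
The workarounds I can think of are repeating the expression or wrapping the
query in a subquery, e.g.:

select sum(c_acctbal) as p from customer having sum(c_acctbal) > 1000;
-- or
select p from (select sum(c_acctbal) as p from customer) s where p > 1000;

but I was hoping to reference the alias directly.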

-- 
Best Regards,
Prafulla V Tekawade


[GENERAL] Table with active and historical data

2011-06-01 Thread Robert James
I have a table with a little active data and a lot of historical data.
 I'd like to be able to access the active data very quickly - quicker
than an index.  Here are the details:

1. Table has about 1 million records
2. Has a column active_date - on a given date, only about 1% are
active.  active_date is indexed and clustered on.
3. Many of my queries are WHERE active_date = today.  Postgres uses
the index for these, but they still take quite a lot of time.  I repeat
these queries regularly.
4. I'd like to somehow presort or partition the data so that Postgres
doesn't have to do an index scan each time.  I'm not sure how to do
this -- ideas?  I know it can be done with inheritance and triggers (
http://stackoverflow.com/questions/994882/what-is-a-good-way-to-horizontal-shard-in-postgresql
), but that method looks a little too complex for me.  I'm looking for
something simple.
5. Finally, I should point out that I still do a large number of
queries on historical data as well.

What do you recommend? Ideas? Also: Why doesn't cluster on active_date
solve the problem? Specifically, if I run SELECT * FROM full_table
WHERE active_date = today, I get a cost of 3500.  If I first select
those records into a new table, and then do SELECT * on the new table,
I get a cost of 64.  Why is that? Why doesn't clustering pregroup
them?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mixed up protocol packets in server response?

2011-06-01 Thread Craig Ringer

On 1/06/2011 9:06 PM, Michal Politowski wrote:


What may be the cause of this weird problem? Is it some known or unknown bug in
8.3.4 or is the application/Java side more suspected?


It'd be really helpful if you could collect and examine a trace of the 
client/server communication using WireShark. That way you can confirm 
whether it is (as Tom suspects) the client side mangling its buffers, or 
whether the server really did send the nonsensical sequence.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table with active and historical data

2011-06-01 Thread Merlin Moncure
On Wed, Jun 1, 2011 at 7:30 PM, Robert James  wrote:
> I have a table with a little active data and a lot of historical data.
>  I'd like to be able to access the active data very quickly - quicker
> than an index.  Here are the details:
>
> 1. Table has about 1 million records
> 2. Has a column active_date - on a given date, only about 1% are
> active.  active_date is indexed and clustered on.
> 3. Many of my queries are WHERE active_date = today.  Postgres uses
> the index for these, but still lakes quite a lot of time.  I repeat
> these queries regularly.

can we see a query and its 'explain analyze' that you think takes a lot of time?

> 4. I'd like to somehow presort or partition the data so that Postgres
> doesn't have to do an index scan each time.  I'm not sure how to do
> this? Idea?  I know it can be done with inheritance and triggers (
> http://stackoverflow.com/questions/994882/what-is-a-good-way-to-horizontal-shard-in-postgresql
> ), but that method looks a little too complex for me.  I'm looking for
> something simple.

an index scan should be good enough, but if it isn't you can look at
partitioning. let's make sure that's really necessary before doing it
however.
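
for reference, the usual inheritance-based shape of that (hypothetical names;
the CHECK constraints chosen to match your active_date filter):

-- full_table_active / full_table_history are made-up child table names
CREATE TABLE full_table_active (
    CHECK (active_date >= DATE '2011-06-01')
) INHERITS (full_table);

CREATE TABLE full_table_history (
    CHECK (active_date < DATE '2011-06-01')
) INHERITS (full_table);

-- with constraint_exclusion enabled, a query such as
--   SELECT * FROM full_table WHERE active_date = DATE '2011-06-02';
-- can skip full_table_history entirely.  new rows have to be routed into
-- the right child table (e.g. with a trigger, as in the SO link above).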

> 5. Finally, I should point out that I still do a large number of
> queries on historical data as well.
>
> What do you recommend? Ideas? Also: Why doesn't cluster on active_date
> solve the problem? Specifically, if I run SELECT * FROM full_table
> WHERE active_date = today, I get a cost of 3500.  If I first select
> those records into a new table, and then do SELECT * on the new table,
> I get a cost of 64.  Why is that? Why doesn't clustering pregroup
> them?

clustering is a tool that allows you to control which tuples are
grouped together on pages -- if you are pulling up more than one tuple
at a time, hopefully you can reduce the total number of pages you have to
scan by doing it.  The bigger the table is, the more that matters.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT to_timestamp crash PostgreSQL 9.1beta1

2011-06-01 Thread Craig Ringer

On 06/02/2011 03:15 AM, Paolo Saudin wrote:

Trying to convert unix time to date time format, I encountered a database
crash.


This is a known issue, and will be addressed in the next beta. Detail 
from an earlier post by Tom Lane:


Tom Lane wrote:
> This is the known problem with timezone abbreviations not being
> initialized correctly on Windows --- anything involving interpreting a
> "keyword" in datetime input will go belly up.  See commits 2e82d0b39,
> e05b86644.
>
> regards, tom lane

Those commits have a more detailed explanation, and can be found here:

http://git.postgresql.org/gitweb?p=postgresql.git;a=commit;h=2e82d0b396473b595a30f68b37b8dfd41c37dff8

http://git.postgresql.org/gitweb?p=postgresql.git;a=commit;h=e05b866447899211a0c2df31bf0671faac4fc3e5

but the short version appears to be "Fixed in the next version".

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] DBD::PG and long running queries and tcp stack timeout

2011-06-01 Thread Clemens Schwaighofer
Hi,

I have a script that runs a query on a remote server. The query will
take quite some time to finish.
Now the problem is that the TCP stack will time out before the query is finished.

I am running the query as async and have a loop where I query the
pg_ready status every 5 seconds.

Is there any way to send some NOOP or anything so this connection does
not time out? Just reading pg_ready seems to do nothing, probably
because it only gets filled in once the query is done.

Running the query non-async has the same timeout issues.
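
The only lead I have found so far is TCP keepalive: PostgreSQL exposes the
socket keepalive knobs as server settings, which should keep the otherwise
silent connection alive through firewalls/NAT while the query runs. A sketch
(untested on my side; the values are examples and can also go into
postgresql.conf):

SET tcp_keepalives_idle = 60;      -- start probing after 60s of silence
SET tcp_keepalives_interval = 10;  -- probe every 10 seconds
SET tcp_keepalives_count = 5;      -- give up after 5 unanswered probes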

-- 
★ Clemens 呉 Schwaighofer
★ IT Engineer/Web Producer/Planning
★ E-Graphics Communications SP Digital
★ 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN
★ Tel: +81-(0)3-3545-7706
★ Fax: +81-(0)3-3545-7343
★ http://www.e-gra.co.jp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general