[GENERAL] PostgreSQL and Java on WindowsXP

2008-03-31 Thread Vismaster46
Hello!

I need to install PostgreSQL on my local Windows XP machine to test
some Java applications.

The trouble is that I cannot connect to PostgreSQL from my Java
application; the connection fails every time. What's the problem? I
have not installed the PL/Java package because my PostgreSQL installer
disabled that option... could that be the issue???

Thank you in advance.

Bye!



Re: [GENERAL] PostgreSQL and Java on WindowsXP

2008-03-31 Thread Damian Carey
On Mon, Mar 31, 2008 at 8:12 AM, Vismaster46 <[EMAIL PROTECTED]> wrote:
>  I need to install on my local Windows XP machine PostgreSQL to test
>  The trouble is that I cannot connect to PostgreSQL via Java
>  application, because the connection fails every time...what's the
>  problem?

Sorry, but you have not provided any clues!
- What connection is failing? Any error messages / stack traces available?
- Can you connect to the DB via (say) pgAdminIII ???
- How have you set it all up???
- How is it set up when not on XP??? (I guess it works there?)

We use PG / Java / XP every day and it works like a charm.  We use Hibernate.

>I have not installed PL/Java package cause my Postgre
>  installer disabled this option...it is the matter???
No - nothing to do with it.

Cheers,
-Damian



Re: [GENERAL] PostgreSQL and Java on WindowsXP

2008-03-31 Thread Albe Laurenz
Vismaster46 wrote:
> I need to install on my local Windows XP machine PostgreSQL to test
> some Java applications.
> 
> The trouble is that I cannot connect to PostgreSQL via Java
> application, because the connection fails every time...what's the
> problem? I have not installed PL/Java package cause my Postgre
> installer disabled this option...it is the matter???

What statement do you use to connect?
Have you verified that the server is up and running?
To what value did you set the parameter "listen_addresses" in postgresql.conf?
What is the error message?
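
If you can connect locally with psql, a quick sanity check is something
along these lines (just a sketch):

    SELECT version();        -- proves the server is up and reachable
    SHOW listen_addresses;   -- must cover the address the Java application connects to
    SHOW port;               -- must match the port the application uses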

Yours,
Laurenz Albe



Re: [GENERAL] PostgreSQL and Java on WindowsXP

2008-03-31 Thread josep porres
maybe you don't need PL/Java...

You are trying to connect using JDBC, right?
Take a look at http://jdbc.postgresql.org/doc.html
Which errors do you get?
Is the server listening on the standard port?

Anyway, you have to provide more information.

Josep

2008/3/30, Vismaster46 <[EMAIL PROTECTED]>:
>
> Hello!
>
> I need to install on my local Windows XP machine PostgreSQL to test
> some Java applications.
>
> The trouble is that I cannot connect to PostgreSQL via Java
> application, because the connection fails every time...what's the
> problem? I have not installed PL/Java package cause my Postgre
> installer disabled this option...it is the matter???
>
> Thank you in advance.
>
> Bye!
>


Re: [GENERAL] shortcut for select * where id in (arrayvar)

2008-03-31 Thread Ivan Sergio Borgonovo
On Sun, 30 Mar 2008 21:40:52 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:

> Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> > for _row in
> > select err, msg from errortable where err in (errcode)
> 
> > where errcode is an array.
> > That syntax doesn't work...

> In recent PG releases it will work as "WHERE err = ANY (errcode)"
> but note that there is *no* guarantee that the select will deliver
> the rows in the same order the array elements are in.


That's exactly what I was looking for.

array_to_string is not as type safe as ANY and I didn't check how it
may behave in a situation similar to:

select * from array_to_string(ARRAY['ciao','pota\'z'],',');

What I came up with is

create table errors (errcode int, errmsg varchar(255));
insert into errors values(1,'ciao1');
insert into errors values(2,'ciao2');
insert into errors values(3,'ciao3');
insert into errors values(4,'ciao4');


create or replace function auz(out _errcode int, out _errmsg text)
returns setof record as $$
declare
    __errcode int[];
    _row record;
begin
    -- these should be function calls
    -- eg. __errcode[1]:=somechk(param1, param2);
    __errcode[1]:=1;
    __errcode[2]:=3;
    for _row in
        select errcode, errmsg
          from errors where errcode = any (__errcode) loop
        _errcode:=_row.errcode;
        _errmsg:=_row.errmsg;
        return next;
    end loop;
    return;
end;
$$ language plpgsql;

I'm still thinking if this should be the way to report a list of
failed tests.

Maybe I could just hard code the error message in the checking
function.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Schema design question

2008-03-31 Thread Ron Mayer

Ben wrote:
> I'm working on a project which requires me to keep track of objects,
> each of which can have an arbitrary number of attributes. Although there
> will be many attributes that an object can have,...
> Anyway, this seems like a common problem without a perfect solution, and
> I'm sure people must have hindsight opinions on how they solved it. Your
> thoughts?


If each attribute can only occur once for any object, and if your
queries are based more on exact matches of the values rather than
range queries, you might want to look into the hstore module:
http://www.postgresql.org/docs/current/static/hstore.html
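
For example, a minimal sketch (assuming the contrib hstore module has been
installed in the database; table and column names are made up):

    create table objects (
        id    serial primary key,
        attrs hstore
    );

    insert into objects (attrs) values ('color=>red, weight=>10');

    -- exact-match lookup on one attribute
    select id from objects where attrs -> 'color' = 'red';

    -- a GiST index supports containment queries like this one
    create index objects_attrs_idx on objects using gist (attrs);
    select id from objects where attrs @> 'color=>red';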

If attributes can occur more than once, you might even want to
serialize the whole object as some text format (yaml, xml, etc)
instead.







[GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-03-31 Thread Stuart Brooks
I have a table with about 15 million rows which is constantly having 
tuples added to the head and deleted in blocks from the tail to maintain 
the size. The dead tuple count in pg_stat_user_tables tracks the deleted 
rows fairly accurately until an auto-ANALYZE is done in the background 
at which point the value it calculates is wrong by a factor of 2-3 times 
(calculated value is 30-50% of the correct value), which completely 
throws off the auto-VACUUMing. An example is that the auto-VACUUM only ran 
when there were 12 million (real) dead rows! Any ideas?
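
(For reference, the counts in question come from pg_stat_user_tables,
e.g. a query along these lines, with an illustrative table name:

   select relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
     from pg_stat_user_tables
    where relname = 'mytable';
)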


Thanks
Stuart

PS. Running 8.3.1 on NetBSD 3.



Re: [GENERAL] Using tables in other PostGreSQL database

2008-03-31 Thread Daniel Verite

Scott Marlowe wrote:

> >  > Even in Oracle you don't have cross db queries.
> >
> >  On the contrary you do. You can refer to objects in another database by
> >  [EMAIL PROTECTED], very useful to mix local and remote data in no
> >  time. DBLINK_NAME represents a connection to another database.
> >  What you don't have is OTHERDB.OBJECT_NAME to refer to a different
> >  database within the same instance, because there is only one database
> >  in an Oracle instance.
>
> What you are talking about are cross schema references, not cross db.

No, I'm definitely referring to cross db, not cross schema.
See this piece from Oracle documentation:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10759/sql_elements009.htm#i27761

> Oracle instances can have > 1 database, it's just not that common.  I
> know this because we had an internal instance at the last company I
> worked at that had 2 databases in it, each with their own schemas.  Or
> maybe they somehow had two instances of oracle running on the same
> box.

CREATE DATABASE is to Oracle what initdb is to PG, it's something you
do once per instance. So no, an Oracle instance doesn't have >1
databases, just like a PG instance doesn't have >1 data directories.


Regards,

--
Daniel
PostgreSQL-powered mail user agent and storage: 
http://www.manitou-mail.org




Re: [GENERAL] Removing redundant itemsets

2008-03-31 Thread Craig Ringer
Allan Kamau wrote:
> Hi all,
> I have a plain sql problem (didn't know where else to post it).

pgsql-sql

> I have a list of transactions (market basket) ...

Your use of the term "transaction" for a database entity is really
confusing, and will make it significantly harder for others to
understand and work with your schema. It might be worth renaming the
entity to avoid the conflict with the meaning of "transaction" as
"atomic unit of work" as controlled by BEGIN/COMMIT/ROLLBACK .

--
Craig Ringer



Re: [GENERAL] Removing redundant itemsets

2008-03-31 Thread Allan Kamau
Thanks, I have "reworded" the email (replaced "transaction" with
"purchase"). The email has now been sent to the pgsql-sql mailing list.

Craig Ringer wrote:
> Allan Kamau wrote:
>> Hi all,
>> I have a plain sql problem (didn't know where else to post it).
>
> pgsql-sql
>
>> I have a list of transactions (market basket) ...
>
> Your use of the term "transaction" for a database entity is really
> confusing, and will make it significantly harder for others to
> understand and work with your schema. It might be worth renaming the
> entity to avoid the conflict with the meaning of "transaction" as
> "atomic unit of work" as controlled by BEGIN/COMMIT/ROLLBACK.
>
> --
> Craig Ringer
  





[GENERAL] Removing redundant itemsets

2008-03-31 Thread Allan Kamau

Hi all,
I have a plain sql problem (didn't know where else to post it).
I have a list of transactions (market basket) and I would like to select
the non-redundant, longest possible patterns by eliminating
(creating/populating another table to contain only non-redundant itemsets)
transactions whose item lists are fully included in at least one
other transaction.

(I am assuming all the items of all the transactions have met the minimum
support, currently set at 1.)

Below is a sample case, table schema and data (DDL and DML).

Transaction   Itemset
'100'   'a','b','c','d'
'200'   'c','d'
'300'   'a','c','e'
'400'   'e','d'

On successful removal of 'redundant' (smaller) transactions having
items contained in totality by at least one other transaction, the
transaction '200' would be weeded out as its itemset {'c','d'} is
contained in transaction '100' {'a','b','c','d'}.



drop sequence if exists togo_seq cascade;
create sequence togo_seq;
drop table if exists togo cascade;
create table togo
(
id integer not null default nextval('togo_seq')
,tid char(3) not null
,item char(1) not null
,primary key(id)
,unique(tid,item)
)
;
insert into togo(tid,item)values('100','b');
insert into togo(tid,item)values('100','a');
insert into togo(tid,item)values('100','c');
insert into togo(tid,item)values('100','d');
insert into togo(tid,item)values('200','c');
insert into togo(tid,item)values('200','d');
insert into togo(tid,item)values('300','a');
insert into togo(tid,item)values('300','c');
insert into togo(tid,item)values('300','e');
insert into togo(tid,item)values('400','e');
insert into togo(tid,item)values('400','d');
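
One possible shape for such a query, as an untested sketch (nested NOT
EXISTS, i.e. relational division; note that transactions with identical
itemsets would both survive here):

select distinct t1.tid
  from togo t1
 where not exists (
        select 1
          from togo t2
         where t2.tid <> t1.tid
           -- only consider strictly larger itemsets
           and (select count(*) from togo b where b.tid = t2.tid)
             > (select count(*) from togo a where a.tid = t1.tid)
           -- ... that contain every item of t1
           and not exists (
                select 1
                  from togo a
                 where a.tid = t1.tid
                   and not exists (
                        select 1
                          from togo b
                         where b.tid = t2.tid
                           and b.item = a.item)));

With the sample data this keeps '100', '300' and '400' and drops '200'.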



Allan.



[GENERAL] round() and cast to numeric

2008-03-31 Thread Ivan Sergio Borgonovo
I have some real values around that I need to round.
round() only accepts dp and numeric.

What's going to be the difference between casting to numeric and
rounding?
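
A quick way to compare the two forms on a few sample values (just a
sketch to run locally and eyeball):

select x,
       round(x::numeric)  as round_numeric,
       x::numeric(10,0)   as cast_numeric,
       round(x)           as round_dp
  from (values (2.4::real), (2.5::real), (3.5::real), (-2.5::real)) as t(x);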

thx

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] postgreSQL multithreading

2008-03-31 Thread Roberts, Jon


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
> Sent: Sunday, March 30, 2008 10:13 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] postgreSQL multithreading
> 
> Hi all,
> 
> I was trying to find some way to implement multithreading into my
> postgreSQL stored functions.
> 

You can use pgAgent to submit a job for each thread.  


Jon



Re: [GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-03-31 Thread Pavan Deolasee
On Mon, Mar 31, 2008 at 1:33 PM, Stuart Brooks <[EMAIL PROTECTED]> wrote:
> I have a table with about 15 million rows which is constantly having
>  tuples added to the head and deleted in blocks from the tail to maintain
>  the size. The dead tuple count in pg_stat_user_tables tracks the deleted
>  rows fairly accurately until an auto-ANALYZE is done in the background
>  at which point the value it calculates is wrong by a factor of 2-3 times
>  (calculated value is 30-50% of the correct value)

(copying -hackers)

Seems like the redirected-dead line pointers are playing spoil-sport here.
In this particular example, the deleted tuples may get truncated to
redirected-dead line pointers. Analyze would report them as empty
slots and not as dead tuples. So in the worst case, if all the deleted
tuples are already truncated to redirected-dead line pointers, analyze
may report "zero" dead tuple count.

This is a slightly tricky situation, because in the normal case we might want
to delay autovacuum to let subsequent UPDATEs in the page reuse
the space released by the deleted tuples. But in this particular example,
delaying autovacuum is not a good thing because the relation would
just keep growing.

I think we should check for redirected-dead line pointers in analyze.c
and report them as dead tuples. The other, longer-term alternative
could be to track redirected-dead line pointers and give them some
weight while deciding on autovacuum. We can also update the
FSM information of a page when it's pruned/defragged so that the page
can also be used for subsequent INSERTs or non-HOT UPDATEs in
other pages. This might be easier said than done.


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [GENERAL] Fragments in tsearch2 headline

2008-03-31 Thread Teodor Sigaev

> The patch takes into account the corner case of overlap. Here is the
> code for that:
>
>     // start check
>     if (!startHL && *currentpos >= startpos)
>         startHL = 1;
>
> The headline generation will not start until currentpos has gone past
> startpos.

Ok

> You can also check how this headline function is working at my website
> indiankanoon.com. Some example queries are murder, freedom of speech,
> freedom of press etc.

Looks good.

> Should I develop the patch for the current cvs head of postgres?

I'd like to commit your patch, but it should be:
 - against current HEAD
 - an extension of the existing ts_headline.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] Using tables in other PostGreSQL database

2008-03-31 Thread Roberts, Jon
> Also, 2PC is subject to unresolved transactions (or something like
> that).
>
> >  > Even in Oracle you don't have cross db queries.
> >
> >  On the contrary you do. You can refer to objects in another database by
> >  [EMAIL PROTECTED], very useful to mix local and remote data in no
> >  time. DBLINK_NAME represents a connection to another database.
> >  What you don't have is OTHERDB.OBJECT_NAME to refer to a different
> >  database within the same instance, because there is only one database
> >  in an Oracle instance.
>
> What you are talking about are cross schema references, not cross db.
> Oracle instances can have > 1 database, it's just not that common.  I
> know this because we had an internal instance at the last company I
> worked at that had 2 databases in it, each with their own schemas.  Or
> maybe they somehow had two instances of oracle running on the same
> box.  I'm no oracle expert, I'm just reporting what I saw with my own
> eye.

An Oracle instance is equal to a single database except for Oracle RAC.
With Oracle RAC, there is an instance per node so a two node RAC would
have two instances running.  But to a developer, these two instances
look like one physical database.  

You can have multiple instances running on the same box.  You just need
to configure the Oracle listener to listen on two different ports; one
for each instance.  The two instances are not tied together at all.  You
could also run two separate Oracle homes and basically have two sets of
binaries installed and run two listeners.  

Using an Oracle DB link, you can link to another database with the
@db_name syntax and get two phase commits.

HOWEVER, you can achieve a two phase commit in PostgreSQL with db_link
and using basic exception handling.

Example:
create table log (update_datetime timestamp);

create or replace function fn_test (p_fail boolean) returns void as
$$
declare
v_sql varchar;
v_int int;

begin
perform dblink_connect('pg', 'dbname=postgres user=scott
password=tiger host=localhost');

v_sql := 'begin;';
perform dblink_exec('pg', v_sql, false);

v_sql := 'insert into log values (now())';
perform dblink_exec('pg', v_sql, false);

if p_fail then
  v_int := 1/0;
end if;

v_sql := 'commit;';
perform dblink_exec('pg', v_sql, false);

perform dblink_disconnect('pg');
exception
  when others then
  v_sql := 'rollback;';
  perform dblink_exec('pg', v_sql, false);
  perform dblink_disconnect('pg');
  raise exception '%', sqlerrm;
end;
$$
language 'plpgsql';


Now that is a basic function that will insert data into the postgres
database via a dblink.  If you pass in true for p_fail, it will hit "division by
zero" and roll back both the linked transaction and the primary
transaction of the function.

select fn_test(false);
select * from log;
--you see a new row

select fn_test(true)
--ERROR:  division by zero
select * from log;
--you see that a new row wasn't inserted.
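
(For completeness, PostgreSQL also supports native two-phase commit
statements; a minimal sketch, assuming max_prepared_transactions is set
above zero:

begin;
insert into log values (now());
prepare transaction 'txn1';
-- later, from any session:
commit prepared 'txn1';   -- or: rollback prepared 'txn1';
)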



Jon



[GENERAL] Very slow catalog query

2008-03-31 Thread Just Someone
Hi,

I have a DB with a large number of schemas (around 10K) and a large
number of tables (400K). The app has become slow lately, and logging the
slow queries, I see more than a few like this:

SELECT: LOG:  duration: 169547.424 ms  statement:   SELECT
attr.attname, name.nspname, seq.relname
  FROM pg_class  seq,
   pg_attribute  attr,
   pg_depend dep,
   pg_namespace  name,
   pg_constraint cons
  WHERE seq.oid   = dep.objid
AND seq.relnamespace  = name.oid
AND seq.relkind   = 'S'
AND attr.attrelid = dep.refobjid
AND attr.attnum   = dep.refobjsubid
AND attr.attrelid = cons.conrelid
AND attr.attnum   = cons.conkey[1]
AND cons.contype  = 'p'
AND dep.refobjid  = 'activities'::regclass

Almost all slow queries are of this type, though most of those do
finish really fast. From time to time it gets really slow.

Some details on the setup:
Dual Opteron with 4GB RAM
RAID1 for WAL on 10K SCSI
RAID10 over 6 x 10K SCSI drives for the rest of the DB files

Auto vacuum is on, and in addition I do some vacuuming for specific
high-use tables nightly

Any ideas how to start finding the culprit?

Bye,

Guy.


-- 
Family management on rails: http://www.famundo.com
My development related blog: http://devblog.famundo.com



Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Pedro Doria Meunier
On Monday 31 March 2008 15:13:25 Just Someone wrote:
> Hi,
>
> I have a DB with a large number schemas (around 10K) and a large
> number of tables (400K). The app became slow lately, and logging the
> slow queries, I see more than a few like this:
>
> SELECT: LOG:  duration: 169547.424 ms  statement:   SELECT
> attr.attname, name.nspname, seq.relname
>   FROM pg_class  seq,
>pg_attribute  attr,
>pg_depend dep,
>pg_namespace  name,
>pg_constraint cons
>   WHERE seq.oid   = dep.objid
> AND seq.relnamespace  = name.oid
> AND seq.relkind   = 'S'
> AND attr.attrelid = dep.refobjid
> AND attr.attnum   = dep.refobjsubid
> AND attr.attrelid = cons.conrelid
> AND attr.attnum   = cons.conkey[1]
> AND cons.contype  = 'p'
> AND dep.refobjid  = 'activities'::regclass
>
> Almost all slow queries are of this type, though most of those do
> finish really fast. From time to time it gets really slow.
>
> Some details on the setup:
> Dual Opteron with 4GB RAM
> RAID1 for WAL on 10K SCSI
> RAID10 over 6 x 10K scsi drives for main the rest for the DB files
>
> Auto vaccum is on, and in addition I do some vacuuming for specific
> high use tables nightly
>
> Any ideas how to start finding the culprit?
>
> Bye,
>
> Guy.
>
>
> --
> Family management on rails: http://www.famundo.com
> My development related blog: http://devblog.famundo.com

Hi 'Just Someone',

I'm wondering... just 4GB of RAM?
What's the "normal" "hammering" -- a.k.a. user access -- to all of this?
PG, as expected, launches a separate process for each connection; this eats up
resources quite quickly.
Did you check your system processes with 'top'? How is it looking for swap
usage?

Regards,
-- 
Pedro Doria Meunier
Ips. da Olaria, Edf. Jardins do Garajau, 4, r/c Y
9125-162 Caniço
Madeira - Portugal
--
Skype : pdoriam
Mobile: +351961720188




Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Tom Lane
"Just Someone" <[EMAIL PROTECTED]> writes:
> Any ideas how to start finding the culprit?

EXPLAIN ANALYZE?

What would be particularly interesting is to compare the results for
fast and slow cases of the "same" query.

regards, tom lane



Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-31 Thread Steve Crawford
I've only been peripherally watching this thread and this may have been 
mentioned...


One advantage of using a consistent prefix is that when you have 
forgotten the exact name of a rarely used command and you are using a 
shell with readline support, typing "pg_" and hitting tab will bring up 
a list of available commands.


Cheers,
Steve




[GENERAL] Array operator "sum array values"

2008-03-31 Thread Scara Maccai
Hi,

is there a syntax that sums the values of an array?

That is, having an array like

{1,3,8},{5,6,7}

something like 

select arr[1:2][2:3]

gives

{1,3},{6,7}


; what I'd like is:

select arr[1$2][2$3]

gives:

17 (that is, 1+3 + 6+7)

If there is no such operator, would it be complicated to add it to postgres? 
I mean: is the "[:]" operator an internal thing (the parser "knows" about it) 
or is it just like any other operator that can be added?











Re: [HACKERS] [GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-03-31 Thread Tom Lane
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> Seems like the redirected-dead line pointers are playing spoil-sport here.
> In this particular example, the deleted tuples may get truncated to
> redirected-dead line pointers. Analyze would report them as empty
> slots and not as dead tuples. So in the worst case, if all the deleted
> tuples are already truncated to redirected-dead line pointers, analyze
> may report "zero" dead tuple count.

[ Please see if you can stop using the "redirected dead" terminology ]

Yeah, I think I agree.  The page pruning code is set up so that changing
a line pointer to DEAD state doesn't change the count of dead tuples in
the table, so we are counting unreclaimed DEAD pointers as still being
dead tuples requiring VACUUM.  ANALYZE should surely not affect that.

It looks like there's no trivial way to get ANALYZE to do things that
way, though.  heap_release_fetch() doesn't distinguish a DEAD line
pointer from an unused or redirected one.  But in the current
implementation of ANALYZE there's really no benefit to using
heap_release_fetch anyway --- it always examines all line pointers
on each selected page, so we might as well rewrite it to use a simple
loop more like vacuum uses.

I notice that this'd leave heap_release_fetch completely unused...
at least in HEAD I'd be tempted to get rid of it and restore heap_fetch
to its former simplicity.

regards, tom lane



Re: [GENERAL] Array operator "sum array values"

2008-03-31 Thread Pavel Stehule
Hello

On 31/03/2008, Scara Maccai <[EMAIL PROTECTED]> wrote:
> Hi,
>
>  is there a syntax that sums the values of an array?
>
>  That is, having an array like
>
>  {1,3,8},{5,6,7}
>
>  something like
>
>  select arr[1:2][2:3]
>
>  gives
>
>  {1,3},{6,7}
>
>
>  ; what I'd like is:
>
>  select arr[1$2][2$3]
>
>  gives:
>
>  17 (that is, 1+3 + 6+7)
>
>  If there is no such operator, would it be complicated adding it to postgres?
>  I mean: is the "[:]" operator an internal thing (the parser "knows" about 
> it) or is it just like any other operators that can be added ?
>
>

create or replace function sum_elements(anyarray)
returns anyelement as $$
select sum($1[i])
   from generate_series(array_lower($1,1),
   array_upper($1,1)) g(i);
$$ language sql immutable;

select sum_elements(array[1,2,3]); --> 6
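
For a two-dimensional array like the one in your example, the same idea
extends to something like this (again just a sketch):

create or replace function sum_elements_2d(anyarray)
returns anyelement as $$
select sum($1[i][j])
   from generate_series(array_lower($1,1), array_upper($1,1)) g1(i),
        generate_series(array_lower($1,2), array_upper($1,2)) g2(j);
$$ language sql immutable;

select sum_elements_2d(array[[1,3,8],[5,6,7]]); --> 30

and it can be combined with the existing slice syntax, e.g.
sum_elements_2d(arr[1:2][2:3]).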

Regards
Pavel Stehule



Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Colin Wetherbee

[EMAIL PROTECTED] wrote:
Message Digest 
Volume 1 : Issue 8030 : "index" Format


Messages in this Issue:
  200803/1324: Re: casting from integer to boolean
  200803/1325: Re: casting from integer to boolean 
  200803/1326: Re: casting from integer to boolean

  200803/1327: Re: Survey: renaming/removing script binaries (createdb,
createuser...)
  200803/1328: Re: Survey: renaming/removing script binaries (createdb,
 createuser...)

[snip]

I received eight of these unwanted digests last night and early this 
morning, after never seeing them before.  Is this a new configuration 
change that I need to go personalize?


Thanks.

Colin



Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Brett Hoerner
On Mon, Mar 31, 2008 at 10:54 AM, Colin Wetherbee <[EMAIL PROTECTED]> wrote:
>  I received eight of these unwanted digests last night and early this
>  morning, after never seeing them before.  Is this a new configuration
>  change that I need to go personalize?

I was wondering the same thing, but don't see an option regarding digests...

Brett



Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Just Someone
Hi,


>  I'm wondering... just 4GB of ram?
>  What's the "normal" "hammering" -- a.k.a. user access -- to all of this?
>  PG, as expected, launches a separate process for each connection. this eats 
> up
>  resources quite quickly
>  Did you check your system processes with 'top' ? how's it looking for swap
>  usage?

Swap usage is almost nil. And I only have a constant number of
connections (about 10-15) as it serves as the backend for a Web
application. Transactions run at about 20-40 per second, but mostly
very short and simple ones.

Thanks,

Guy.



Re: [GENERAL] Primary Key with serial

2008-03-31 Thread x asasaxax
Can anyone give me an example and explain how it works?

Thanks a lot


2008/3/29, Berend Tober <[EMAIL PROTECTED]>:
>
> x asasaxax wrote:
> > I have the following table: create table product(cod serial, user_cod
> > bigint, constraint product_fk Foreign Key(user_cod) references user(cod),
> > constraint product_pk Primary Key(cod, user_cod));
> >
> > What i want to happen is that:
> > user_cod    cod
> > 1           1
> > 1           2
> > 1           3
> > 2           1
> > 3           1
> > 3           2
> >
> > Can serial do that? ...
>
> No.
>
> > ...what can i do to make this happen?
>
> http://archives.postgresql.org/pgsql-general/2006-08/msg00744.php
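
For illustration, one common approach is a BEFORE INSERT trigger that fills
in the next per-user value -- a sketch only (it is NOT safe under concurrent
inserts for the same user_cod without extra locking), with cod declared as a
plain integer rather than serial:

create or replace function product_next_cod() returns trigger as $$
begin
    -- next cod for this user_cod; needs locking to be concurrency-safe
    NEW.cod := (select coalesce(max(cod), 0) + 1
                  from product
                 where user_cod = NEW.user_cod);
    return NEW;
end;
$$ language plpgsql;

create trigger product_cod_trg
    before insert on product
    for each row execute procedure product_next_cod();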


Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Just Someone
Hi Tom,

Here is the result of explain analyze (though this one took 1500ms and
not 169000):

On Mon, Mar 31, 2008 at 7:37 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Just Someone" <[EMAIL PROTECTED]> writes:
>  > Any ideas how to start finding the culprit?
>
>  EXPLAIN ANALYZE?


explain analyze SELECT
attr.attname, name.nspname, seq.relname
 FROM pg_class  seq,
  pg_attribute  attr,
  pg_depend dep,
  pg_namespace  name,
  pg_constraint cons
 WHERE seq.oid   = dep.objid
   AND seq.relnamespace  = name.oid
   AND seq.relkind   = 'S'
   AND attr.attrelid = dep.refobjid
   AND attr.attnum   = dep.refobjsubid
   AND attr.attrelid = cons.conrelid
   AND attr.attnum   = cons.conkey[1]
   AND cons.contype  = 'p'
   AND dep.refobjid  = 'activities'::regclass;

  QUERY PLAN

 Nested Loop  (cost=64956.07..65025.73 rows=31 width=192) (actual
time=1547.720..1547.749 rows=1 loops=1)
   ->  Nested Loop  (cost=64956.07..64987.48 rows=1 width=166) (actual
time=1547.662..1547.684 rows=1 loops=1)
 ->  Nested Loop  (cost=64956.07..64981.47 rows=1 width=106)
(actual time=1547.616..1547.631 rows=1 loops=1)
   ->  Merge Join  (cost=64956.07..64957.36 rows=4
width=42) (actual time=1547.484..1547.502 rows=3 loops=1)
 Merge Cond: ("outer"."?column3?" = "inner".refobjsubid)
 ->  Sort  (cost=4.08..4.08 rows=3 width=30)
(actual time=0.149..0.151 rows=1 loops=1)
   Sort Key: cons.conkey[1]
   ->  Index Scan using
pg_constraint_conrelid_index on pg_constraint cons  (cost=0.00..4.05
rows=3 width=30) (actual time=0.110..0.112 rows=1 loops=1)
 Index Cond: (30375069::oid = conrelid)
 Filter: (contype = 'p'::"char")
 ->  Sort  (cost=64951.99..64952.61 rows=247
width=12) (actual time=1547.303..1547.318 rows=9 loops=1)
   Sort Key: dep.refobjsubid
   ->  Index Scan using
pg_depend_reference_index on pg_depend dep  (cost=0.00..64942.17
rows=247 width=12) (actual time=396.542..1547.172 rows=22 loops=1)
 Index Cond: (refobjid = 30375069::oid)
   ->  Index Scan using pg_class_oid_index on pg_class seq
 (cost=0.00..6.02 rows=1 width=72) (actual time=0.034..0.035 rows=0
loops=3)
 Index Cond: (seq.oid = "outer".objid)
 Filter: (relkind = 'S'::"char")
 ->  Index Scan using pg_namespace_oid_index on pg_namespace
name  (cost=0.00..6.00 rows=1 width=68) (actual time=0.039..0.041
rows=1 loops=1)
   Index Cond: ("outer".relnamespace = name.oid)
   ->  Index Scan using pg_attribute_relid_attnum_index on
pg_attribute attr  (cost=0.00..38.00 rows=20 width=70) (actual
time=0.050..0.052 rows=1 loops=1)
 Index Cond: ((30375069::oid = attr.attrelid) AND (attr.attnum
= "outer".refobjsubid))
 Total runtime: 1548.082 ms


Bye,

Guy.



Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Tom Lane
"Just Someone" <[EMAIL PROTECTED]> writes:
> Here is the result of explain analyze (though this one took 1500ms and
> not 169000):

Well, it's hard to be sure what the problem is when you're not showing
us a problem case ...  but I notice that this indexscan is estimated
awfully high:

>->  Index Scan using
> pg_depend_reference_index on pg_depend dep  (cost=0.00..64942.17
> rows=247 width=12) (actual time=396.542..1547.172 rows=22 loops=1)
>  Index Cond: (refobjid = 30375069::oid)

The reason is not far to seek: the scan is checking only the second
index key, meaning that it has to scan the entire index.  (I am
surprised it didn't use a seqscan instead.  Are you using enable_seqscan
= off?  Not a great idea.)  Since you know you are looking for a table,
you could improve matters by adding a constraint on refclassid:

dep.refclassid = 'pg_class'::regclass
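
Applied to the query from earlier in the thread, that would be something like:

SELECT attr.attname, name.nspname, seq.relname
  FROM pg_class      seq,
       pg_attribute  attr,
       pg_depend     dep,
       pg_namespace  name,
       pg_constraint cons
 WHERE seq.oid           = dep.objid
   AND seq.relnamespace  = name.oid
   AND seq.relkind       = 'S'
   AND dep.refclassid    = 'pg_class'::regclass
   AND attr.attrelid     = dep.refobjid
   AND attr.attnum       = dep.refobjsubid
   AND attr.attrelid     = cons.conrelid
   AND attr.attnum       = cons.conkey[1]
   AND cons.contype      = 'p'
   AND dep.refobjid      = 'activities'::regclass;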

regards, tom lane



Re: [GENERAL] Schema design question

2008-03-31 Thread Andrej Ricnik-Bay
On 29/03/2008, Ben <[EMAIL PROTECTED]> wrote:
> I'm working on a project which requires me to keep track of objects,
>  each of which can have an arbitrary number of attributes. Although
>  there will be many attributes that an object can have, the data types
>  of those attributes won't be all that varried (int, float, text,
>  boolean, date, etc.).

And a somewhat unorthodox suggestion for the list ... would it
be worthwhile considering a different storage mechanism all
together, like maybe an LDAP directory type of thing?  The query
language is admittedly very limited.

Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Tom Lane
"Brett Hoerner" <[EMAIL PROTECTED]> writes:
> On Mon, Mar 31, 2008 at 10:54 AM, Colin Wetherbee <[EMAIL PROTECTED]> wrote:
>> I received eight of these unwanted digests last night and early this
>> morning, after never seeing them before.  Is this a new configuration
>> change that I need to go personalize?

> I was wondering the same thing, but don't see an option regarding digests...

Did you look closely at the Received: history of the messages?

We occasionally get incidents where someone's broken mail software
regurgitates delayed copies of list mail, either back to the list
daemon (which is usually successful at recognizing duplicates, but
not always) or directly to people named in the From: or To: or CC:
lines of the mail.  To no one's surprise, the mail software that
does this is invariably Microsoft's.

Regurgitating list mail is grounds for ejection from the lists,
if we can figure out who's doing it (it will probably not surprise you
to learn that M$' broken software often makes it hard to figure out
which list subscriber is responsible).

If you get multiple duplicates, and their headers show that they
went through some unexpected places between postgresql.org and you,
send 'em to the list owner (eg, pgsql-general-owner at postgresql.org)
for possible action.

regards, tom lane



Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Colin Wetherbee

Tom Lane wrote:
> "Brett Hoerner" <[EMAIL PROTECTED]> writes:
>> On Mon, Mar 31, 2008 at 10:54 AM, Colin Wetherbee
>> <[EMAIL PROTECTED]> wrote:
>>> I received eight of these unwanted digests last night and early
>>> this morning, after never seeing them before.  Is this a new
>>> configuration change that I need to go personalize?
>>
>> I was wondering the same thing, but don't see an option regarding
>> digests...
>
> Did you look closely at the Received: history of the messages?
>
> We occasionally get incidents where someone's broken mail software
> regurgitates delayed copies of list mail, either back to the list
> daemon (which is usually successful at recognizing duplicates, but
> not always) or directly to people named in the From: or To: or CC:
> lines of the mail.  To no one's surprise, the mail software that does
> this is invariably Microsoft's.

Hmm, it doesn't seem to have to do with duplicates, since the eight
digests were different.  On the other hand, I think you're probably
right about these being related to bounces.


The following are the Received headers from before the list daemon sent 
me the first digest.


Received: from marte.gdl.cinvestav.mx (gdl.cinvestav.mx [148.247.21.3])
by postgresql.org (Postfix) with ESMTP id 5D7092E0030
for ; Fri, 28 Mar 2008 15:37:09 -0300 
(ADT)
Received: by marte.gdl.cinvestav.mx (Postfix, from userid 0)
id 7DCB42080EF39; Fri, 28 Mar 2008 12:34:40 -0600 (CST)
Received: from maia-1.hub.org (maia-1.hub.org [200.46.204.191])
by marte.gdl.cinvestav.mx (Postfix) with ESMTP id 170CC20099FA3
for <[EMAIL PROTECTED]>; Thu, 27 Mar 2008 06:17:04 -0600 (CST)
Received: from postgresql.org (postgresql.org [200.46.204.71])
by maia-1.hub.org (Postfix) with ESMTP id 055E7DB6780;
Thu, 27 Mar 2008 09:16:56 -0300 (ADT)

Perhaps <[EMAIL PROTECTED]> is the culprit here.  I'm CCing 
owner on this message.


Thanks for the suggestion, Tom.

Colin




Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-31 Thread Andrej Ricnik-Bay
On 01/04/2008, Steve Crawford <[EMAIL PROTECTED]> wrote:
>  One advantage of using a consistent prefix is that when you have
>  forgotten the exact name of a rarely used command and you are using a
>  shell with readline support, "pg_" will bring up a list of
>  available commands.
For any value of shell IN {bash, tcsh, zsh}.
sh (default on solaris) and ksh won't, and neither will cmd.exe

But I (as a user of bash) see your point.


>  Cheers,
>
> Steve
Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



[GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-03-31 Thread Morris Goldstein
Suppose I have a database with $PGDATA on /dev/sda, and a tablespace
directory on /dev/sdb. Will Postgres start successfully if /dev/sda is
mounted and /dev/sdb is not? If not, why not?

Morris


Re: [GENERAL] Schema design question

2008-03-31 Thread Ben

On Tue, 1 Apr 2008, Andrej Ricnik-Bay wrote:

> On 29/03/2008, Ben <[EMAIL PROTECTED]> wrote:
>> I'm working on a project which requires me to keep track of objects,
>> each of which can have an arbitrary number of attributes. Although
>> there will be many attributes that an object can have, the data types
>> of those attributes won't be all that varried (int, float, text,
>> boolean, date, etc.).
>
> And a somewhat unorthodox suggestion for the list ... would it
> be worthwhile considering a different storage mechanism all
> together, like maybe an LDAP directory type of thing?  The query
> language is admittedly very limited.

Hm, worthwhile for some, perhaps, but not for this project. This is
only a small part of a much larger whole, most of which fits quite well
into SQL. Thanks though! I think I'm leaning towards the
one-table-with-many-columns approach, as it seems to require the simplest
set of constraints while still giving me type-safety.




Re: [GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-03-31 Thread Zdenek Kotala

Morris Goldstein wrote:

> Suppose I have a database with $PGDATA on /dev/sda, and a tablespace
> directory on /dev/sdb. Will Postgres start successfully if /dev/sda is
> mounted and /dev/sdb is not? If not, why not?

It is not a good idea to run PostgreSQL in your scenario. However, PostgreSQL
needs only the catalog tables to start. As long as you don't touch data stored
on the unmounted disk, you should not get any error.



Zdenek



Re: [GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-03-31 Thread Tom Lane
"Morris Goldstein" <[EMAIL PROTECTED]> writes:
> Suppose I have a database with $PGDATA on /dev/sda, and a tablespace
> directory on /dev/sdb. Will Postgres start successfully if /dev/sda is
> mounted and /dev/sdb is not? If not, why not?

It will start, but you will have unpleasant failures when you try to use
tables in the secondary tablespace ... note that if autovacuum is on,
that is likely to happen even without any explicit action on your part.

regards, tom lane



Re: [HACKERS] [GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-03-31 Thread Pavan Deolasee
On Mon, Mar 31, 2008 at 9:02 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

>
>  [ Please see if you can stop using the "redirected dead" terminology ]
>
>

Apologies, will keep that in mind. Seems like a hang-over from the past :-)

>  Yeah, I think I agree.  The page pruning code is set up so that changing
>  a line pointer to DEAD state doesn't change the count of dead tuples in
>  the table, so we are counting unreclaimed DEAD pointers as still being
>  dead tuples requiring VACUUM.  ANALYZE should surely not affect that.
>
>  It looks like there's no trivial way to get ANALYZE to do things that
>  way, though.  heap_release_fetch() doesn't distinguish a DEAD line
>  pointer from an unused or redirected one.  But in the current
>  implementation of ANALYZE there's really no benefit to using
>  heap_release_fetch anyway --- it always examines all line pointers
>  on each selected page, so we might as well rewrite it to use a simple
>  loop more like vacuum uses.
>

I agree. I would write a patch along these lines, unless you are already on to it.


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [GENERAL] Very slow catalog query

2008-03-31 Thread Just Someone
Hi Tom,


>  Well, it's hard to be sure what the problem is when you're not showing
>  us a problem case ...  but I notice that this indexscan is estimated
>  awfully high:

Whenever I do it manually it works fast. But in the log I see lots of
slow ones. Could it be caused by auto vacuum? Or by check pointing or
WAL writing? Are there ways to check that?


>  >->  Index Scan using
>  > pg_depend_reference_index on pg_depend dep  (cost=0.00..64942.17
>  > rows=247 width=12) (actual time=396.542..1547.172 rows=22 loops=1)
>  >  Index Cond: (refobjid = 30375069::oid)
>
>  The reason is not far to seek: the scan is checking only the second
>  index key, meaning that it has to scan the entire index.  (I am
>  surprised it didn't use a seqscan instead.  Are you using enable_seqscan
>  = off?  Not a great idea.)  Since you know you are looking for a table,
>  you could improve matters by adding a constraint on refclassid:
>
> dep.refclassid = 'pg_class'::regclass

enable_seqscan is on. Is there a way to analyze/vacuum those tables?

I will look if I can also improve the query to be more exact.

Bye,

Guy.



Re: [GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-03-31 Thread Gurjeet Singh
On Mon, Mar 31, 2008 at 11:40 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Morris Goldstein" <[EMAIL PROTECTED]> writes:
> > Suppose I have a database with $PGDATA on /dev/sda, and a tablespace
> > directory on /dev/sdb. Will Postgres start successfully if /dev/sda is
> > mounted and /dev/sdb is not? If not, why not?
>
> It will start, but you will have unpleasant failures when you try to use
> tables in the secondary tablespace ... note that if autovacuum is on,
> that is likely to happen even without any explicit action on your part.
>
>
One of the gripes I have with postgres is that it won't even complain
if one of the segments of a relation goes missing, unless the missing segment
is referred to by an index!!!

The most troublesome part is that count(*) (i.e. a seq scan) scans only up to
the last sequential segment found. Here's an example case:

Healthy:

count(*) : 2187001
size: 2441 MB
segments: 17651, .1, .2

Corrupt: 17651.1 missing
-
count(*) : 917503
size: 1024 MB
segments: 17651, .2
select max(a) from temp: 2187001 (uses index to locate the last tuple in
segment .2)

select a from temp where a = (select max(a) from temp)/2
ERROR:  could not read block 156214 of relation 1663/11511/17651: read only
0 of 8192 bytes

restore missing segment:
---
select a from temp where a = (select max(a) from temp)/2
  : 1093500


I think that the counter-argument would be that this has never been
reported in the field, but I wish our metadata recorded this somehow, and
reported an ERROR if it found that a segment is missing.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


[GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread mark
This is the query I am running, and it takes over 10 seconds to complete:

update users set number_recieved=number_recieved+1 where uid=738889333;

The table has about 1.7 million rows. I have an index on column uid and also on
number_recieved. This is also slowing down the inserts that happen.
How do I fix this?


I have a total of 6 different databases of similar size on the same machine.
The hardware is dual quad-core Intel Xeon 5405 with 32GB RAM, and the following
settings changed:
max_connections = 2000  # (change requires restart)
shared_buffers = 4000MB # min 128kB or max_connections*16kB
effective_cache_size = 12000MB



CREATE INDEX idx_uid
  ON users
  USING btree
  (uid);

CREATE INDEX number_rx
  ON users
  USING btree
  (number_recieved);



table

CREATE TABLE users
(
  id serial NOT NULL,
  username text,
  first_name text,
  last_name text,
  email text,
  "password" text,
  last_login timestamp without time zone,
  profilepic text,
  ip text,
  dob timestamp without time zone,
  created timestamp without time zone DEFAULT now(),
  rawpassword character varying(128),
  rating real DEFAULT 0,
  zip text,
  hash text,
  gender character(1),
  groups text,
  aim text,
  yahoo text,
  skype text,
  hotmail text,
  vanity text,
  number_comments integer DEFAULT 0,
  number_friends integer DEFAULT 0,
  number_posts integer DEFAULT 0,
  number_polls integer DEFAULT 0,
  city text,
  site text,
  number_pictures bigint DEFAULT 0,
  email_subscribe boolean DEFAULT true,
  number_userpics integer DEFAULT 0,
  htmlcodes text,
  pageviews integer DEFAULT 1,
  number_uservideos integer DEFAULT 0,
  number_useraudios integer DEFAULT 0,
  number_usermessages integer DEFAULT 0,
  number_usermessages_sent integer DEFAULT 0,
  myrand double precision NOT NULL DEFAULT random(),
  number_activities integer DEFAULT 0,
  number_pages integer DEFAULT 0,
  uid integer NOT NULL,
  number_invites integer DEFAULT 0,
  number_notifications integer DEFAULT 0,
  number_emailnotifications integer DEFAULT 0,
  number_pendingnotifications integer DEFAULT 0,
  total_number_invites integer DEFAULT 0,
  total_number_emailnotifications integer DEFAULT 0,
  last_invitation timestamp without time zone,
  last_emailnotification timestamp without time zone,
  session_key text,
  last_profilefbml timestamp without time zone,
  "name" text,
  number_sent integer DEFAULT 0,
  number_recieved integer DEFAULT 0,
  score integer DEFAULT 0,
  number_votes integer DEFAULT 0,
  CONSTRAINT users_pkey PRIMARY KEY (id),
  CONSTRAINT unique_uid UNIQUE (uid),
  CONSTRAINT uniquemail UNIQUE (email),
  CONSTRAINT uniquuser UNIQUE (username)
)
WITH (OIDS=FALSE);
ALTER TABLE users OWNER TO postgres;


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Raymond O'Donnell

On 31/03/2008 20:16, mark wrote:
> is the query I am running , and it takes over 10 seconds to complete
> this query...
>
> update users set number_recieved=number_recieved+1 where uid=738889333;
>
> table has about 1.7 million rows.. i have an index on column uid and
> also on number_received. .. this is also slowing down the inserts that
> happen.


Are you VACUUMing the table regularly?

Also, can you show us the EXPLAIN ANALYZE output from the query?

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread mark
On Mon, Mar 31, 2008 at 12:23 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:

> On 31/03/2008 20:16, mark wrote:
> > is the query I am running , and it takes over 10 seconds to complete
> > this query...
> >
> >
> > update users set number_recieved=number_recieved+1 where uid=738889333;
> >
> > table has about 1.7 million rows.. i have an index on column uid and
> > also on number_received. .. this is also slowing down the inserts that
> > happen.
>
> Are you VACUUMing the table regularly?

I have this setting on in postgresql.conf. I don't manually do vacuum:

autovacuum = on # Enable autovacuum subprocess?  'on'




> Also, can you show us the EXPLAIN ANALYZE output from the query?


EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
uid=738889333;
QUERY PLAN
--
 Index Scan using idx_uid on users  (cost=0.00..8.46 rows=1 width=1073)
(actual time=0.094..0.161 rows=1 loops=1)
   Index Cond: (uid = 738889333)
 Total runtime: 11479.053 ms
(3 rows)


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Raymond O'Donnell

On 31/03/2008 20:38, mark wrote:

> I dont manually do vaccum..

It might be worth doing one and seeing if it makes a difference.
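
For example, something along these lines (a sketch using the table from
your query), followed by a look at the stats to see whether the dead-row
count drops:

   vacuum verbose analyze users;

   select relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
     from pg_stat_user_tables
    where relname = 'users';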

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] PostgreSQL Replication with read-only access to standby DB

2008-03-31 Thread Vivek Khera


On Mar 25, 2008, at 4:28 PM, Jeff Davis wrote:
> This obviously does not work in real time, but it may be useful. It does
> not require a lot of additional space to do this because of the ZFS
> copy-on-write implementation.

But what benefit does it give you if you're pounding on the same set
of physical disks?  You might as well run it on the original, since
your limit is the disk I/O.





Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Raymond O'Donnell

On 31/03/2008 20:38, mark wrote:
> EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
> uid=738889333;
>
> QUERY PLAN
> --
>  Index Scan using idx_uid on users  (cost=0.00..8.46 rows=1 width=1073)
> (actual time=0.094..0.161 rows=1 loops=1)
>    Index Cond: (uid = 738889333)
>  Total runtime: 11479.053 ms


Is there anything going on that might affect the time taken? - triggers, 
loads of indices to be updated, etc? From the docs[1]:


"For INSERT, UPDATE, and DELETE commands, the total run time might be 
considerably larger, because it includes the time spent processing the 
result rows. In these commands, the time for the top plan node 
essentially is the time spent computing the new rows and/or locating the 
old ones, but it doesn't include the time spent applying the changes. 
Time spent firing triggers, if any, is also outside the top plan node, 
and is shown separately for each trigger."


Ray.

[1] http://www.postgresql.org/docs/8.3/static/using-explain.html

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread mark
On Mon, Mar 31, 2008 at 12:48 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:

> On 31/03/2008 20:38, mark wrote:
>
> > EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
> > uid=738889333;
> > QUERY PLAN
> >
> --
> >  Index Scan using idx_uid on users  (cost=0.00..8.46 rows=1 width=1073)
> > (actual time=0.094..0.161 rows=1 loops=1)
> >Index Cond: (uid = 738889333)
> >  Total runtime: 11479.053 ms
>
> Is there anything going on that might affect the time taken? - triggers,
> loads of indices to be updated, etc? From the docs[1]:

No, there are no triggers at all, just the index on the column.
Can you explain what the numbers mean in the EXPLAIN ANALYZE output?
(cost=0.00..8.46 rows=1 width=1073)  (actual time=0.094..0.161 rows=1
loops=1)
Thanks


Re: [HACKERS] [GENERAL] ANALYZE getting dead tuple count hopelessly wrong

2008-03-31 Thread Tom Lane
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On Mon, Mar 31, 2008 at 9:02 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> It looks like there's no trivial way to get ANALYZE to do things that
>> way, though.  heap_release_fetch() doesn't distinguish a DEAD line
>> pointer from an unused or redirected one.  But in the current
>> implementation of ANALYZE there's really no benefit to using
>> heap_release_fetch anyway --- it always examines all line pointers
>> on each selected page, so we might as well rewrite it to use a simple
>> loop more like vacuum uses.

> I agree. I would write a patch on these lines, unless you are already on to 
> it.

Please do --- I have a lot of other stuff on my plate.

regards, tom lane



Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Rainer Bauer
Tom Lane wrote:

>"Brett Hoerner" <[EMAIL PROTECTED]> writes:
>> On Mon, Mar 31, 2008 at 10:54 AM, Colin Wetherbee <[EMAIL PROTECTED]> wrote:
>>> I received eight of these unwanted digests last night and early this
>>> morning, after never seeing them before.  Is this a new configuration
>>> change that I need to go personalize?
>
>> I was wondering the same thing, but don't see an option regarding digests...
>
>Did you look closely at the Received: history of the messages?

They turned up on usenet as well (see below). Seems like
<[EMAIL PROTECTED]> has a problem.

Rainer

Path: news.hub.org!postgresql.org!pgsql-general-owner+m130868
From: [EMAIL PROTECTED]
Newsgroups: pgsql.general
Subject: [pgsql-general] Daily digest v1.8030 (22 messages)
Date: Thu, 27 Mar 2008 05:56:10 -0300
Organization: Hub.Org Networking Services
Lines: 38
Sender: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>
NNTP-Posting-Host: news.hub.org
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-Trace: news.hub.org 1206832273 86042 200.46.204.72 (29 Mar 2008 23:11:13
GMT)
X-Complaints-To: [EMAIL PROTECTED]
NNTP-Posting-Date: Sat, 29 Mar 2008 23:11:13 + (UTC)
Content-Disposition: inline
X-Received: from hub.org (hub.org [200.46.204.220])
by news.hub.org (8.14.1/8.14.1) with ESMTP id m2TNBDuJ086034
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:13 -0300
(ADT)
(envelope-from [EMAIL PROTECTED])
X-Received: from localhost (unknown [200.46.204.184])
by hub.org (Postfix) with ESMTP id 0121A1688661
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:18 -0300
(ADT)
X-Received: from hub.org ([200.46.204.220])
 by localhost (mx1.hub.org [200.46.204.184]) (amavisd-maia, port 10024)
 with ESMTP id 61755-02 for <[EMAIL PROTECTED]>;
 Sat, 29 Mar 2008 20:11:10 -0300 (ADT)
X-Received: from postgresql.org (postgresql.org [200.46.204.71])
by hub.org (Postfix) with ESMTP id 8DB711688644
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:17 -0300
(ADT)
X-Received: from localhost (unknown [200.46.204.183])
by postgresql.org (Postfix) with ESMTP id DD2202E0031
for <[EMAIL PROTECTED]>; Fri, 28 Mar 2008
15:43:10 -0300 (ADT)
X-Received: from postgresql.org ([200.46.204.71])
 by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)
 with ESMTP id 69425-02 for <[EMAIL PROTECTED]>;
 Fri, 28 Mar 2008 15:43:03 -0300 (ADT)
X-Greylist: from auto-whitelisted by SQLgrey-1.7.5
X-Received: from marte.gdl.cinvestav.mx (gdl.cinvestav.mx [148.247.21.3])
by postgresql.org (Postfix) with ESMTP id 77EE22E0030
for ; Fri, 28 Mar 2008 15:43:03 -0300
(ADT)
X-Received: by marte.gdl.cinvestav.mx (Postfix, from userid 0)
id 9F5A92080FE02; Fri, 28 Mar 2008 12:41:55 -0600 (CST)
X-Received: from maia-1.hub.org (maia-1.hub.org [200.46.204.191])
by marte.gdl.cinvestav.mx (Postfix) with ESMTP id AB7D02056B5CE
for <[EMAIL PROTECTED]>; Thu, 27 Mar 2008 03:13:48 -0600
(CST)
X-Received: from postgresql.org (postgresql.org [200.46.204.71])
by maia-1.hub.org (Postfix) with ESMTP id D0D25DB6A94;
Thu, 27 Mar 2008 06:13:44 -0300 (ADT)
X-Mailer: MIME-tools 5.420 (Entity 5.420)
X-To: pgsql-general@postgresql.org
X-Mailing-List: pgsql-general
X-Precedence: bulk
X-Cinvestav-CTS-MailScanner-Information: Please contact the ISP for more
information
X-MailScanner-ID: 9F5A92080FE02.C88E5
X-Cinvestav-CTS-MailScanner: Found to be clean
X-Cinvestav-CTS-MailScanner-From: [EMAIL PROTECTED]
X-Virus-Scanned: Maia Mailguard 1.0.1
X-Mailing-List: pgsql-general
X-List-Archive: 
X-List-Help: 
X-List-ID: 
X-List-Owner: 
X-List-Post: 
X-List-Subscribe: 
X-List-Unsubscribe:

X-Precedence: bulk
Xref: news.hub.org pgsql.general:58447

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Raymond O'Donnell

On 31/03/2008 20:51, mark wrote:


can you explain what the numbers mean in the EXPLAIN ANALYZE?
(cost=0.00..8.46 rows=1 width=1073)  (actual time=0.094..0.161 rows=1 
loops=1)


It's worth reading through the docs at that reference in my previous 
email - it's well explained there.


As I understand it, "cost" signifies the planner's estimate of how long 
the query will take, in terms of disk page fetches. The "actual time" is 
in milliseconds.
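
Taking the line from your plan as a worked example, the pieces break down
roughly like this (my reading of the docs; the measured figures are from
your output):

    Index Scan using idx_uid on users
        (cost=0.00..8.46 rows=1 width=1073)
        (actual time=0.094..0.161 rows=1 loops=1)

    cost=0.00..8.46            estimated startup..total cost, in abstract
                               units scaled so one sequential page fetch
                               costs 1.0
    rows=1 width=1073          estimated row count and average row width
                               in bytes
    actual time=0.094..0.161   measured startup..total time for this plan
                               node, in milliseconds
    loops=1                    the node ran once; times and row counts are
                               per loop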


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread mark
On Mon, Mar 31, 2008 at 12:59 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:

> On 31/03/2008 20:51, mark wrote:
>
> > can you explain what the numbers mean in the EXPLAIN ANALYZE?
> > (cost=0.00..8.46 rows=1 width=1073)  (actual time=0.094..0.161 rows=1
> > loops=1)
>
> It's worth reading through the docs at that reference in my previous
> email - it's well explained there.
>
> As I understand it, "cost" signifies the planner's estimate of how long
> the query will take, in terms of disk page fetches. The "actual time" is
> in milliseconds.


This is my RAID and HD configuration. Is it good enough, or could it be
slowing down my queries?

Raid Controller: 3Ware 9650SE-4LPML, 4x CHs Multi-Lane, LP PCI-e(4x) SATA2
Raid Controller, 256MB
HDD: 4x 500GB/7200 RPM, U300, SATA2 Drive, 16M (RAID 10)

The I/O wait is 10-30% most of the time:

top - 13:02:35 up 33 days,  4:05,  3 users,  load average: 2.65, 2.82, 2.61
Tasks: 1040 total,   1 running, 1039 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  0.6%sy,  0.0%ni, 85.3%id, 13.2%wa,  0.0%hi,  0.0%si,  0.0%st


[GENERAL] FATAL: bogus freespace amount

2008-03-31 Thread Carlos H. Reimer
Hi,

We are facing the following problem during PostgreSQL 8.2.4 startup on a
Windows XP/SP2 box:


2008-03-31 13:35:12  1: LOG:  database system was interrupted at 2008-03-31 09:45:44
2008-03-31 13:35:12  2: LOG:  checkpoint record is at 0/7588FFD8
2008-03-31 13:35:12  3: LOG:  redo record is at 0/7588FFD8; undo record is at 0/0; shutdown TRUE
2008-03-31 13:35:12  4: LOG:  next transaction ID: 0/100487; next OID: 909454
2008-03-31 13:35:12  5: LOG:  next MultiXactId: 1; next MultiXactOffset: 0
2008-03-31 13:35:12  6: LOG:  database system was not properly shut down; automatic recovery in progress
2008-03-31 13:35:12  7: LOG:  record with zero length at 0/75890038
2008-03-31 13:35:12  8: LOG:  redo is not required
2008-03-31 13:35:12  9: LOG:  database system is ready
2008-03-31 13:35:12  10: FATAL:  bogus freespace amount
2008-03-31 13:35:12  1: LOG:  startup process (PID 856) exited with exit code 1
2008-03-31 13:35:12  2: LOG:  aborting startup due to startup process failure
2008-03-31 13:35:13  1: LOG:  logger shutting down

What can be done to bring the system up again?

Thank you in advance!

Reimer


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread mark
On Mon, Mar 31, 2008 at 12:59 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:

> On 31/03/2008 20:51, mark wrote:
>
> > can you explain what the numbers mean in the EXPLAIN ANALYZE?
> > (cost=0.00..8.46 rows=1 width=1073)  (actual time=0.094..0.161 rows=1
> > loops=1)
>
> As I understand it, "cost" signifies the planner's estimate of how long
> the query will take, in terms of disk page fetches. The "actual time" is
> in milliseconds.


It says the actual time is 0.161 (seconds? milliseconds?), but the total
runtime is 11 seconds. Any ideas why the discrepancy?

>  Index Scan using idx_uid on users  (cost=0.00..8.46 rows=1 width=1073)
> (actual time=0.094..0.161 rows=1 loops=1)
>Index Cond: (uid = 738889333)
>  Total runtime: 11479.053 ms


Re: [GENERAL] FATAL: bogus freespace amount

2008-03-31 Thread Tom Lane
"Carlos H. Reimer" <[EMAIL PROTECTED]> writes:
> 2008-03-31 13:35:12  10: FATAL:  bogus freespace amount

This probably means $PGDATA/global/pg_fsm.cache has gotten trashed.
Fortunately that's just a cache --- you should be able to delete the
file and start up.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Raymond O'Donnell

On 31/03/2008 21:11, mark wrote:

It says the actual time is 0.161 (seconds? milliseconds?), but the total
runtime is 11 seconds. Any ideas why the discrepancy?


Well, I interpret the docs as implying that the difference between the 
time quoted in the top line of EXPLAIN ANALYZE's output and the time in 
"Total runtime" is effectively the time taken to apply the update. 
Beyond that, I'm guessing.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] GSoC Proposal: PL/Mono

2008-03-31 Thread Olexandr Melnyk
After some thinking, I decided it's better for me not to apply for SoC this
year for personal reasons.

On 3/29/08, Shane Ambler <[EMAIL PROTECTED]> wrote:
>
> Olexandr Melnyk wrote:
> > Hello,
> >
> > I'm interested in applying for this year's GSoC program, to work on
> > PL/Mono: pluggable procedural language for PostgreSQL. The main purpose
> > of the project is to allow programmers to write stored procedures for
> > PostgreSQL database in one of the languages that are supported by Mono
> > platform (similar to how they can do it with PL/Perl and PL/Python).
>
>
>
> From the google site -
> We will begin accepting student applications on Monday, March 24th; the
> deadline for student applications is Monday, March 31, 2008. Please see
> the program timeline for specific deadline times.
>
> You have less than two days to get your application in.
>
>
>
> > As I remember from the past, such attempt already existed and was hosted
> > at pgFoundry, but I can't find any files related to it. Does anyone know
> > about its status?
>
>
> From what I can find it is a dead project and probably was lost in the
> transfers from the gborg to pgfoundry switch.
>
> If you want the old project as a starting point you will need to find
> someone that has a copy of it. (or who has backups of gborg cvs)
>
>
> Try a search of the mailing list archives for pl/mono and you will find
> a few threads on it. A couple of them might help you find someone with a
> copy of it.
>
>
>
>
>
> --
>
> Shane Ambler
> pgSQL (at) Sheeky (dot) Biz
>
> Get Sheeky @ http://Sheeky.Biz
>



-- 
Sincerely yours,
Olexandr Melnyk
http://omelnyk.net/


Re: [HACKERS] [GENERAL] Connection to PostgreSQL Using Certificate: Wrong Permissions on Private Key File

2008-03-31 Thread korry



Now libpq doesn't have any provision for DETAIL or HINT in its
locally-generated messages at the moment, so we can't just duplicate
the backend message, but we could do something like this example
from elsewhere in libpq:

if (stat_buf.st_mode & (S_IRWXG | S_IRWXO))
{
    fprintf(stderr,
            libpq_gettext("WARNING: password file \"%s\" has world or group read access; permission should be u=rw (0600)\n"),
            pgpassfile);
    return NULL;
}
  
Hmmm... I'm not crazy about libpq printing error messages to stderr.  
The client application can't intercept those messages. And those 
messages will often get lost - many client applications don't have 
useful stderr streams (think GUI application on Win32).


 -- Korry

--

 Korry Douglas  <[EMAIL PROTECTED]>
 EnterpriseDB   http://www.enterprisedb.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Check if table or View exists

2008-03-31 Thread Jeff Williams
I am developing an application that will have a default table and/or view
for a select statement.  If a client has a special requirement, I would
like to have an alternate table of the same name in a different schema,
with the changed structure.

Schema A
   Default table - coretable.foo

Schema B
   Client specific table client.foo

How can I check whether a table or alternate view exists in schema B?

ie

if client.foo exists // check for a view or table
  // my code will use client.foo table
else
  // my code will use coretable.foo
endif

Also, is it possible to do the check via a function/view in Postgres,
rather than relying on my application to do it?  If so, how?





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-31 Thread Clemens Schwaighofer

1) What type of names do you prefer?
---
b) new one with pg_ prefix - pg_createdb, pg_createuser ...

2) How often do you use these tools?
---

a) every day (e.g. in my cron)



3) What name of initdb do you prefer?
---



b) pg_initdb

4) How do you perform VACUUM?
---

a) vacuumdb - shell command
b) VACUUM - SQL command



[ Clemens Schwaighofer  -=:~ ]
[ IT Engineer/Manager, TEQUILA\ Japan IT Group   ]
[6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN ]
[ Tel: +81-(0)3-3545-7703Fax: +81-(0)3-3545-7343 ]
[ http://www.tequila.co.jp   ]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Check if table or View exists

2008-03-31 Thread Jorge Godoy
On Monday 31 March 2008 22:35:01 Jeff Williams wrote:
> I am developing an application where I will have a default table and/or
> view for a select statement for my application.  If a client has a special
> requirement I would like to have a alternate table of the same name in a
> different schema with the change structure.
>
> Schema A
>Default table - coretable.foo
>
> Schema B
>Client specific table client.foo
>
> How can I check if a table or alternate view exists in schema B.
>
> ie
>
> if client.foo exists // check for a view or table
>   // my code will use client.foo table
> else
>   // my code will use coretable.foo
> endif
>
> Also is it possible to check via function/via in postgress and not have to
> rely on my application to do the check.  If so how.

I'd give both the same name and put the client schema first in the search 
path.  Then you can just run the query (or call the function), and name 
resolution will follow the search path order, giving you what you want 
without having to check anything.
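
A minimal sketch (assuming the schemas really are called "client" and
"coretable" and the table is "foo", as in your message):

    -- per-session (or per-role) setup: client schema first
    SET search_path TO client, coretable, public;

    -- an unqualified reference now resolves to client.foo if it exists,
    -- otherwise to coretable.foo
    SELECT * FROM foo;

    -- if you still want an explicit existence check, the catalogs have it
    SELECT EXISTS (
        SELECT 1
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'client'
          AND c.relname = 'foo'
          AND c.relkind IN ('r', 'v')   -- ordinary table or view
    );

You can also make the search path stick for a given login role with
something like ALTER ROLE clientrole SET search_path = client, coretable,
public; (the role name here is just an illustration).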

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-31 Thread F. Jovan Jester
1. a - old notation
2. a
3. e & d
4. b & c

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-31 Thread Stephen Frost
* F. Jovan Jester ([EMAIL PROTECTED]) wrote:
> 1. a - old notation
> 2. a
> 3. e & d
> 4. b & c

*blink*

hmm.  How about 1 and 2?
(is this an April fools joke?)

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] [GENERAL] Connection to PostgreSQL Using Certificate: Wrong Permissions on Private Key File

2008-03-31 Thread Tom Lane
"korry" <[EMAIL PROTECTED]> writes:
> Hmmm... I'm not crazy about libpq printing error messages to stderr.  

Me neither, feel free to submit a patch.

The basic problem here is that the obvious fix involves feeding
the message to a PQnoticeProcessor callback, but these messages
occur during connection setup and there's no way to have called
PQsetNoticeProcessor yet.

So I think you've got to invent some green-field API if you want
to improve it, and that means nothing will happen out in the
real world for three to five years :-(

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using tables in other PostGreSQL database

2008-03-31 Thread Pettis, Barry
Wow, seems like this post took on a life of its own.  All I wanted to do
was to be able to use a table that someone else has already created.
Somewhere along the way someone mentioned a DBA (which I'm assuming means
"Database Administrator"); as far as I know we don't have one, though I
wish we did.

My question comes from the fact that I currently use multiple Access
databases.  Each database contains one piece of information (information
that on its own has no relationship to other data), but other databases
will use items from it.  Hence in MS Access I "link" the tables in, which
I know is nothing more than a connection.

I wish I could say I knew with certainty what schemas or 2PC are.  It
would be nice if I had exposure to other databases as well; I'm sure I'd
have the same questions about Microsoft's SQL Server.

But I thank the forum here for all the input.

Regards,
Barry Pettis
 
CSO Atmel Corp
Project Tech

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Daniel Verite
Sent: Monday, March 31, 2008 3:36 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Using tables in other PostGreSQL database


Scott Marlowe wrote:


> > > Even in Oracle you don't have cross db queries.
> >
> > On the contrary you do. You can refer to objects in another database
> > by OBJECT_NAME@DBLINK_NAME, very useful to mix local and remote data
> > in no time. DBLINK_NAME represents a connection to another database.
> > What you don't have is OTHERDB.OBJECT_NAME to refer to a different
> > database within the same instance, because there is only one database
> > in an Oracle instance.
> 
> What you are talking about are cross schema references, not cross db.

No, I'm definitely referring to cross-db, not cross-schema.
See this piece of the Oracle documentation:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10759/sql_elements009.htm#i27761
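
For example (Oracle syntax; the link and table names here are just
illustrations):

    CREATE DATABASE LINK remote_db
        CONNECT TO scott IDENTIFIED BY tiger USING 'remote_tns_alias';

    -- mix local and remote data in one query
    SELECT o.order_id, c.name
    FROM   orders o
    JOIN   customers@remote_db c ON c.customer_id = o.customer_id;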

> Oracle instances can have > 1 database, it's just not that common.  I
> know this because we had an internal instance at the last company I
> worked at that had 2 databases in it, each with their own schemas.  Or
> maybe they somehow had two instances of oracle running on the same
> box.

CREATE DATABASE is to Oracle what initdb is to PG, it's something you 
do once per instance. So no, an Oracle instance doesn't have >1 
databases, just like a PG instance doesn't have >1 data directories.

Regards,

-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: 
http://www.manitou-mail.org

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing support for python on windows

2008-03-31 Thread juan_carlos

Hello, I have the same problem. 
I've checked plpython.dll dependencies with the depends tool and all is correct.
I tried to install plperl with no success.

What can I do?



Magnus Hagander-2 wrote:
> 
> On Tue, Feb 20, 2007 at 05:15:38PM -0500, Rhys Stewart wrote:
>> 
>> 
>> Ok, so I am having trouble installing plpython, and found this thread.
>> However, after adding postgresql/bin to the path and the python lib
>> directory to the path I still get:
>> 
>> createlang: language installation failed: ERROR:  could not load library
>> "C:/Program Files/PostgreSQL/8.2/lib/plpython.dll": The specified module
>> could not be found.
>> 
>> So maybe I'm doing the path thing wrong? I used:
>> 
>> C:\Program Files\PostgreSQL\8.2\bin>set PATH=C:\WINNT\system32;C:\WINNT;C:\WINNT\System32\Wbem;C:\Python24\Lib;c:\Program Files\PostggreSQL\8.2\bin;c:\Program Files\PostggreSQL\8.2\lib
>> 
>> So if that is correct, what else am I missing?
> 
> Please run the depends.exe tool from the Windows Support Tools to
> determine which module it's failing to load.
> 
> //Magnus
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Installing-support-for-python-on-windows-tp8110347p16396272.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simple update queries take a long time - postgres 8.3.1

2008-03-31 Thread Tomasz Ostrowski
On 2008-03-31 21:16, mark wrote:

> is the query I am running , and it takes over 10 seconds to complete 
> this query...
> update users set number_recieved=number_recieved+1 where
> uid=738889333;

Every time or only sometimes?

If it is only sometimes, then I think this query is waiting for a checkpoint.
If I'm right, you'll have to tune Postgres to do checkpoints more often, so
that there is less work to do each time. You use 4GB of shared buffers, so a
write-busy database can write gigabytes of data on each checkpoint, which can
take seconds.

Upgrading to 8.3 will help, as the checkpoint writing algorithm was optimized
there for machines with lots of RAM.

Read this for more info and tuning tips:
http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm
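
For a quick look at the relevant settings and at how often checkpoints are
actually happening (a rough sketch; pg_stat_bgwriter and
checkpoint_completion_target assume 8.3):

    SHOW shared_buffers;
    SHOW checkpoint_segments;
    SHOW checkpoint_timeout;
    SHOW checkpoint_completion_target;

    -- checkpoints_req much larger than checkpoints_timed means checkpoints
    -- are being forced by WAL volume rather than by the timeout
    SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
    FROM pg_stat_bgwriter;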

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general