[GENERAL] text search in 8.1

2010-02-22 Thread AI Rumman
Does PostgreSQL 8.1 support Full Text Search?
If yes, please provide a link to the documentation.


Re: [GENERAL] text search in 8.1

2010-02-22 Thread David Fetter
On Mon, Feb 22, 2010 at 02:47:00PM +0600, AI Rumman wrote:
> Does Postgresql 8.1 support Full Text Search?
> If yes, please provide the link about documentation.

It's available as an add-on, but since 8.1 is so close to its end of
life, consider moving to 8.4 first, or if the project is out past Q3
of this year, to 9.0.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] text search in 8.1

2010-02-22 Thread AI Rumman
I have a plan to upgrade the database, but right now I have to use text
search indexing for a performance improvement.

Following is the rpm status of my server:

[r...@vcrmdev01 ~]# rpm -qa|grep postgres
postgresql-8.1.11-1.el5_1.1
postgresql-python-8.1.11-1.el5_1.1
postgresql-server-8.1.11-1.el5_1.1
postgresql-libs-8.1.11-1.el5_1.1
postgresql-contrib-8.1.11-1.el5_1.1

Please tell me how I can install text search here.



On Mon, Feb 22, 2010 at 3:20 PM, David Fetter  wrote:

> On Mon, Feb 22, 2010 at 02:47:00PM +0600, AI Rumman wrote:
> > Does Postgresql 8.1 support Full Text Search?
> > If yes, please provide the link about documentation.
>
> It's available as an add-on, but since 8.1 is so close to its end of
> life, consider moving to 8.4 first, or if the project is out past Q3
> of this year, to 9.0.
>
> Cheers,
> David.
> --
> David Fetter  http://fetter.org/
> Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
> Skype: davidfetter  XMPP: david.fet...@gmail.com
> iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
>


[GENERAL] Missing clog, PITR

2010-02-22 Thread Patryk Sidzina
Hello everyone,

my company has been using pg_standby as a replication solution for a
while and it has been working great for our needs. Unfortunately, about
once a month we get the following error on the standby bases:

vacuumdb: vacuuming of database "" failed: ERROR:  could not access
status of transaction 3625953267
DETAIL:  Could not open file "pg_clog/0D81": No such file or directory.

I've read your solutions to similar problems posted on this mailing list
but our situation is a bit different. First, we cannot lose any data so
creating a zero-filled clog is out of the question (we did try that
though, but the error came up again with older clogs missing). Second, I
don't believe this is a memory problem (as suggested in other posts)
because this happens on two different machines (we have one master and
two slaves). The important part is that the master database never had
this problem, it only happens on the slaves.

So my questions are:
1) how do the clogs relate to wal shipping based replication? Clearly
the master doesn't need that clog but the slave does.

2) could 'pg_clearxlogtail' in archive_command be a cause of this? This
is our archive_command:
'cat %p | pg_clearxlogtail | gzip -c | 
ssh slavehost "cat > /var/lib/postgresql/replication/in/%f"'

3) is there a faster way to debug this problem? Clogs fill slowly; it
takes about a month on a very busy production server for a clog to be
removed by the master DB.


More info:
PostgreSQL 8.2.14 64-bit (though this happened in older versions also)
pg_standby from PostgreSQL 8.3.6


I'd be grateful for any ideas on this problem. If needed I can provide
more setup info.

-- 
Patryk Sidzina




[GENERAL] What is unsecure postgres languages? How to disable them?

2010-02-22 Thread dipti shah
Hi,

Could anyone please tell me what is unsecure postgres languages(like C,
pgperl, pgpython??). How to disable them or restrict them only for super
user?

Thanks,
Dipti


[GENERAL] helo

2010-02-22 Thread beulah prasanthi
Hello,
I am working on a Spring project with Postgres 8.4.
I wrote a function in Postgres that takes an argument "email" of array type
email[].
From the front end we need to insert data into that email array, so I used
java.util.ArrayList.
While running it I got the following error. Please help me.

error:
org.postgresql.util.PSQLException: Cannot cast an instance of
java.util.ArrayList to type Types.ARRAY


[GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
Hello All,

I have been writing a function with SECURITY DEFINER enabled. Basically, I
am looking for ways to override the user's SET option settings while
executing my function, to prevent a permissions breach. For example, to
override "SET search_path", I set the search path in my function before
executing anything. Could anyone please tell me what other SET options I
should take care of?

Moreover, how do I revert those settings just before returning from my
function?

Thanks, Jack


[GENERAL] TypeCast: util.list to array type

2010-02-22 Thread beulah prasanthi
Hello,
I am working on a Spring project with Postgres 8.4.
I wrote a function in Postgres that takes an argument "email" of array type
email[].
From the front end we need to insert data into that email array, so I used
java.util.ArrayList.
While running it I got the following error. Please help me.

error:
org.postgresql.util.PSQLException: Cannot cast an instance of
java.util.ArrayList to type Types.ARRAY


Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
Hello

You can override standard settings for just the function:

CREATE [ OR REPLACE ] FUNCTION
name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = }
default_expr ] [, ...] ] )
[ RETURNS rettype
  | RETURNS TABLE ( column_name column_type [, ...] ) ]
  { LANGUAGE lang_name
| WINDOW
| IMMUTABLE | STABLE | VOLATILE
| CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
| [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
| COST execution_cost
| ROWS result_rows
| SET configuration_parameter { TO value | = value | FROM CURRENT } <<<===
| AS 'definition'
| AS 'obj_file', 'link_symbol'
  } ...
[ WITH ( attribute [, ...] ) ]
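
For example, a minimal sketch (hypothetical function name and body): the
marked SET clause pins search_path for the duration of the call and restores
the caller's value automatically on return, so no manual revert is needed.

```sql
CREATE OR REPLACE FUNCTION get_secret(uid integer) RETURNS text AS
$$ SELECT 'stub' $$          -- placeholder body
LANGUAGE sql
SECURITY DEFINER
SET search_path = pg_catalog, pg_temp;   -- applies only while the function runs
```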

Regards
Pavel Stehule


2010/2/22 Jignesh Shah :
> Hello All,
>
> I have been writing a function with SECURITY DEFINER enabled. Basically, I
> am looking for ways to override the users SET option settings while
> executing my function to prevent the permissions breach. For example, to
> override "SET search_path", I am setting search path in my function before
> executing anything. Could any one please tell me what could be other SET
> options that I should take care?
>
> Moreover, how to revert back those settings just before returning from my
> function?
>
> Thanks, Jack



Re: [GENERAL] What is unsecure postgres languages? How to disable them?

2010-02-22 Thread Tino Wildenhain

Hi,

Am 22.02.2010 11:56, schrieb dipti shah:

Hi,
Could anyone please tell me what is unsecure postgres languages(like C,
pgperl, pgpython??). How to disable them or restrict them only for super
user?


They are restricted to superusers precisely because of their
"insecure" nature. Those languages allow you full access
to the system (and even some innards of PostgreSQL) with the rights
of the postgresql process. You can remove the language handlers:

http://www.postgresql.org/docs/8.1/static/app-droplang.html

if you wish. Apart from that, there is no more risk attached to them,
unless you are a superuser or write insecure functions with them,
than with, say, the COPY command.
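
To see what is installed, and the statement behind the droplang utility
(the language name here is only an example):

```sql
-- list installed languages; lanpltrusted = false marks the untrusted ones
SELECT lanname, lanpltrusted FROM pg_language;

-- drop a handler, as droplang would
DROP LANGUAGE plperlu;
```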

Regards
Tino Wildenhain





[GENERAL] tsearch2 gives NOTICE: word is too long

2010-02-22 Thread AI Rumman
When I run the query:

select length(description), to_tsvector('default',description) as c from
crmentity ;

I get:

NOTICE:  word is too long

PostgreSQL 8.1.

Could anyone please tell me why?


Re: [GENERAL] Asynchronous queries - processing listen (notify) in a procedural language

2010-02-22 Thread Merlin Moncure
On Sun, Feb 21, 2010 at 10:32 PM, Merlin Moncure  wrote:
> On Sun, Feb 21, 2010 at 9:22 PM, Tom Lane  wrote:
>> Merlin Moncure  writes:
>>> On Sat, Feb 20, 2010 at 9:38 PM, Petr Chmelar  wrote:
 Is there a way how to listen and trigger the notify messages in the
 database (+-)immediately and/or to execute additional (trigger) queries
 in other transactions?
>>
>>> The only way that I know of to send notify 'in-transaction' is via
>>> dblink...you just send 'notify x' as the query which commits and fires
>>> the action.  It doesn't make sense to do this if your outer
>>> transaction is very short in duration.
>>
>> It's not clear that it makes sense to do that in a long transaction,
>> either.  What are you notifying other sessions *about*?  Not your own
>> changes --- they won't be able to see those till you commit.  There's
>> a reason why NOTIFY is delayed till commit ...
>
> Heh...I almost mentioned this on the listen/notify thread.  There is
> actually a case for mid transaction notify that I rely on quite a bit:
> when you need to request information from some client that is attached
> to your database.  The database needs to signal the client and go get
> the information and return it, preferably _inside_ the notifying
> transaction so that you can have the information come back as a result
> to the function that set up the notification.  The way I currently do
> this is via dblink: establish a receiving record that the
> client stores its response data with, and block for it in the
> transaction that set up the dblink.  Since it's read committed I can
> block and wait for the data or a timeout.
>
> With immediate notification and payloads, the dblink approach wouldn't
> be needed.  I could establish the receiving record, and notify the

small correction here.  I would have to have the client establish the
record at the ID of the notifier's choosing (probably nextval() on a
sequence) and then wait for it to be inserted.  Obviously, you can't
generate a record and wait around for it to be updated in the same
transaction.
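
A minimal sketch of that dblink trick (assumes contrib/dblink is installed;
connection string and channel name are made up): the NOTIFY goes out over a
second connection whose implicit transaction commits at once, so listeners
see it while the outer transaction is still open.

```sql
-- inside a long-running transaction on the main connection:
SELECT dblink_connect('side', 'dbname=mydb');
SELECT dblink_exec('side', 'NOTIFY work_request');  -- commits immediately
-- ... block here, waiting (read committed) for the client's reply row ...
SELECT dblink_disconnect('side');
```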

merlin



[GENERAL] pgsql exception.

2010-02-22 Thread Ravi
Hi,
while passing a java.util.List to a Postgres stored procedure I get an
exception saying
*org.postgresql.util.PSQLException: Cannot cast an instance of
java.util.ArrayList to type Types.ARRAY*
Can you help me resolve it? It's urgent.

Method in Dao:

public void savePerson(Person person, List addresses, List emails, List numbers) {
    call.withProcedureName("person_save1");
    Map out = call.execute(new PGPerson(person), addresses, null, null);
}

*SP in Database*

CREATE OR REPLACE FUNCTION person_save(person_to_save person, addresses
address[], emails email[], numbers phone[])
  RETURNS void AS
$BODY$
DECLARE
    per_id bigint;
BEGIN
    IF person_to_save.id IS NULL OR person_to_save.id = 0 THEN
        SELECT INTO per_id nextval('people_id_seq');
        INSERT INTO person (id, first_name, last_name, middle_name)
        VALUES (per_id, person_to_save.first_name, person_to_save.last_name,
                person_to_save.middle_name);
    ELSE
        per_id := person_to_save.id;
        UPDATE person SET first_name = person_to_save.first_name,
                          last_name = person_to_save.last_name,
                          middle_name = person_to_save.middle_name
        WHERE id = person_to_save.id;
    END IF;
    IF addresses IS NOT NULL THEN
        EXECUTE person_addresses_save(per_id, addresses);
    END IF;
    IF emails IS NOT NULL THEN
        EXECUTE person_emails_save(per_id, emails);
    END IF;
    IF numbers IS NOT NULL THEN
        EXECUTE person_phones_save(per_id, numbers);
    END IF;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
ALTER FUNCTION person_save(person, address[], email[], phone[]) OWNER TO
postgres;
COMMENT ON FUNCTION person_save(person, address[], email[], phone[]) IS
'Saves a person.  If the person is new they will insert or if they exist
they will update.
Will do the same for the other objects also addresses, emails, phone';


*Exception raised:

org.postgresql.util.PSQLException: Cannot cast an instance of
java.util.ArrayList to type Types.ARRAY*

-- 
Ravi.T
BayLogic Technologies India Pvt. Ltd.
Vizag.


[GENERAL] ECPG: No multidimensional array support for simple data types

2010-02-22 Thread mike stanton
Hello all, I get the following error message when ecpg precompiles an EXEC SQL 
INCLUDE on this variable:

short   cst_vent[MAX_SUC][12]; (MAX_SUC is defined as 24)

Message:
 "No multidimensional array support for simple data types"

Is there a fix or am I stuck?
Version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.2.4


Michael Stanton W.


Re: [GENERAL] ECPG: No multidimensional array support for simple data types

2010-02-22 Thread Boszormenyi Zoltan
mike stanton írta:
> Hello all, I get the following error message when ecpg precompiles an
> EXEC SQL INCLUDE on this variable:
>  
> short   cst_vent[MAX_SUC][12]; (MAX_SUC is defined as 24)
>  
> Mesage:
>
>  "No multidimensional array support for simple data types"
>
> Is there a fix or am I stuck?
> Version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc
> (GCC) 4.2.4

You're stuck. Only 1D arrays are supported for "simple" (int, double,
etc.) types.
"2D" character arrays are supported (actually, 1D array of strings) if
you use
char   *str[N];
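
The usual workaround is to flatten the 2-D array into a 1-D host variable and
index it by hand. A minimal sketch in plain C (dimensions follow the original
mail; the helper function is hypothetical, and inside an ECPG source file the
declaration would sit in an EXEC SQL BEGIN/END DECLARE SECTION):

```c
#include <assert.h>

#define MAX_SUC 24
#define MONTHS  12

/* ECPG only accepts 1-D arrays of simple types, so declare the
   host variable flat and compute the (suc, month) offset manually. */
short cst_vent[MAX_SUC * MONTHS];

/* hypothetical helper addressing one (suc, month) cell */
short *cst_vent_cell(int suc, int month)
{
    return &cst_vent[suc * MONTHS + month];
}
```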

Best regards,
Zoltán Böszörményi

-- 
Bible has answers for everything. Proof:
"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more
than these cometh of evil." (Matthew 5:37) - basics of digital technology.
"May your kingdom come" - superficial description of plate tectonics

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
http://www.postgresql.at/




Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Albe Laurenz
Jignesh Shah wrote:
> I have been writing a function with SECURITY DEFINER enabled. 
> Basically, I am looking for ways to override the users SET 
> option settings while executing my function to prevent the 
> permissions breach. For example, to override "SET 
> search_path", I am setting search path in my function before 
> executing anything. Could any one please tell me what could 
> be other SET options that I should take care?
>  
> Moreover, how to revert back those settings just before 
> returning from my function?

You can use the SET clause of CREATE FUNCTION which does exactly
what you want.

Yours,
Laurenz Albe



Re: [GENERAL] What is unsecure postgres languages? How to disable them?

2010-02-22 Thread Albe Laurenz
dipti shah wrote:
> Could anyone please tell me what is unsecure postgres 
> languages(like C, pgperl, pgpython??). How to disable them or 
> restrict them only for super user?

I have never heard of "unsecure" languages - what exactly do you mean?

If you mean "untrusted" languages like PL/PerlU, they are not
installed by default *and* they are restricted to superusers.

You cannot disable C functions, but to define them you must also
have superuser privileges.

Yours,
Laurenz Albe



Re: [GENERAL] text search in 8.1

2010-02-22 Thread Albe Laurenz
AI Rumman wrote:
> I have a plan to upgrade database, but right now I have to 
> use text search indexing for performance improvement.
>  
> Following is the rpm status of my server:
>  
> [r...@vcrmdev01 ~]# rpm -qa|grep postgres
> postgresql-8.1.11-1.el5_1.1
> postgresql-python-8.1.11-1.el5_1.1
> postgresql-server-8.1.11-1.el5_1.1
> postgresql-libs-8.1.11-1.el5_1.1
> postgresql-contrib-8.1.11-1.el5_1.1
> 
>  
> Please tell me how can I install textsearch here.

You have already installed it.

It is in the "contrib" package and is called "tsearch2".
Look at the documentation provided - you need to run an SQL
script to install it in the database.
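
A sketch of that install step; the script path is an assumption based on the
CentOS/RHEL contrib rpm layout (check rpm -ql postgresql-contrib for the real
location):

```sql
-- in psql, connected as a superuser to the target database:
\i /usr/share/pgsql/contrib/tsearch2.sql

-- quick smoke test afterwards
SELECT to_tsvector('default', 'the quick brown fox');
```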

I also recommend using PostgreSQL 8.3 or better where
full text search is in core. Otherwise your upgrade process
will become more difficult.

Yours,
Laurenz Albe



Re: [GENERAL] tsearch2 gives NOTICE: word is too long

2010-02-22 Thread Albe Laurenz
AI Rumman wrote:
> When I am using the query:
>  
> select length(description), 
> to_tsvector('default',description) as c from crmentity ;
>  
> Getting error:
>  
> NOTICE:  word is too long
>  
> Postgresql 8.1.
>  
> Could anyone please tell me why?

Because there is a "word" in the "description" column that
exceeds the maximum length (4096 I think).

This is not an error, just information that you will not
be able to search for this word.

Yours,
Laurenz Albe



Re: [GENERAL] helo

2010-02-22 Thread Allan Kamau
On Mon, Feb 22, 2010 at 2:10 PM, beulah prasanthi  wrote:
> Helo
>  I am working on spring project with postgres 8.4
> i wrote a function in postgrees which i am passing the argument email
> email[] array
> From front end we need to insesrt data into that emailarray .so i used
> java.arraylist.util
> while i am running i got the following error Please help me
>
> error:
> org.postgresql.util.PSQLException: Cannot cast an instance of
> java.util.ArrayList to type Types.ARRAY


You have cross-posted.

You may want to compose a comma-separated-values string out of the
values you'd like to send as an array, then send this string instead
of the array (of course your PostgreSQL function will need the
datatype of its input variable changed to TEXT). Then, inside your
Postgres function, unpack the CSV text into an array using some regular
expression(s).

Maybe something like this, assuming "_my_CSV_string" is your text variable
that contains the CSV data:

SELECT regexp_split_to_array(_my_CSV_string,
       E',(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))') AS my_array;



Allan.



[GENERAL] Performance cost of a sort-merge join

2010-02-22 Thread Yang Zhang
Hi, I have the given tables and query, but I'm confused why the cost
of the query is so high. I've left it running over night. By
comparison, a "select * from metarelcloud_transactionlog order by
transactionid" takes on the order of seconds/minutes (at least in
MySQL). As far as I can tell, the complex query below basically just
consists of two sorts (three if PG doesn't in fact reuse the leaf
sorts). Why the enormous estimated page IO count (cost) on the second
sort? There are roughly 30 tuples per transactionid, so the join
shouldn't produce a vastly exploded dataset. Thanks in advance.

tpcc=# \d metarelcloud_graph
  Table "public.metarelcloud_graph"
  Column  | Type  | Modifiers
--+---+---
 tableid1 | character varying(20) | not null
 tupleid1 | integer   | not null
 tableid2 | character varying(20) | not null
 tupleid2 | integer   | not null
 node1| integer   | not null
 node2| integer   | not null
 weight   | integer   | not null
Indexes:
"metarelcloud_graph_pkey" PRIMARY KEY, btree (tableid1, tupleid1,
tableid2, tupleid2)

tpcc=# \d metarelcloud_transactionlog
          Table "public.metarelcloud_transactionlog"
       Column        |         Type          | Modifiers
---------------------+-----------------------+-----------
 id  | integer   | not null default
nextval('metarelcloud_transactionlog_id_seq'::regclass)
 transactionid   | integer   | not null
 queryid | smallint  | not null
 tableid | character varying(30) | not null
 tupleid | integer   | not null
 querytype   | character varying | not null
 graphpartition  | smallint  |
 replicatedpartition | smallint  |
 justifiedpartition  | smallint  |
 hashpartition   | smallint  |
 nodeid  | integer   |
 manualpartition | smallint  |
Indexes:
"metarelcloud_transactionlog_pkey" PRIMARY KEY, btree (id)
Check constraints:
"metarelcloud_transactionlog_graphpartition_check" CHECK
(graphpartition >= 0)
"metarelcloud_transactionlog_hashpartition_check" CHECK (hashpartition >= 0)
"metarelcloud_transactionlog_justifiedpartition_check" CHECK
(justifiedpartition >= 0)
"metarelcloud_transactionlog_manualpartition_check" CHECK
(manualpartition >= 0)
"metarelcloud_transactionlog_querytype_check" CHECK
(querytype::text = ANY (ARRAY['select'::character varying,
'insert'::character varying, 'delete'::character varying,
'update'::character varying]::text[]))
"metarelcloud_transactionlog_replicatedpartition_check" CHECK
(replicatedpartition >= 0)

tpcc=# analyze metarelcloud_transactionlog;
ANALYZE

tpcc=# explain insert into metarelcloud_graph (node1, node2, tableid1,
tupleid1, tableid2, tupleid2, weight)
select 0, 0, a.tableid, a.tupleid, b.tableid, b.tupleid, count(*)
from metarelcloud_transactionlog a, metarelcloud_transactionlog b
where a.transactionid = b.transactionid
  and (a.tableid, a.tupleid) < (b.tableid, b.tupleid)
group by a.tableid, a.tupleid, b.tableid, b.tupleid;
  QUERY PLAN
--
 Subquery Scan "*SELECT*"  (cost=968062444010.30..1088362355018.20
rows=2673331355731 width=180)
   ->  GroupAggregate  (cost=968062444010.30..1041579056292.91
rows=2673331355731 width=26)
 ->  Sort  (cost=968062444010.30..974745772399.63
rows=2673331355731 width=26)
   Sort Key: a.tableid, a.tupleid, b.tableid, b.tupleid
   ->  Merge Join  (cost=16817274.69..160416950669.79
rows=2673331355731 width=26)
 Merge Cond: (a.transactionid = b.transactionid)
 Join Filter: (ROW((a.tableid)::text, a.tupleid) <
ROW((b.tableid)::text, b.tupleid))
 ->  Sort  (cost=8408637.34..8534662.95
rows=50410244 width=17)
   Sort Key: a.transactionid
   ->  Seq Scan on metarelcloud_transactionlog
a  (cost=0.00..925543.44 rows=50410244 width=17)
 ->  Materialize  (cost=8408637.34..9038765.39
rows=50410244 width=17)
   ->  Sort  (cost=8408637.34..8534662.95
rows=50410244 width=17)
 Sort Key: b.transactionid
 ->  Seq Scan on
metarelcloud_transactionlog b  (cost=0.00..925543.44 rows=50410244
width=17)
(14 rows)

-- 
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Performance cost of a sort-merge join

2010-02-22 Thread Tom Lane
Yang Zhang  writes:
> Hi, I have the given tables and query, but I'm confused why the cost
> of the query is so high.

The reason the estimated cost is so high is that the estimated number of
rows out of the join is enormous.  It's going to take awhile.  One
question worth asking is what you've got work_mem set to.  Another
possible problem is that you're doing a lot of sorts/comparisons on
varchars, which could be pretty expensive if you're using a non-C locale.
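
Both knobs Tom mentions can be inspected from any session; the value below is
only an example:

```sql
SHOW work_mem;           -- per-sort memory budget; sorts spill to disk above it
SHOW lc_collate;         -- a non-C locale makes varchar comparisons costlier

SET work_mem = '256MB';  -- session-local override for big sorts
```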

> I've left it running over night. By
> comparison, a "select * from metarelcloud_transactionlog order by
> transactionid" takes on the order of seconds/minutes (at least in
> MySQL).

That's got approximately nothing to do with this query.

regards, tom lane



Re: [GENERAL] Un successful Restoration of DATA using WAL files

2010-02-22 Thread Bruce Momjian

Based on your suggestions, I have applied the attached patch to mention
_not_ to use pg_dump or pg_dumpall in two places, and to briefly explain
why.  Thanks.

---

Craig Ringer wrote:
> On Mon, 2009-10-19 at 07:18 -0700, Mitesh51 wrote:
> > I am unable to restore data with the use of WAL files by following 
> > procedure.
> > 
> > I have done following changes in postgres.conf to enable WAL archiving...
> > 
> > archive_mode = on   # allows archiving to be done
> > archive_command = 'copy "%p" "C:\\archivedir\\%f"'  
> > 
> > I have one database(built in) postgres. I create one table student in it.
> > and take full backup(only of a single database & I am not copying complete
> > data dir..) @ 12:40 pm with the 
> > 
> > pg_dump.exe -h localhost -p 5432 -U postgres -f C:\test.sql postgres
> 
> Any comments from readers on the following suggestion of changes to the
> PITR docs at:
> 
> http://www.postgresql.org/docs/current/interactive/continuous-archiving.html
> 
> 
> User misunderstandings on the pgsql-general mailing list suggest
> that a clear and prominent warning needs to be added to this
> page to prevent people from trying to combine a pg_dump base
> backup with WAL archiving. People are failing to understand that
> the base backup must be a file-system-level copy taken after
> calling pg_start_backup() .
> 
> Suggested changes:
> 
> "maintains a write ahead log" -> "maintains a block-level write
> ahead log" in the first paragraph.
> 
> "we can combine a file-system-level backup" -> 
> "we can combine a file-system-level backup of the data directory
> (not a pg_dump backup)"
> 
> Also, somewhere in the introductory section, something like this
> would be good:
> 
> "IMPORTANT: WAL archiving and PITR cannot be used with an
> SQL-level base backup taken with pg_dump. See "Making a Base
> Backup" below for the correct method of backing up your database
> for WAL archiving and PITR. See "Caveats" for details."
> 
> In "Caveats":
> 
> "You can't use pg_dump to make a base backup, restore that to a
> different cluster or a new database in the original cluster,
> then apply archived WAL files to it. WAL archiving records a
> history of changes to the database cluster at a very low level,
> rather than recording anything like SQL commands. The WAL files
> essentially record sequences of writes of byte sequences to
> offsets within files in the cluster. A new cluster (or a new
> database created in the original cluster by CREATE DATABASE)
> will have a different layout in its files, so the WAL archives
> don't make any sense to apply to the new cluster."
> 
> 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/backup.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.143
diff -c -c -r2.143 backup.sgml
*** doc/src/sgml/backup.sgml	22 Feb 2010 11:47:30 -	2.143
--- doc/src/sgml/backup.sgml	22 Feb 2010 17:14:02 -
***
*** 484,489 
--- 484,499 


  
+   
+
+ pg_dump and
+ pg_dumpall do not produce file-system-level
+ backups and cannot be used as part of a continuous-archiving solution.
+ Such dumps are logical and do not contain enough
+ information to be used by WAL replay.
+
+   
+ 

 As with the plain file-system-backup technique, this method can only
 support restoration of an entire database cluster, not a subset.
***
*** 744,750 
 
  
   Perform the backup, using any convenient file-system-backup tool
!  such as tar or cpio.  It is neither
   necessary nor desirable to stop normal operation of the database
   while you do this.
  
--- 754,762 
 
  
   Perform the backup, using any convenient file-system-backup tool
!  such as tar or cpio (not
!  pg_dump or
!  pg_dumpall).  It is neither
   necessary nor desirable to stop normal operation of the database
   while you do this.
  



Re: [GENERAL] [BUGS] helo

2010-02-22 Thread Kris Jurka



On Mon, 22 Feb 2010, beulah prasanthi wrote:


Helo
 I am working on spring project with postgres 8.4
i wrote a function in postgrees which i am passing the argument email
email[] array
From front end we need to insesrt data into that emailarray .so i used
java.arraylist.util
while i am running i got the following error Please help me

error: 
org.postgresql.util.PSQLException: Cannot cast an instance of
java.util.ArrayList to type Types.ARRAY



You need to pass an instance of java.sql.Array, not a Collection or 
something else that is array-like.


Also please only report an issue once rather than a half dozen times.

Kris Jurka


Re: [GENERAL] What is unsecure postgres languages? How to disable them?

2010-02-22 Thread dipti shah
Sorry Albe for the confusion. Yes, I meant untrusted languages like C, PL/PerlU,
PL/PythonU, etc.

Thanks a lot to you and Tino for the nice replies. Could you guys tell me how I
could verify whether those languages are installed on my PostgreSQL server?

Thanks for being there,
Dipti

On Mon, Feb 22, 2010 at 8:14 PM, Albe Laurenz wrote:

> dipti shah wrote:
> > Could anyone please tell me what is unsecure postgres
> > languages(like C, pgperl, pgpython??). How to disable them or
> > restrict them only for super user?
>
> I have never heard of "unsecure" languages - what exactly do you mean?
>
> If you mean "untrusted" languages like PL/PerlU, they are not
> installed by default *and* they are restricted to superusers.
>
> You cannot disable C functions, but to define them you must also
> have superuser privileges.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] [BUGS] helo

2010-02-22 Thread Kevin Grittner
Kris Jurka  wrote:
 
> You need to pass an instance of java.sql.Array, not a Collection
> or something else that is array like.
 
Ah, right.  After extracting an array from the java.util.ArrayList,
it needs to be turned into a java.sql.Array using the
Connection.createArrayOf method.
 
Most definitely not a PostgreSQL bug.
 
-Kevin



Re: [GENERAL] Performance cost of a sort-merge join

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 12:10 PM, Tom Lane  wrote:
>> I've left it running over night. By
>> comparison, a "select * from metarelcloud_transactionlog order by
>> transactionid" takes on the order of seconds/minutes (at least in
>> MySQL).
>
> That's got approximately nothing to do with this query.

Isn't that exactly what the leaf sorts are doing? By comparison,
"select * from metarelcloud_transactionlog order by transactionid"
takes much, much longer in PG (it's been running for 68 minutes now,
and still going, whereas MySQL took 6 minutes).
--
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
Thanks a ton Laurenz and Pavel for your responses, but I didn't quite follow
you. I am not a PostgreSQL master yet. Could you please give me an
example?

Basically, I want to know which SET options I should reset before
executing my function, and they should also be restored to the original
settings at the end.

It would be really helpful if you could elaborate your response.

Thanks guys.
Jack

On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz wrote:

> Jignesh Shah wrote:
> > I have been writing a function with SECURITY DEFINER enabled.
> > Basically, I am looking for ways to override the users SET
> > option settings while executing my function to prevent the
> > permissions breach. For example, to override "SET
> > search_path", I am setting search path in my function before
> > executing anything. Could any one please tell me what could
> > be other SET options that I should take care?
> >
> > Moreover, how to revert back those settings just before
> > returning from my function?
>
> You can use the SET clause of CREATE FUNCTION which does exactly
> what you want.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] What is unsecure postgres languages? How to disable them?

2010-02-22 Thread Scott Bailey

dipti shah wrote:
Sorry, Albe, for the confusion. Yes, I meant untrusted languages like C,
PL/PerlU, PL/PythonU, etc.


Thanks a lot to you and Tino for the nice replies. Could you guys tell me
how I can verify whether those languages are installed on my PostgreSQL
server?


Thanks for being there,
Dipti

On Mon, Feb 22, 2010 at 8:14 PM, Albe Laurenz wrote:


dipti shah wrote:
 > Could anyone please tell me what are unsecure postgres
 > languages (like C, pgperl, pgpython)? How can I disable them or
 > restrict them to superusers only?

I have never heard of "unsecure" languages - what exactly do you mean?

If you mean "untrusted" languages like PL/PerlU, they are not
installed by default *and* they are restricted to superusers.

You cannot disable C functions, but to define them you must also
have superuser privileges.

Yours,
Laurenz Albe



As mentioned, you can't remove C/internal, and the others are not
installed unless you went out of your way to install them.


SELECT *
FROM pg_catalog.pg_language
WHERE lanpltrusted = false;

Scott



[GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
I have the exact same table of data in both MySQL and Postgresql. In Postgresql:

tpcc=# \d metarelcloud_transactionlog
                              Table "public.metarelcloud_transactionlog"
       Column        |         Type          |                        Modifiers
---------------------+-----------------------+---------------------------------------------------------------------------
 id                  | integer               | not null default nextval('metarelcloud_transactionlog_id_seq'::regclass)
 transactionid       | integer               | not null
 queryid             | smallint              | not null
 tableid             | character varying(30) | not null
 tupleid             | integer               | not null
 querytype           | character varying     | not null
 graphpartition      | smallint              |
 replicatedpartition | smallint              |
 justifiedpartition  | smallint              |
 hashpartition       | smallint              |
 nodeid              | integer               |
 manualpartition     | smallint              |
Indexes:
"metarelcloud_transactionlog_pkey" PRIMARY KEY, btree (id)
Check constraints:
"metarelcloud_transactionlog_graphpartition_check" CHECK
(graphpartition >= 0)
"metarelcloud_transactionlog_hashpartition_check" CHECK (hashpartition >= 0)
"metarelcloud_transactionlog_justifiedpartition_check" CHECK
(justifiedpartition >= 0)
"metarelcloud_transactionlog_manualpartition_check" CHECK
(manualpartition >= 0)
"metarelcloud_transactionlog_querytype_check" CHECK
(querytype::text = ANY (ARRAY['select'::character varying,
'insert'::character varying, 'delete'::character varying,
'update'::character varying]::text[]))
"metarelcloud_transactionlog_replicatedpartition_check" CHECK
(replicatedpartition >= 0)

In MySQL:

CREATE TABLE `metarelcloud_transactionlog` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `transactionid` int(11) NOT NULL,
  `queryid` tinyint(4) NOT NULL,
  `tableid` varchar(30) NOT NULL,
  `tupleid` int(11) NOT NULL,
  `querytype` enum('select','insert','delete','update') NOT NULL,
  `graphpartition` tinyint(3) unsigned DEFAULT NULL,
  `replicatedpartition` tinyint(3) unsigned DEFAULT NULL,
  `justifiedpartition` tinyint(3) unsigned DEFAULT NULL,
  `hashpartition` tinyint(3) unsigned DEFAULT NULL,
  `nodeid` int(11) DEFAULT NULL,
  `manualpartition` tinyint(3) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `transactionid` (`transactionid`),
  KEY `tableid` (`tableid`,`tupleid`),
  KEY `nodeid` (`nodeid`)
) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT CHARSET=latin1

I'm running:

  select * from metarelcloud_transactionlog order by transactionid;

It takes MySQL 6 minutes, but Postgresql is still running after 70
minutes. Is there something like a glaring misconfiguration that I'm
overlooking? Thanks in advance.
--
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
2010/2/22 Jignesh Shah :
> Thanks a ton Laurenz and Pavel for your responses but I really didn't follow
> you. I am not master in PostGreSQL yet. Could you please give me some
> example?
>
> Basically, I want to know how many such SET options I should reset before
> executing my function and at the end it should also be restored to original
> settings.
>

create or replace function foop()
 returns int as $$
select 10
$$ language sql
set work_mem to '1MB'
set search_path = 'public';
CREATE FUNCTION
postgres=#

regards
Pavel Stehule

> It would be really helpful if you could elaborate your response.
>
> Thanks guys.
> Jack
>
> On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
> wrote:
>>
>> Jignesh Shah wrote:
>> > I have been writing a function with SECURITY DEFINER enabled.
>> > Basically, I am looking for ways to override the users SET
>> > option settings while executing my function to prevent the
>> > permissions breach. For example, to override "SET
>> > search_path", I am setting search path in my function before
>> > executing anything. Could any one please tell me what could
>> > be other SET options that I should take care?
>> >
>> > Moreover, how to revert back those settings just before
>> > returning from my function?
>>
>> You can use the SET clause of CREATE FUNCTION which does exactly
>> what you want.
>>
>> Yours,
>> Laurenz Albe
>
>



[GENERAL] SET Role doesn't work from Security Definer Function...

2010-02-22 Thread dipti shah
Hi,

I have just noticed that "SET ROLE" doesn't work from a security definer
function. I don't know why, but it clearly gives an error saying that SET
ROLE doesn't work in a security definer context.

Basically, I am trying to write a stored procedure that creates a table
requested by a user, along with the associated logging and event tables,
automatically. I want to make sure that when users use my stored procedure
to create a table, they are allowed to only if they have permission to do
so.

If I create the function as the postgres user with SECURITY DEFINER
enabled, it will allow creating any table with any foreign references,
etc. So I am setting the role to current_user in my function and then
creating the table, to make sure the user has the appropriate privilege.

Since SET ROLE fails in a security definer context, I am stuck. Could
anyone suggest a workaround or solution to this issue? Is this a known
issue? Has anybody encountered it before?

Thanks,
Dipti


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Pavel Stehule
hello

the speed depends on the setting of work_mem. Try to increase work_mem:

set work_mem to '10MB';

Regards
Pavel Stehule

2010/2/22 Yang Zhang :
> I have the exact same table of data in both MySQL and Postgresql. In 
> Postgresql:
>
> tpcc=# \d metarelcloud_transactionlog
>                                       Table
> "public.metarelcloud_transactionlog"
>       Column        |         Type          |
>       Modifiers
> -+---+--
>  id                  | integer               | not null default
> nextval('metarelcloud_transactionlog_id_seq'::regclass)
>  transactionid       | integer               | not null
>  queryid             | smallint              | not null
>  tableid             | character varying(30) | not null
>  tupleid             | integer               | not null
>  querytype           | character varying     | not null
>  graphpartition      | smallint              |
>  replicatedpartition | smallint              |
>  justifiedpartition  | smallint              |
>  hashpartition       | smallint              |
>  nodeid              | integer               |
>  manualpartition     | smallint              |
> Indexes:
>    "metarelcloud_transactionlog_pkey" PRIMARY KEY, btree (id)
> Check constraints:
>    "metarelcloud_transactionlog_graphpartition_check" CHECK
> (graphpartition >= 0)
>    "metarelcloud_transactionlog_hashpartition_check" CHECK (hashpartition >= 
> 0)
>    "metarelcloud_transactionlog_justifiedpartition_check" CHECK
> (justifiedpartition >= 0)
>    "metarelcloud_transactionlog_manualpartition_check" CHECK
> (manualpartition >= 0)
>    "metarelcloud_transactionlog_querytype_check" CHECK
> (querytype::text = ANY (ARRAY['select'::character varying,
> 'insert'::character varying, 'delete'::character varying,
> 'update'::character varying]::text[]))
>    "metarelcloud_transactionlog_replicatedpartition_check" CHECK
> (replicatedpartition >= 0)
>
> In MySQL:
>
> CREATE TABLE `metarelcloud_transactionlog` (
>  `id` int(11) NOT NULL AUTO_INCREMENT,
>  `transactionid` int(11) NOT NULL,
>  `queryid` tinyint(4) NOT NULL,
>  `tableid` varchar(30) NOT NULL,
>  `tupleid` int(11) NOT NULL,
>  `querytype` enum('select','insert','delete','update') NOT NULL,
>  `graphpartition` tinyint(3) unsigned DEFAULT NULL,
>  `replicatedpartition` tinyint(3) unsigned DEFAULT NULL,
>  `justifiedpartition` tinyint(3) unsigned DEFAULT NULL,
>  `hashpartition` tinyint(3) unsigned DEFAULT NULL,
>  `nodeid` int(11) DEFAULT NULL,
>  `manualpartition` tinyint(3) unsigned DEFAULT NULL,
>  PRIMARY KEY (`id`),
>  KEY `transactionid` (`transactionid`),
>  KEY `tableid` (`tableid`,`tupleid`),
>  KEY `nodeid` (`nodeid`)
> ) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT CHARSET=latin1
>
> I'm running:
>
>  select * from metarelcloud_transactionlog order by transactionid;
>
> It takes MySQL 6 minutes, but Postgresql is still running after 70
> minutes. Is there something like a glaring misconfiguration that I'm
> overlooking? Thanks in advance.
> --
> Yang Zhang
> http://www.mit.edu/~y_z/
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



[GENERAL] Complex SELECT Statement help needed

2010-02-22 Thread flashbangpop

I'm new to postgres and am trying to display rows in a table that have a
value between two fields.

The rows I want to display should be selected by the grademin or grademax
fields. I'd like to display the rows in sections that are labeled Grades
3-5, 6-8, and 9-12.

The problem that I am having is that I can't figure out how to write a
statement that will check both fields, especially when I have values
like this:

grademin 4 grademax 12 that I want to show up in all levels
grademin 5 grademax 12 ""
grademin 6 grademax 9 in 6-8 and 9-12

Below is what I have so far, but it falls flat on its face when trying
to display rows that don't fall entirely within a specific range.

How can I check grademin and max in the same statement?


$query = "SELECT
lessonwebid,lessontitle,gradelevel,grademin,grademax,reviewedby FROM
lessonplans WHERE grademin >= 3 OR grademin <= 5";

$query = "SELECT
lessonwebid,lessontitle,gradelevel,grademin,grademax,reviewedby FROM
lessonplans WHERE grademin >= 6 OR grademin <= 8";

$query = "SELECT
lessonwebid,lessontitle,gradelevel,grademin,grademax,reviewedby FROM
lessonplans WHERE grademin >= 9 OR grademin <= 12";

Thanks ahead for any suggestions.




Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Frank Heikens
There is no index on the column transactionid in your PostgreSQL table,
as there is in your MySQL table. This explains the difference.


CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog  
(transactionid);



Op 22 feb 2010, om 19:10 heeft Yang Zhang het volgende geschreven:

I have the exact same table of data in both MySQL and Postgresql. In  
Postgresql:


tpcc=# \d metarelcloud_transactionlog
                              Table "public.metarelcloud_transactionlog"
       Column        |         Type          |                        Modifiers
---------------------+-----------------------+---------------------------------------------------------------------------
 id                  | integer               | not null default nextval('metarelcloud_transactionlog_id_seq'::regclass)
 transactionid       | integer               | not null
 queryid             | smallint              | not null
 tableid             | character varying(30) | not null
 tupleid             | integer               | not null
 querytype           | character varying     | not null
 graphpartition      | smallint              |
 replicatedpartition | smallint              |
 justifiedpartition  | smallint              |
 hashpartition       | smallint              |
 nodeid              | integer               |
 manualpartition     | smallint              |
Indexes:
   "metarelcloud_transactionlog_pkey" PRIMARY KEY, btree (id)
Check constraints:
   "metarelcloud_transactionlog_graphpartition_check" CHECK
(graphpartition >= 0)
   "metarelcloud_transactionlog_hashpartition_check" CHECK  
(hashpartition >= 0)

   "metarelcloud_transactionlog_justifiedpartition_check" CHECK
(justifiedpartition >= 0)
   "metarelcloud_transactionlog_manualpartition_check" CHECK
(manualpartition >= 0)
   "metarelcloud_transactionlog_querytype_check" CHECK
(querytype::text = ANY (ARRAY['select'::character varying,
'insert'::character varying, 'delete'::character varying,
'update'::character varying]::text[]))
   "metarelcloud_transactionlog_replicatedpartition_check" CHECK
(replicatedpartition >= 0)

In MySQL:

CREATE TABLE `metarelcloud_transactionlog` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 `transactionid` int(11) NOT NULL,
 `queryid` tinyint(4) NOT NULL,
 `tableid` varchar(30) NOT NULL,
 `tupleid` int(11) NOT NULL,
 `querytype` enum('select','insert','delete','update') NOT NULL,
 `graphpartition` tinyint(3) unsigned DEFAULT NULL,
 `replicatedpartition` tinyint(3) unsigned DEFAULT NULL,
 `justifiedpartition` tinyint(3) unsigned DEFAULT NULL,
 `hashpartition` tinyint(3) unsigned DEFAULT NULL,
 `nodeid` int(11) DEFAULT NULL,
 `manualpartition` tinyint(3) unsigned DEFAULT NULL,
 PRIMARY KEY (`id`),
 KEY `transactionid` (`transactionid`),
 KEY `tableid` (`tableid`,`tupleid`),
 KEY `nodeid` (`nodeid`)
) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT CHARSET=latin1

I'm running:

 select * from metarelcloud_transactionlog order by transactionid;

It takes MySQL 6 minutes, but Postgresql is still running after 70
minutes. Is there something like a glaring misconfiguration that I'm
overlooking? Thanks in advance.
--
Yang Zhang
http://www.mit.edu/~y_z/



Frank Heikens
frankheik...@mac.com






Re: [GENERAL] Complex SELECT Statement help needed

2010-02-22 Thread Raymond O'Donnell
On 22/02/2010 18:14, flashbangpop wrote:

> I'm new to postgres and am trying to display rows in a table that are have a
> value between 2 fields. 
> 
> The rows I want to display should be selected by the grademin or grademax
> fields. I'd like to display the rows in sections that are labeled Grades
> 3-5, 6-8, and 9-12.
> 
> The problem that I am having is I can't figure out how to write a statement
> that will check both fields. And, especially when I have values in the like
> this:
> 
> grademin 4 grademax 12 that I want to show up in all levels
> grademin 5 grademax 12 "" 
> grademin 6 grademax 9 in 6-9 and 9-12
> 
> Below is what I have so far, but it falls flat on it's face when trying to
> display rows that don't fall between the specific range. 
> 
> How can I check grademin and max in the same statement?

Are grademin and grademax two separate columns in the table? If so,
simply include as many conditions as you need:

  select 
  where grademin ...
  and grademax 
  and 
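For the band layout described here (Grades 3-5, 6-8, 9-12), the usual trick
is an interval-overlap test: a lesson spanning [grademin, grademax] belongs
in a band [lo, hi] whenever grademin <= hi AND grademax >= lo. A minimal
sketch of that predicate, using SQLite from Python purely for illustration
(the table and values are hypothetical stand-ins for lessonplans):

```python
import sqlite3

# Hypothetical miniature of the lessonplans table, just to exercise the
# interval-overlap predicate: a lesson spanning [grademin, grademax]
# belongs to a band [lo, hi] whenever grademin <= hi AND grademax >= lo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lessonplans (lessontitle TEXT, grademin INT, grademax INT)")
conn.executemany(
    "INSERT INTO lessonplans VALUES (?, ?, ?)",
    [("A", 4, 12), ("B", 5, 12), ("C", 6, 9), ("D", 3, 5)],
)

def band(lo, hi):
    # One statement checks both fields at once.
    rows = conn.execute(
        "SELECT lessontitle FROM lessonplans "
        "WHERE grademin <= ? AND grademax >= ? ORDER BY lessontitle",
        (hi, lo),
    ).fetchall()
    return [r[0] for r in rows]

print(band(3, 5))    # ['A', 'B', 'D']  (C starts at grade 6)
print(band(6, 8))    # ['A', 'B', 'C']
print(band(9, 12))   # ['A', 'B', 'C']  (D tops out at grade 5)
```

The same WHERE clause carries over verbatim to the PostgreSQL queries
above; e.g. the 6-8 section becomes WHERE grademin <= 8 AND grademax >= 6.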


Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Richard Broersma
On Mon, Feb 22, 2010 at 10:17 AM, Frank Heikens  wrote:

> There is no index on the column transactionid in your PostgreSQL-table, as
> there is in your MySQL-table. This explains the difference.
>
> CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog
> (transactionid);

Does an index help a sort operation in PostgreSQL?


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 1:13 PM, Pavel Stehule  wrote:
> hello
>
> the speed depends on the setting of work_mem. Try to increase work_mem:
>
> set work_mem to '10MB';

It's already at

tpcc=# show work_mem;
 work_mem
--
 2kB
(1 row)

I also wouldn't have imagined an external merge-sort as being very
memory-intensive -- wouldn't it only need enough buffer space to read 2x
and write 1x in big-enough chunks for mostly-sequential access?
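For what it's worth, the bounded-buffer behaviour being described can be
sketched with a toy external merge-sort (an illustration only, not how
Postgres's tuplesort is actually implemented; `chunk` plays the role of
work_mem):

```python
import heapq
import tempfile

# Toy external merge sort: hold only `chunk` items in memory at a time,
# spill sorted runs to temp files, then stream-merge the runs. Memory use
# stays proportional to chunk size plus one buffered line per run.
def external_sort(items, chunk=4):
    runs, buf = [], []

    def spill():
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(f"{x}\n" for x in sorted(buf))
        f.seek(0)
        runs.append(f)
        buf.clear()

    for x in items:
        buf.append(x)
        if len(buf) == chunk:
            spill()
    if buf:
        spill()

    # heapq.merge reads the runs lazily, one line per run at a time.
    streams = ((int(line) for line in f) for f in runs)
    return list(heapq.merge(*streams))

print(external_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```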
-- 
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 1:30 PM, Richard Broersma
 wrote:
> On Mon, Feb 22, 2010 at 10:17 AM, Frank Heikens  wrote:
>
>> There is no index on the column transactionid in your PostgreSQL-table, as
>> there is in your MySQL-table. This explains the difference.
>>
>> CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog
>> (transactionid);
>
> Does an index help a sort operation in PostgreSQL?

I also share the same doubt. An external merge-sort needs to make
complete passes over the entire dataset, with no index-directed
accesses.
-- 
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Jignesh Shah
>> set work_mem to '1MB'
>> set search_path = 'public';

Thanks for the example Pavel. I understood it. Are there any other SET
options except above that I need to set to prevent security breach?

Thanks,
Jack

On Mon, Feb 22, 2010 at 11:41 PM, Pavel Stehule wrote:

> 2010/2/22 Jignesh Shah :
> > Thanks a ton Laurenz and Pavel for your responses but I really didn't
> follow
> > you. I am not master in PostGreSQL yet. Could you please give me some
> > example?
> >
> > Basically, I want to know how many such SET options I should reset before
> > executing my function and at the end it should also be restored to
> original
> > settings.
> >
>
> create or replace function foop()
>  returns int as $$
> select 10
> $$ language sql
> set work_mem to '1MB'
> set search_path = 'public';
> CREATE FUNCTION
> postgres=#
>
> regards
> Pavel Stehule
>
> > It would be really helpful if you could elaborate your response.
> >
> > Thanks guys.
> > Jack
> >
> > On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
> > wrote:
> >>
> >> Jignesh Shah wrote:
> >> > I have been writing a function with SECURITY DEFINER enabled.
> >> > Basically, I am looking for ways to override the users SET
> >> > option settings while executing my function to prevent the
> >> > permissions breach. For example, to override "SET
> >> > search_path", I am setting search path in my function before
> >> > executing anything. Could any one please tell me what could
> >> > be other SET options that I should take care?
> >> >
> >> > Moreover, how to revert back those settings just before
> >> > returning from my function?
> >>
> >> You can use the SET clause of CREATE FUNCTION which does exactly
> >> what you want.
> >>
> >> Yours,
> >> Laurenz Albe
> >
> >
>


Re: [GENERAL] Questions regarding SET option.

2010-02-22 Thread Pavel Stehule
2010/2/22 Jignesh Shah :
>>> set work_mem to '1MB'
>>> set search_path = 'public';
>
> Thanks for the example Pavel. I understood it. Are there any other SET
> options except above that I need to set to prevent security breach?
>

I am not sure - I know only search_path

Pavel

> Thanks,
> Jack
>
> On Mon, Feb 22, 2010 at 11:41 PM, Pavel Stehule 
> wrote:
>>
>> 2010/2/22 Jignesh Shah :
>> > Thanks a ton Laurenz and Pavel for your responses but I really didn't
>> > follow
>> > you. I am not master in PostGreSQL yet. Could you please give me some
>> > example?
>> >
>> > Basically, I want to know how many such SET options I should reset
>> > before
>> > executing my function and at the end it should also be restored to
>> > original
>> > settings.
>> >
>>
>> create or replace function foop()
>>  returns int as $$
>> select 10
>> $$ language sql
>> set work_mem to '1MB'
>> set search_path = 'public';
>> CREATE FUNCTION
>> postgres=#
>>
>> regards
>> Pavel Stehule
>>
>> > It would be really helpful if you could elaborate your response.
>> >
>> > Thanks guys.
>> > Jack
>> >
>> > On Mon, Feb 22, 2010 at 8:05 PM, Albe Laurenz 
>> > wrote:
>> >>
>> >> Jignesh Shah wrote:
>> >> > I have been writing a function with SECURITY DEFINER enabled.
>> >> > Basically, I am looking for ways to override the users SET
>> >> > option settings while executing my function to prevent the
>> >> > permissions breach. For example, to override "SET
>> >> > search_path", I am setting search path in my function before
>> >> > executing anything. Could any one please tell me what could
>> >> > be other SET options that I should take care?
>> >> >
>> >> > Moreover, how to revert back those settings just before
>> >> > returning from my function?
>> >>
>> >> You can use the SET clause of CREATE FUNCTION which does exactly
>> >> what you want.
>> >>
>> >> Yours,
>> >> Laurenz Albe
>> >
>> >
>
>



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Frank Heikens


Op 22 feb 2010, om 19:30 heeft Richard Broersma het volgende geschreven:

On Mon, Feb 22, 2010 at 10:17 AM, Frank Heikens  
 wrote:


There is no index on the column transactionid in your PostgreSQL- 
table, as

there is in your MySQL-table. This explains the difference.

CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog
(transactionid);


Does an index help a sort operation in PostgreSQL?


Yes it does, see the manual: 
http://www.postgresql.org/docs/8.4/interactive/indexes-ordering.html

Example without index:
"Sort  (cost=804.39..829.39 rows=1 width=4) (actual time=16.006..17.171 rows=1 loops=1)"
"  Sort Key: bar"
"  Sort Method:  quicksort  Memory: 491kB"
"  ->  Seq Scan on bla  (cost=0.00..140.00 rows=1 width=4) (actual time=0.015..2.236 rows=1 loops=1)"
"Total runtime: 18.098 ms"

Same query with index (btree):
"Index Scan Backward using i_bar on bla  (cost=0.00..406.25 rows=1 width=4) (actual time=0.093..4.408 rows=1 loops=1)"
"Total runtime: 5.381 ms"
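The effect is easy to reproduce in miniature -- here with SQLite standing
in for Postgres, purely to illustrate the same principle the plans above
show: without an index the engine needs an explicit sort step for ORDER BY,
while a btree index lets it scan the rows already in order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bla (bar INT)")
conn.executemany("INSERT INTO bla VALUES (?)", [(i,) for i in range(1000)])

def plan():
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail.
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM bla ORDER BY bar"
    ).fetchall()
    return " | ".join(r[-1] for r in rows)

before = plan()   # mentions "USE TEMP B-TREE FOR ORDER BY": an explicit sort
conn.execute("CREATE INDEX i_bar ON bla (bar)")
after = plan()    # scans via the index instead; no sort step needed

print(before)
print(after)
```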




--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Regards,
Frank Heikens






Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Alban Hertroys
On 22 Feb 2010, at 19:35, Yang Zhang wrote:

> I also wouldn't have imagined an external merge-sort as being very


Where's that external merge-sort coming from? Can you show an explain analyze?

If your work-mem is too low there's a good chance that Postgres has to use your 
disks for sorting, which will obviously be quite slow.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Alvaro Herrera
Yang Zhang wrote:

> I'm running:
> 
>   select * from metarelcloud_transactionlog order by transactionid;
> 
> It takes MySQL 6 minutes, but Postgresql is still running after 70
> minutes. Is there something like a glaring misconfiguration that I'm
> overlooking? Thanks in advance.

How large is the table, and have you vacuumed it?  Did you analyze it?
What Pg version is this?

-- 
Alvaro Herrera                             http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 1:48 PM, Alban Hertroys
 wrote:
> On 22 Feb 2010, at 19:35, Yang Zhang wrote:
>
>> I also wouldn't have imagined an external merge-sort as being very
>
>
> Where's that external merge-sort coming from? Can you show an explain analyze?

I just assumed that the "Sort" in the EXPLAIN output meant an external
merge-sort, given that the table has over 50 million tuples and is
over 3GB, *and* there is no index on the sort key:

tpcc=# explain select * from metarelcloud_transactionlog order by transactionid;
                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Sort  (cost=8408637.34..8534662.95 rows=50410244 width=17)
   Sort Key: a.transactionid
   ->  Seq Scan on metarelcloud_transactionlog a  (cost=0.00..925543.44 rows=50410244 width=17)
(3 rows)

Anyway, I added the INDEX as suggested by Frank, but it's been 20
minutes and it's still running. With the index, EXPLAIN says:

tpcc=# explain select * from metarelcloud_transactionlog order by transactionid;
                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Index Scan using i_transactionid on metarelcloud_transactionlog  (cost=0.00..4453076.81 rows=50410164 width=44)
(1 row)

> If your work-mem is too low there's a good chance that Postgres has to use 
> your disks for sorting, which will obviously be quite slow.

Relative to the non-terminating 80-minute-so-far sort, Unix sort runs
much faster (on the order of several minutes).
--
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:03 PM, Alvaro Herrera
 wrote:
> Yang Zhang escribió:
>
>> I'm running:
>>
>>   select * from metarelcloud_transactionlog order by transactionid;
>>
>> It takes MySQL 6 minutes, but Postgresql is still running after 70
>> minutes. Is there something like a glaring misconfiguration that I'm
>> overlooking? Thanks in advance.
>
> How large is the table, and have you vacuumed it?  Did you analyze it?
> What Pg version is this?

The table has 50.4M tuples. It's been vacuumed and analyzed. I'm using
PG 8.3.8 on Fedora 10 x86_64.
-- 
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Frank Heikens


Op 22 feb 2010, om 20:07 heeft Yang Zhang het volgende geschreven:


On Mon, Feb 22, 2010 at 1:48 PM, Alban Hertroys
 wrote:

On 22 Feb 2010, at 19:35, Yang Zhang wrote:


I also wouldn't have imagined an external merge-sort as being very



Where's that external merge-sort coming from? Can you show an  
explain analyze?


I just assumed that the "Sort" in the EXPLAIN output meant an external
merge-sort, given that the table has over 50 million tuples and is
over 3GB, *and* there is no index on the sort key:

tpcc=# explain select * from metarelcloud_transactionlog order by  
transactionid;

  QUERY PLAN
-
Sort  (cost=8408637.34..8534662.95 rows=50410244 width=17)
  Sort Key: a.transactionid
  ->  Seq Scan on metarelcloud_transactionlog a
(cost=0.00..925543.44 rows=50410244 width=17)
(3 rows)

Anyway, I added the INDEX as suggested by Frank, but it's been 20
minutes and it's still running. With the index, EXPLAIN says:

tpcc=# explain select * from metarelcloud_transactionlog order by  
transactionid;

  QUERY PLAN
-
Index Scan using i_transactionid on metarelcloud_transactionlog
(cost=0.00..4453076.81 rows=50410164 width=44)
(1 row)



Use EXPLAIN ANALYZE to see how the query is actually executed; it gives
you more details.



If your work-mem is too low there's a good chance that Postgres has  
to use your disks for sorting, which will obviously be quite slow.


Relative to the non-terminating 80-minute-so-far sort, Unix sort runs
much faster (on the order of several minutes).


Make sure your index fits into memory; what's the size of the index?



--
Yang Zhang
http://www.mit.edu/~y_z/



Frank Heikens






[GENERAL] Info about concurrent sequential scans

2010-02-22 Thread Daniele Varrazzo
Hello,

at the Prato PgDay in 2007 I remember hearing, in a talk, about a (then
yet to come) "seqscan piggyback" feature, allowing concurrent sequential
scans to share the same disk reads. I've now googled for info about this
feature, but I found nothing conclusive (e.g. [1], [2] - I don't know
where the latter is linked from).

So I'd like to know:

Is this feature currently implemented (I'm specifically interested in PG 8.3)?
Is there any prerequisite needed to benefit from it (a config setting,
query characteristics, etc.)?
Is there any feedback showing the feature kicking in (e.g. an EXPLAIN
ANALYZE output, a log message, etc.)?

Thank you very much.

-- Daniele

[1] http://wordpress.metro.cx/2008/02/24/postgresql-83/
[2] http://www.postgresql.org/about/featuredetail/feature.26



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:15 PM, Frank Heikens  wrote:
>
> Op 22 feb 2010, om 20:07 heeft Yang Zhang het volgende geschreven:
>
>> On Mon, Feb 22, 2010 at 1:48 PM, Alban Hertroys
>>  wrote:
>>>
>>> On 22 Feb 2010, at 19:35, Yang Zhang wrote:
>>>
 I also wouldn't have imagined an external merge-sort as being very
>>>
>>>
>>> Where's that external merge-sort coming from? Can you show an explain
>>> analyze?
>>
>> I just assumed that the "Sort" in the EXPLAIN output meant an external
>> merge-sort, given that the table has over 50 million tuples and is
>> over 3GB, *and* there is no index on the sort key:
>>
>> tpcc=# explain select * from metarelcloud_transactionlog order by
>> transactionid;
>>                                                  QUERY PLAN
>>
>> -
>> Sort  (cost=8408637.34..8534662.95 rows=50410244 width=17)
>>  Sort Key: a.transactionid
>>  ->  Seq Scan on metarelcloud_transactionlog a
>> (cost=0.00..925543.44 rows=50410244 width=17)
>> (3 rows)
>>
>> Anyway, I added the INDEX as suggested by Frank, but it's been 20
>> minutes and it's still running. With the index, EXPLAIN says:
>>
>> tpcc=# explain select * from metarelcloud_transactionlog order by
>> transactionid;
>>                                                  QUERY PLAN
>>
>> -
>> Index Scan using i_transactionid on metarelcloud_transactionlog
>> (cost=0.00..4453076.81 rows=50410164 width=44)
>> (1 row)
>>
>
> Use EXPLAIN ANALYZE to see how the query is executed, gives you more
> details.

As mentioned, this would take a very long time to run -- I have not
yet seen a single successful query completion. I'd need to let it run
overnight.

>
>
>>> If your work-mem is too low there's a good chance that Postgres has to
>>> use your disks for sorting, which will obviously be quite slow.
>>
>> Relative to the non-terminating 80-minute-so-far sort, Unix sort runs
>> much faster (on the order of several minutes).
>
> Make sure your index does fit into memory, what's the size of the index?

How might I find out the size, and whether it fits in memory?
-- 
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Alvaro Herrera
Yang Zhang escribió:
> I have the exact same table of data in both MySQL and Postgresql. In 
> Postgresql:

I just noticed two things:

[snip lots of stuff]

1.

> ) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT CHARSET=latin1

You're doing a comparison to MyISAM.


2.

>   select * from metarelcloud_transactionlog order by transactionid;

You're reading the whole table.

This is unlikely to fly very far.  I suggest you try some query that's
actually going to be used in the real world.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:27 PM, Alvaro Herrera
 wrote:
> Yang Zhang escribió:
>> I have the exact same table of data in both MySQL and Postgresql. In 
>> Postgresql:
>
> I just noticed two things:
>
> [snip lots of stuff]
>
> 1.
>
>> ) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT CHARSET=latin1
>
> You're doing a comparison to MyISAM.

We've actually been using innodb as well; it exhibits similar
execution times to MyISAM.

>
>
> 2.
>
>>   select * from metarelcloud_transactionlog order by transactionid;
>
> You're reading the whole table.
>
> This is unlikely to fly very far.  I suggest you try some query that's
> actually going to be used in the real world.

This isn't some microbenchmark. This is part of our actual analytical
application. We're running large-scale graph partitioning algorithms.
--
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 12:30 PM, Yang Zhang  wrote:
> This isn't some microbenchmark. This is part of our actual analytical
> application. We're running large-scale graph partitioning algorithms.

It's important to see how it runs if you can fit more / most of the
data set into memory by cranking up work_mem to something really big
(like a gigabyte or two) and if the query planner can switch to some
sort of hash algorithm.

Also, can you cluster the table on transactionid ?
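Clustering physically reorders the table on disk by the chosen index, so a
later ORDER BY on that column can be served by a mostly-sequential index
scan. A sketch, using the table and index names from earlier in the thread
(CLUSTER takes an exclusive lock and rewrites the whole table, so plan for
downtime on a table this size):

```sql
-- Build the index, then rewrite the table in index order.
CREATE INDEX i_transactionid ON metarelcloud_transactionlog (transactionid);
CLUSTER metarelcloud_transactionlog USING i_transactionid;
ANALYZE metarelcloud_transactionlog;  -- refresh planner statistics afterwards
```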

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Frank Heikens


Op 22 feb 2010, om 20:28 heeft Yang Zhang het volgende geschreven:






If your work-mem is too low there's a good chance that Postgres  
has to

use your disks for sorting, which will obviously be quite slow.


Relative to the non-terminating 80-minute-so-far sort, Unix sort  
runs

much faster (on the order of several minutes).


Make sure your index does fit into memory, what's the size of the  
index?


How might I find out the size and whether it's being fit in memory?


SELECT pg_size_pretty(pg_relation_size('i_transactionid'));



--
Yang Zhang
http://www.mit.edu/~y_z/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Frank Heikens




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex SELECT Statement help needed

2010-02-22 Thread flashbangpop

Thanks, I got these working:

$query = "SELECT
lessonwebid,lessontitle,gradelevel,grademin,grademax,reviewedby FROM
lessonplans WHERE grademin >= 3 AND grademin <= 5 OR grademax >= 3 AND
grademax <= 5";
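Note that the condition above can miss lessons whose range merely spans the
bracket: a row with grademin 1 and grademax 12 fails every comparison, yet it
covers grades 3-5. The standard interval-overlap test is both simpler and
correct for that case; a sketch, assuming the same table and columns:

```sql
-- A lesson's [grademin, grademax] range overlaps grades 3-5
-- iff it starts at or below 5 AND ends at or above 3.
SELECT lessonwebid, lessontitle, gradelevel, grademin, grademax, reviewedby
FROM lessonplans
WHERE grademin <= 5
  AND grademax >= 3;
```

The same two-comparison pattern works for the 6-8 and 9-12 sections by
substituting the bracket bounds.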




Raymond O'Donnell wrote:
> 
> On 22/02/2010 18:14, flashbangpop wrote:
> 
>> I'm new to postgres and am trying to display rows in a table that have a
>> value between 2 fields. 
>> 
>> The rows I want to display should be selected by the grademin or grademax
>> fields. I'd like to display the rows in sections that are labeled Grades
>> 3-5, 6-8, and 9-12.
>> 
>> The problem that I am having is I can't figure out how to write a
>> statement
>> that will check both fields. And, especially when I have values in the
>> like
>> this:
>> 
>> grademin 4 grademax 12 that I want to show up in all levels
>> grademin 5 grademax 12 "" 
>> grademin 6 grademax 9 in 6-9 and 9-12
>> 
>> Below is what I have so far, but it falls flat on its face when trying
>> to
>> display rows that don't fall between the specific range. 
>> 
>> How can I check grademin and max in the same statement?
> 
> Are grademin and grademax two separate columns in the table? If so,
> simply include as many conditions as you need:
> 
>   select 
>   where grademin ...
>   and grademax 
>   and 
> 
> 
> Ray.
> 
> 
> -- 
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Complex-SELECT-Statement-help-needed-tp27691760p27692084.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:39 PM, Scott Marlowe  wrote:
> On Mon, Feb 22, 2010 at 12:30 PM, Yang Zhang  wrote:
>> This isn't some microbenchmark. This is part of our actual analytical
>> application. We're running large-scale graph partitioning algorithms.
>
> It's important to see how it runs if you can fit more / most of the
> data set into memory by cranking up work_mem to something really big
> (like a gigabyte or two) and if the query planner can switch to some
> sort of hash algorithm.

We're actually using a very small dataset right now. Being bounded by
memory capacity is not a scalable approach for our application.

>
> Also, can you cluster the table on transactionid ?
>

We can, but that's not really addressing the core issue, which matters
to us since the sort itself is only for performing a self merge join
on transactionid, and the *very next step* is a group by a.tableid,
a.tupleid, b.tableid, b.tupleid (i.e. requiring another sort for the
group-agg).
--
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang  wrote:
> I have the exact same table of data in both MySQL and Postgresql. In 
> Postgresql:

Just wondering, are these on the same exact machine?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Info about concurrent sequential scans

2010-02-22 Thread Bruce Momjian
Daniele Varrazzo wrote:
> Hello,
> 
> at Prato PgDay in 2007 I remember hearing in a speech about a (then
> yet to come) "seqscan piggyback" feature, allowing concurrent
> sequential scans to use the same disk reads. I've now googled for info
> about this feature, but I found nothing conclusive (e.g. [1], [2] -
> which I don't know where is linked).
> 
> So I'd like to know:
> 
> is this feature currently implemented (I'm specifically interested in PG 8.3)?
> Is there any prerequisite needed to benefit from it (config setting,
> query characteristic, etc.)?
> Is there any feedback showing the feature kicking in (e.g. an explain
> analyze output, log, etc.)?

The feature is called 'synchronize_seqscans' and was implemented in
Postgres 8.3.
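The parameter can be inspected and toggled at runtime on an 8.3+ server;
a quick check might look like:

```sql
SHOW synchronize_seqscans;       -- 'on' by default in 8.3 and later
SET synchronize_seqscans = off;  -- disable for the current session, e.g. when
                                 -- a query without ORDER BY depends on a
                                 -- stable physical row order
```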

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:52 PM, Scott Marlowe  wrote:
> On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang  wrote:
>> I have the exact same table of data in both MySQL and Postgresql. In 
>> Postgresql:
>
> Just wondering, are these on the same exact machine?
>

Yes, on the same disk.
-- 
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 12:50 PM, Yang Zhang  wrote:
> On Mon, Feb 22, 2010 at 2:39 PM, Scott Marlowe  
> wrote:
>> On Mon, Feb 22, 2010 at 12:30 PM, Yang Zhang  wrote:
>>> This isn't some microbenchmark. This is part of our actual analytical
>>> application. We're running large-scale graph partitioning algorithms.
>>
>> It's important to see how it runs if you can fit more / most of the
>> data set into memory by cranking up work_mem to something really big
>> (like a gigabyte or two) and if the query planner can switch to some
>> sort of hash algorithm.
>
> We're actually using a very small dataset right now. Being bounded by
> memory capacity is not a scalable approach for our application.

But the more you can fit into work_mem the faster it will go anyway.
So it's still worth a try.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Info about concurrent sequential scans

2010-02-22 Thread Andreas Kretschmer
Daniele Varrazzo  wrote:

> Hello,
> 
> at Prato PgDay in 2007 I remember hearing in a speech about a (then
> yet to come) "seqscan piggyback" feature, allowing concurrent
> sequential scans to use the same disk reads. I've now googled for info
> about this feature, but I found nothing conclusive (e.g. [1], [2] -
> which I don't know where is linked).
> 
> So I'd like to know:
> 
> is this feature currently implemented (I'm specifically interested in PG 8.3)?

I think you mean this:

 Concurrent large sequential scans can now share disk reads (Jeff Davis)

This is accomplished by starting the new sequential scan in the middle
of the table (where another sequential scan is already in-progress) and
wrapping around to the beginning to finish. This can affect the order of
returned rows in a query that does not specify ORDER BY. The
synchronize_seqscans configuration parameter can be used to disable this
if necessary

Source:
http://www.postgresql.org/docs/current/interactive/release-8-3.html


> Is there any prerequisite needed to benefit from it (config setting,
> query characteristic, etc.)?

No.



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 2:41 PM, Frank Heikens  wrote:
>
> Op 22 feb 2010, om 20:28 heeft Yang Zhang het volgende geschreven:
>>
>>
>>>
>>>
> If your work-mem is too low there's a good chance that Postgres has to
> use your disks for sorting, which will obviously be quite slow.

 Relative to the non-terminating 80-minute-so-far sort, Unix sort runs
 much faster (on the order of several minutes).
>>>
>>> Make sure your index does fit into memory, what's the size of the index?
>>
>> How might I find out the size and whether it's being fit in memory?
>
> SELECT pg_size_pretty(pg_relation_size('i_transactionid'));

 pg_size_pretty
----------------
 1080 MB
(1 row)
-- 
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Missing clog, PITR

2010-02-22 Thread Richard Huxton

On 22/02/10 09:57, Patryk Sidzina wrote:

Hello everyone,

my company has been using pg_standby as a replication solution for a
while and it has been working great for our needs. Unfortunately, about
once a month we get the following error on the standby bases:

vacuumdb: vacuuming of database "" failed: ERROR:  could not access
status of transaction 3625953267
DETAIL:  Could not open file "pg_clog/0D81": No such file or directory.


Is vacuum running on the standby? It's been a long day here, but I'm not 
sure that makes sense.


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Info about concurrent sequential scans

2010-02-22 Thread Daniele Varrazzo
On Mon, Feb 22, 2010 at 7:59 PM, Andreas Kretschmer
 wrote:
> Daniele Varrazzo  wrote:
>
>> Hello,
>>
>> at Prato PgDay in 2007 I remember hearing in a speech about a (then
>> yet to come) "seqscan piggyback" feature, allowing concurrent
>> sequential scans to use the same disk reads. I've now googled for info
>> about this feature, but I found nothing conclusive (e.g. [1], [2] -
>> which I don't know where is linked).

> I think, you means this:
>
>  Concurrent large sequential scans can now share disk reads (Jeff Davis)
>
> This is accomplished by starting the new sequential scan in the middle
> of the table (where another sequential scan is already in-progress) and
> wrapping around to the beginning to finish. This can affect the order of
> returned rows in a query that does not specify ORDER BY. The
> synchronize_seqscans configuration parameter can be used to disable this
> if necessary
>
> Source:
> http://www.postgresql.org/docs/current/interactive/release-8-3.html

Thank you very much (and thanks to Bruce too). This is the feature, and
I'm checking that it works "out of the box" on PG 8.3.

-- Daniele

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yeb Havinga

Scott Marlowe wrote:

On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang  wrote:
  

I have the exact same table of data in both MySQL and Postgresql. In Postgresql:



Just wondering, are these on the same exact machine?

  
Just reading up on this interesting thread. FWIW, 2 years ago a 
colleague and I did a hardware comparison of early Intel and AMD 
desktop quadcore processors to run a postgres database, with most other 
parts comparable. The Intel processor was 20 to 30% faster in cpu 
operations than the (first generation) Phenom at almost everything, except 
at index creation. The test that the AMD finished in a few minutes, we 
had to stop on the Intel because it simply didn't finish. We double 
checked configuration settings and could not find explainable 
differences. I hesitate to post this information here, because it's hard 
to believe that an actual big difference between the processors exists, 
and it was more likely something in our test setup. Still: the 
difference was *only* in index creation, which is kind of like a qsort, yes?


Regards,
Yeb Havinga





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Tom Lane
Yang Zhang  writes:
> On Mon, Feb 22, 2010 at 1:13 PM, Pavel Stehule  
> wrote:
>> the speed depends on setting of working_memory. Try to increase a 
>> working_memory

> It's already at
>  20000kB

According to your original posting, you're trying to sort something like
a gigabyte of data.  20MB is peanuts.  I wouldn't recommend increasing
the value across-the-board, but setting it to several hundred meg for
this particular query might help.  How much RAM in your machine anyway?
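Raising work_mem for just that one query can be done per-session or, more
safely, per-transaction, rather than across the board in postgresql.conf.
A sketch (the 512MB figure is illustrative; size it to your RAM):

```sql
BEGIN;
SET LOCAL work_mem = '512MB';  -- applies only inside this transaction
SELECT * FROM metarelcloud_transactionlog ORDER BY transactionid;
COMMIT;                        -- work_mem reverts to its previous value
```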

Also, the fact that mysql is faster suggests that having an index does help.
Possibly the data is nearly ordered by transactionid, in which case an
indexscan would not have random-access problems and would be much faster
than an explicit sort.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 12:53 PM, Yang Zhang  wrote:
> On Mon, Feb 22, 2010 at 2:52 PM, Scott Marlowe  
> wrote:
>> On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang  wrote:
>>> I have the exact same table of data in both MySQL and Postgresql. In 
>>> Postgresql:
>>
>> Just wondering, are these on the same exact machine?
>>
>
> Yes, on the same disk.

I'm wondering how much of this could be caching effects.  Is the MySQL
database "warmed up" before you started, and the pgsql database is
"cold" and no caching has taken place?

What do things like vmstat 10 say while the query is running on each
db?  First time, second time, things like that.

Also, just curios, what's shared_buffers set to on the pgsql instance?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SET Role doesn't work from Security Definer Function...

2010-02-22 Thread Tom Lane
dipti shah  writes:
> I have just noticed that "SET ROLE" doesn't work from security definer
> function. I don't know why, but it clearly gives the error that SET ROLE
> doesn't work in a security definer context.

This is intentional because allowing it creates security holes.

> If I create function in postgres user with Security Definer enabled, it will
> allow to create any table with any foreign references etc...So I am setting
> role to current_user in my function and then creating a table to make sure
> that user has the appropriate privilege.

Well, if you are trying to set the role back to current, why don't you
just not have the function be security definer in the first place?

I suppose the answer to that is that you want it to do some things as
superuser and some things not.  In which case, you need to refactor so
that those two classes of things are done by different functions.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 3:44 PM, Tom Lane  wrote:
> Yang Zhang  writes:
>> On Mon, Feb 22, 2010 at 1:13 PM, Pavel Stehule  
>> wrote:
>>> the speed depends on setting of working_memory. Try to increase a 
>>> working_memory
>
>> It's already at
>>  20000kB
>
> According to your original posting, you're trying to sort something like
> a gigabyte of data.  20MB is peanuts.  I wouldn't recommend increasing
> the value across-the-board, but setting it to several hundred meg for
> this particular query might help.  How much RAM in your machine anyway?

We have 16GB of RAM, but again, Unix sort (and even our own
hand-rolled merge-sort) can operate zippily while avoiding consuming
additional memory.
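The external merge-sort idea behind both Unix sort and the hand-rolled
version mentioned here can be sketched in Python (the chunk size and temp-file
handling are illustrative assumptions, not Postgres internals): sort bounded
chunks in memory, spill each sorted run to disk, then k-way merge the runs.

```python
import heapq
import os
import tempfile


def _spill(sorted_chunk):
    """Write one sorted run to a temp file and return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.writelines(sorted_chunk)
    return path


def external_sort(lines, chunk_size=1000):
    """Yield lines in sorted order using bounded memory:
    sort fixed-size chunks, spill them, then merge the runs."""
    runs, chunk = [], []
    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    files = [open(path) for path in runs]
    try:
        # heapq.merge does the k-way merge lazily over the sorted runs.
        for line in heapq.merge(*files):
            yield line
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.remove(path)


if __name__ == "__main__":
    data = ["%d\n" % n for n in (5, 3, 9, 1, 8, 2, 7, 4, 6, 0)]
    result = list(external_sort(data, chunk_size=3))
    print(result == sorted(data))  # prints True
```

The peak memory use is one chunk plus one buffered line per run, which is why
such a sort can stay "zippy" without a large work_mem-style allocation.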

All the same, we increased work_mem to 1GB, and still the query is not
completing.

>
> Also, the fact that mysql is faster suggests that having an index does help.
> Possibly the data is nearly ordered by transactionid, in which case an
> indexscan would not have random-access problems and would be much faster
> than an explicit sort.

Note that earlier in the thread I tried running this query with an
index scan, but it's still much slower.
--
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Tom Lane
Yeb Havinga  writes:
> Just reading up on this interesting thread. WFIW, 2 years ago I and a 
> collegue of mine did a hardware comparison of early Intel and AMD 
> desktop quadcore processors to run postgres database, with most other 
> parts comparable. The intel processor was 20 to 30 % faster in cpu 
> operations to the (first generation) Phenom at almost everything, except 
> at index creation. The test that the AMD finished in a few minutes, we 
> had to stop on the Intel because it simply didn't finish. We double 
> checked configuration settings and could not find explainable 
> differences. I hesitate to post this information here, because its hard 
> to believe that an actual big difference between the processors exists, 
> and it more likely was something in our test setup. Still: the 
> difference was *only* in index creation, which is kindoff like a qsort yes?

Interesting.  Yes, btree index creation is essentially a sort ...

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Igor Neyman
When in doubt - test.
Why not remove the index in MySQL (or create an index in PostgreSQL) and see
what happens?
Why try to compare "apples and oranges"?

Igor Neyman 

> -Original Message-
> From: Yang Zhang [mailto:yanghates...@gmail.com] 
> Sent: Monday, February 22, 2010 1:37 PM
> To: Richard Broersma
> Cc: Frank Heikens; pgsql-general@postgresql.org
> Subject: Re: Sorting performance vs. MySQL
> 
> On Mon, Feb 22, 2010 at 1:30 PM, Richard Broersma 
>  wrote:
> > On Mon, Feb 22, 2010 at 10:17 AM, Frank Heikens 
>  wrote:
> >
> >> There is no index on the column transactionid in your 
> >> PostgreSQL-table, as there is in your MySQL-table. This 
> explains the difference.
> >>
> >> CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog
> >> (transactionid);
> >
> > Does an index help a sort operation in PostgreSQL?
> 
> I also share the same doubt. An external merge-sort needs to 
> make complete passes over the entire dataset, with no 
> index-directed accesses.
> --
> Yang Zhang
> http://www.mit.edu/~y_z/
> 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 5:31 PM, Igor Neyman  wrote:
> When in doubt - test.
> Why not remove index in MySQL (or create index in PostgreSQL) and see
> what happens.
> Why trying compare "apples and oranges"?

Continue reading this thread -- I also tried using an index in Postgresql.
-- 
Yang Zhang
http://www.mit.edu/~y_z/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 3:33 PM, Yang Zhang  wrote:
> On Mon, Feb 22, 2010 at 5:31 PM, Igor Neyman  wrote:
>> When in doubt - test.
>> Why not remove index in MySQL (or create index in PostgreSQL) and see
>> what happens.
>> Why trying compare "apples and oranges"?
>
> Continue reading this thread -- I also tried using an index in Postgresql.

But have you tried cranking up work_mem to say 1G?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 3:33 PM, Yang Zhang  wrote:
> On Mon, Feb 22, 2010 at 5:31 PM, Igor Neyman  wrote:
>> When in doubt - test.
>> Why not remove index in MySQL (or create index in PostgreSQL) and see
>> what happens.
>> Why trying compare "apples and oranges"?
>
> Continue reading this thread -- I also tried using an index in Postgresql.

And oh yeah, what was shared_buffers set to?  I'm not sure we ever got
an answer to that.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tuning bgwriter in 8.4.2

2010-02-22 Thread Ben Chobot
On Feb 17, 2010, at 6:38 PM, Greg Smith wrote:

> Ben Chobot wrote:
>> Is there a way to tell if I really am just keeping the same few pages dirty 
>> throughout every checkpoint? I wouldn't have expected that, but given our 
>> application I suppose it is possible.
> 
> You can install pg_buffercache and look at what's in the cache to check your 
> theory.  I have some sample queries that show neat things at 
> http://www.westnet.com/~gsmith/content/postgresql/bufcache.sh


This appears to be fairly typical:

# select count(*),isdirty,usagecount from pg_buffercache group by 
isdirty,usagecount order by usagecount desc,isdirty;
 count  | isdirty | usagecount 
--------+---------+------------
 670629 | f       |          5
  75766 | t       |          5
 237311 | f       |          4
   5372 | t       |          4
  74682 | f       |          3
     31 | t       |          3
  73786 | f       |          2
     18 | t       |          2
 104112 | f       |          1
     62 | t       |          1
  68951 | f       |          0
(11 rows)

Am I reading it correctly in saying that the bgwriter probably wouldn't help 
much, because a majority of the dirty pages appear to be popular?

Re: [GENERAL] tuning bgwriter in 8.4.2

2010-02-22 Thread Greg Smith

Ben Chobot wrote:
Is it reading it correctly to say that the bgwriter probably wouldn't 
help much, because a majority of the dirty pages appear to be popular?


Yes.  The background writer cleaner process only does something useful 
if there are pages with low usage counts it can evict.  You would need 
to increase shared_buffers significantly before it's likely that would 
happen.  Right now, 87% of your buffer cache has a usage count of 2 or 
higher, which basically means it's filled with almost nothing but the 
working set of data it never wants to evict unless it's for a checkpoint.
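That 87% figure can be computed directly from pg_buffercache; a sketch,
assuming the contrib module is installed (unused buffers have a NULL
usagecount and so count against the "popular" fraction here):

```sql
-- Percentage of shared_buffers holding "popular" pages (usage count >= 2),
-- which the background writer's cleaner will not evict.
SELECT round(100.0 *
             sum(CASE WHEN usagecount >= 2 THEN 1 ELSE 0 END) / count(*), 1)
       AS pct_popular
FROM pg_buffercache;
```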


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Alex Hunsaker
On Mon, Feb 22, 2010 at 11:10, Yang Zhang  wrote:
> I have the exact same table of data in both MySQL and Postgresql. In
> Postgresql:

FWIW on a stock (unchanged postgresql.conf) 8.3.9 I get (best of 3
runs) 79 seconds, 26 seconds using an index, and 27 seconds with it clustered.
Now yes, it goes a lot faster because I'm skipping all the overhead of
sending the data to the client...  But still, that means it has almost
nothing to do with the sort or indexes.  That leaves pretty much your cpu,
disk, filesystem and network...  BTW the first time I did it, it had to write
out the hint bits, so that took (a bit) longer... Is this freshly
loaded data?

---

# create table metarelcould_transactionlog (
id serial primary key,
transactionid integer not null,
queryid smallint not null,
tableid varchar(30) not null,
tupleid integer not null,
querytype varchar not null,
graphpartition smallint,
replicatedpartition smallint,
justifiedpartition smallint,
hashpartition smallint,
modeid integer,
manualpartition smallint
);

# insert into metarelcould_transactionlog (transactionid, queryid,
tableid, tupleid, querytype, graphpartition, replicatedpartition,
justifiedpartition, hashpartition, modeid, manualpartition) select
foo,  1, 'sometable', 1, 's', 1, 1, 1, 1, 1, 1 from generate_series(1,
50000000) as foo;

# select count(1) from (SELECT * from metarelcould_transactionlog
order by transactionid) as foo;
  count
----------
 50000000
(1 row)

Time: 79017.186 ms

# create index idx on metarelcould_transactionlog (transactionid);
# select count(1) from (SELECT * from metarelcould_transactionlog
order by transactionid) as foo;
  count
----------
 50000000
(1 row)

Time: 26230.534 ms

# cluster metarelcould_transactionlog USING  idx;
CLUSTER
Time: 342381.535 ms

# select count(1) from (SELECT * from metarelcould_transactionlog
order by transactionid) as foo;
  count
----------
 50000000
(1 row)

Time: 27704.794 ms

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Anyone interested in a San Diego Postgres Users Group (SD-PUG)?

2010-02-22 Thread Joe Conway
Anyone out there interested in a San Diego PostgreSQL Users Group?

If so, I created a meetup here:
   http://www.meetup.com/SD-PUG/

Please sign up and contact me off list.

Thanks,

Joe



signature.asc
Description: OpenPGP digital signature


[GENERAL] Table Column Retrieval

2010-02-22 Thread dmp


Recently I read that one of the distinctions between a standard database  
and
a columnar one, which led to an increase in its efficiency, was and I  
quote:


"Only relevant columns are retrieved (A row-wise database would pull
all columns and typically discard 80-95% of them)"

Is this true of PostgreSQL? That even though my query does not ask for a
column, it is still pulled from the table row(s)? I know that my client, via
JDBC, does not receive the data for that column in the ResultSet, based on
the packet monitoring I have done on queries.

danap



Re: [GENERAL] Table Column Retrieval

2010-02-22 Thread Joshua Tolley
On Mon, Feb 22, 2010 at 07:23:09PM -0700, dmp wrote:
>
> Recently I read that one of the distinctions between a standard database  
> and
> a columnar one, which led to an increase in its efficiency, was and I  
> quote:
>
> "Only relevant columns are retrieved (A row-wise database would pull
> all columns and typically discard 80-95% of them)"
>
> Is this true of PostgreSQL? That eventhough my query does not call for a
> column it is still pulled from the table row(s). I know that my client via
> the JDBC does not contain the data in the ResultSet for the column, because
> of the packet monitoring I have done on queries.

PostgreSQL doesn't use columnar storage. Data are read from the disk in pages,
and those pages contain not only the columns you're interested in but all the
other columns in the table as well. The parts of the table you're not
interested in aren't returned as part of the query, and thus don't show up in
your result set, but they do get read from disk.

The disadvantage of a columnar system is that when you read multiple columns,
you have to piece together the rows of the table using columns read from
various parts of the disk, effectively identical to doing a bunch of joins.
For some workloads the columnar storage is a win, and for some workloads,
row-based storage is the best bet.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] how do I do dump and restore without bugging with constraint?

2010-02-22 Thread Net Tree Inc.
Hi all,

I am dumping both schema and data from an old database to a new one. The new
database's schema is slightly different from the old one's. When I do the
restore, it shows a lot of errors related to constraints. How can I dump and
restore from old to new without dealing with the constraints, just forcing
the data into the tables where it belongs?


[GENERAL] Re: [ADMIN] how do I do dump and restore without bugging with constraint?

2010-02-22 Thread Kevin Grittner
"Net Tree Inc."  wrote:
 
> When I do restore it shown alot errors related with constraints.
> How can I dump and to restore from old to new without dealing with
> constraint and just forces data dump to where it suppose to belong?
 
What version are you dumping from and to?  Exactly what errors are
you getting?  (Copy and paste samples, please.)
 
You might want to review this page:
 
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
 
-Kevin





Re: [GENERAL] how do I do dump and restore without bugging with constraint?

2010-02-22 Thread John R Pierce

Net Tree Inc. wrote:

Hi all,

I am dumping both schema and data from old database to new one. The 
new database schema is somehow contain slightly different schema then 
the old one. When I do restore it shown alot errors related with 
constraints. How can I dump and to restore from old to new without 
dealing with constraint and just forces data dump to where it suppose 
to belong? 

if you're dumping the schema and the data, you should be restoring into 
an empty database, not into an existing database.






Re: [GENERAL] COPY command character set

2010-02-22 Thread Bruce Momjian

I have updated the documentation to be more direct about COPY encoding
behavior.  Patch attached and applied.

---

Peter Headland wrote:
> > Maybe the link might help?
> > 
> > http://www.postgresql.org/docs/8.4/interactive/multibyte.html
> 
> That page is too generic; what would be helpful is a section in the doc for 
> each command that is affected by I18N/L10N considerations, that identifies 
> how that specific command behaves.
> 
> Now that I have grasped the behavior, I'm more than happy to edit the COPY 
> doc page, if people think that would be helpful/worthwhile.
> 
> -- 
> Peter Headland
> Architect
> Actuate Corporation
> 
> 
> -Original Message-
> From: Adrian Klaver [mailto:akla...@comcast.net] 
> Sent: Thursday, September 10, 2009 11:06
> To: Peter Headland
> Cc: pgsql-general@postgresql.org; Tom Lane
> Subject: Re: [GENERAL] COPY command character set
> 
> 
> - "Peter Headland"  wrote:
> 
> > > The COPY command reference page saith
> > >
> > >Input data is interpreted according to the current client
> > encoding,
> > >and output data is encoded in the the current client encoding,
> > even
> > >if the data does not pass through the client but is read from or
> > >written to a file.
> > 
> > Rats - I read the manual page twice and that didn't register on my
> > feeble consciousness. I suspect that I didn't look beyond the word
> > "client", since I knew I wasn't interested in client behavior and I
> > was
> > speed-reading. On the assumption that I am not uniquely stupid, maybe
> > we
> > could re-phrase this slightly, with a "for example", and add a
> > heading
> > "Localization"?
> > 
> > As a general comment, I18N/L10N is a hairy enough topic that it
> > merits
> > its own heading in any commands where it is an issue.
> > 
> > How about my suggestion to add a means (extend COPY syntax) to
> > specify
> > encoding explicitly and handle UTF lead bytes - would that be of
> > interest?
> > 
> > -- 
> > Peter Headland
> > Architect
> > Actuate Corporation
> > 
> 
> > 
> > The COPY command reference page saith
> > 
> > Input data is interpreted according to the current client
> > encoding,
> > and output data is encoded in the the current client encoding,
> > even
> > if the data does not pass through the client but is read from or
> > written to a file. 
> > 
> > Seems clear enough to me.
> > 
> > regards, tom lane
> 
> Maybe the link might help?
> 
> http://www.postgresql.org/docs/8.4/interactive/multibyte.html
> 
> 
> Adrian Klaver
> akla...@comcast.net
> 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/ref/copy.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/ref/copy.sgml,v
retrieving revision 1.93
diff -c -c -r1.93 copy.sgml
*** doc/src/sgml/ref/copy.sgml	17 Feb 2010 04:19:39 -	1.93
--- doc/src/sgml/ref/copy.sgml	23 Feb 2010 05:15:00 -
***************
*** 367,376 ****
 
  
 
! Input data is interpreted according to the current client encoding,
! and output data is encoded in the the current client encoding, even
! if the data does not pass through the client but is read from or
! written to a file.
 
  
 
--- 367,376 ----
 
  
 
! COPY always processes data according to the
! current client encoding, even if the data does not pass through
! the client but is read from or written to a file directly by the
! server.
 
  
 



Re: [GENERAL] comment on constraint

2010-02-22 Thread Bruce Momjian
Andreas Kretschmer wrote:
> Hi,
> 
> There is a question in the german pg-forum:
> 
> It is possible to add a comment on a constraint, but \dd doesn't display
> that comment. There is also a old question in this mailing-list without
> an answer:
> http://archives.postgresql.org/pgsql-general/2003-07/msg01448.php
> 
> I think, this is a bug, isn't it?

Yep, added to TODO:

Allow \dd to show constraint comments

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 3:45 PM, Scott Marlowe
 wrote:
> On Mon, Feb 22, 2010 at 12:53 PM, Yang Zhang  wrote:
>> On Mon, Feb 22, 2010 at 2:52 PM, Scott Marlowe  
>> wrote:
>>> On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang  wrote:
 I have the exact same table of data in both MySQL and Postgresql. In 
 Postgresql:
>>>
>>> Just wondering, are these on the same exact machine?
>>>
>>
>> Yes, on the same disk.
>
> I'm wondering how much of this could be caching effects.  Is the MySQL
> database "warmed up" before you started, and the pgsql database is
> "cold" and no caching has taken place?
>
> What do things like vmstat 10 say while the query is running on each
> db?  First time, second time, things like that.

Awesome -- this actually led me to discover the problem.

When running the query in MySQL InnoDB:

$ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0 13 13733604  83020   5648 219388433   936   16821  4  2 89  5  0
 1 12 13749952  80164   5600 21780320 4354   908  4379 3586 2638  0  1 38 60  0
 0 19 13762228  80576   5556 2145220  208 3527  1280  3690 3668 2635  1  1 39 59  0
 0 19 13778632  79420   5560 2135228   52 4186  1046  4191 3682 2418  0  1 37 62  0
 0 19 13792964  77336   5592 2082520   41 3731  1698  3804 4102 2686  1  1 53 45  0
 0 14 13810356  84036   5556 2049836   36 4241   797  4246 3913 2603  0  1 68 31  0
 1 14 13825640  81336   5520 20019200 4212   958  4220 3848 2736  1  1 73 25  0
 0 17 13844952  78036   5476 19769568 4685   923  4689 3832 2547  0  1 69 29  0
 2 13 13863828  79812   5448 19549523 4627   692  4634 3744 2505  0  1 70 28  0
 0 15 13883828  77764   5440 1920528  249 4544   972  4548 4345 2506  0  1 70 28  0
 1 20 13898900  79132   5456 1890192   28 4341   723  4438 4982 3030  0  3 64 33  0
 0 11 13915252  85184   5624 1865260   79 3668   752  3764 4472 2765  0  3 57 40  0
 0 12 13933964  78448   5700 1832640  120 4327  1066  4434 4484 2777  1  3 52 45  0
 0 19 13951748  77640   5816 1795720   94 4005  1159  4091 4580 2762  1  3 48 49  0
 0 16 13972748  79884   5780 17536760 4737   787  4746 4385 2766  1  3 51 45  0
 0 25 13988108  78936   5884 1726068  547 3954  1468  4116 4976 3502  0  4 44 52  0
 1 20 14011500  77676   5868 1689136  161 4980   843  5506 5218 3131  0  3 34 62  0
 0 22 14032472  81348   5816 1647884  270 4198   943  4369 4521 2826  1  3 40 56  0
 0 23 14055220  81712   5804 1626872  193 4774  1408  4856 4617 2754  1  3 38 58  0
 0 21 14075848  81844   5696 15768360 4738   974  4742 4528 2704  1  3 40 56  0
 0 25 14097260  79788   5628 1536336  213 4512   922  4639 4726 2929  1  3 27 69  0
 0 24 14123900  80820   5616 1488460  319 5033  1059  5128 4895 2780  2  3 17 78  0
 1 26 14142632  77276   5660 1445592  445 4605  1434  4727 5401 3364  1  3 16 79  0
 0 31 14165668  83736   5976 1387048  362 4288  1444  4428 4739 2963  2  3 17 78  0
 1 28 14180104  77564   6324 1369232  387 4526    4677 5748 3559  1  3 16 80  0

I'm guessing the swap numbers are because MySQL uses mmap?

Anyway, when running the query in Postgresql:

$ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  2 13866336 1574540  25024 787898033   936   16821  4  2 89  5  0
 1  3 13861520 1163596  25328 8128360 10460 25429   433 4368 4175  4  2 80 14  0
 0  3 13856484 803024  25600 8343220 11170 22062   688 4492 4590  4  2 73 20  0
 0  6 13855304 678868  26052 8435540  1600  9239   598 5195 7141  1  5 70 24  0
 0  6 13853644 513568  26332 8563680  4010 12480  7100 4775 4248  3  3 68 26  0
 2  2 13851804 166228  26624 8775304  6340 21466  1497 4680 4550  6  2 64 28  0
 0  5 13861556  81896  26740 8825360  860 3547  6100  3847 5142 3386  6  2 57 35  0
 0  6 13867688  91368  26808 8832712  653 3326  1835  3604 4738 2762  5  2 61 32  0
 0  5 13870676  88524  26872 8849392  638 3272  2578  3517 4864 2909  4  2 55 39  0
 0  5 13872748  79512  27004 8864456  629 1788  2086  2949 4337 2921  1  3 55 41  0
 0  7 13876760  83124  27136 8867272 1018 2253  1713  2409 4321 2889  0  3 63 33  0
 0  6 13878964  82876  27240 8874540  792 2119  1854  2314 4288 2813  2  2 72 24  0
 3  4 13883204  81224  27280 8887068  661 3067  2995  3385 4558 2899  4  2 72 22  0
 0  6 13886636  82036  27352 8905628  594 3726  2628  4013 4744 2765  4  2 69 25  0
 0  8 13899852  85604  27400 8925800  638 4423  2689  4658 4903 2808  4  2 55 40  0
 1  4 13905184  80544  27484 8940040  676 3501  3006  3799 4805 2932  4  2 66 28  0
 0  9 13908480  80100  27516 8948476  668 2996  1720  3192 4594 2799  4  2 60 35  0

vmstat showed no swapping-out for a while, and then suddenly it
started spilling a lot. Checking psql's memory stats showed that it
was huge -- apparently, it's trying to store its full result set in
memory. As soon as I added a LIMIT 1, everything worked
beautifully and finished in 4m (I verified that the planner was still
issuing a Sort).

I'm relieved that Postgresql itself does not, in fact, suck, but
slightly disappointed in the behavior of psql. I suppose it needs to
buffer everything in memory to properly format its tabular output,
among other possible reasons I could imagine.

Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Mon, Feb 22, 2010 at 9:30 PM, Alex Hunsaker  wrote:
> On Mon, Feb 22, 2010 at 11:10, Yang Zhang  wrote:
>> I have the exact same table of data in both MySQL and Postgresql. In
>> Postgresql:
>
> FWIW on a stock (unchanged postgresql.conf) 8.3.9 I get (best of 3
> runs) 79 seconds, 26 using an index and 27 seconds with it clustered.
> Now yes it goes a lot faster because im skipping all the overhead of
> sending the data to the client...

Last sentence also contributed to my realizing the problem (the client
I was using was psql), but there's one oddity:

> # select count(1) from (SELECT * from metarelcould_transactionlog
> order by transactionid) as foo;

Does it strike anyone else that the query optimizer/rewriter should be
able to toss out the sort from such a query altogether?
--
Yang Zhang
http://www.mit.edu/~y_z/



Re: [GENERAL] COPY command character set

2010-02-22 Thread Tom Lane
Bruce Momjian  writes:
> I have updated the documentation to be more direct about COPY encoding
> behavior.  Patch attached and applied.

Uh, why exactly do you find that better?  "Processes data" seems a lot
vaguer to me than the previous wording.  I certainly don't think that
this does much to address Peter's original complaint.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY command character set

2010-02-22 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian  writes:
> > I have updated the documentation to be more direct about COPY encoding
> > behavior.  Patch attached and applied.
> 
> Uh, why exactly do you find that better?  "Processes data" seems a lot
> vaguer to me than the previous wording.  I certainly don't think that
> this does much to address Peter's original complaint.

I thought the problem was that we said "input", then "output", and only then
got to the point about the server; I suspected the reader just stopped
reading before getting that far, so I tried to shorten it so the idea came
sooner, and I mentioned "server" at the end.  It might not be better, but I
tried.

We don't want to highlight the input/output, we want to highlight that
all input and output are controlled by the client encoding.
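To make that concrete, here is a minimal sketch of the behavior under discussion; the table and file names are made up, while SET client_encoding and the COPY forms are standard:

```sql
-- COPY always converts to/from the *client* encoding, even when the
-- server reads or writes the file directly, so set it explicitly:
SET client_encoding = 'UTF8';
COPY sometable FROM '/tmp/data_utf8.txt';    -- file is interpreted as UTF8

SET client_encoding = 'LATIN1';
COPY sometable TO '/tmp/data_latin1.txt';    -- file is written as LATIN1
```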

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Tom Lane
Yang Zhang  writes:
>> # select count(1) from (SELECT * from metarelcould_transactionlog
>> order by transactionid) as foo;

> Does it strike anyone else that the query optimizer/rewriter should be
> able to toss out the sort from such a query altogether?

It could, if it knew that the aggregate function didn't care about input
row order.  We don't have that knowledge about aggregates ATM.

regards, tom lane
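For readers following along, the hypothetical rewrite would turn the first query below into the second; as Tom notes, the planner currently performs the sort anyway because it cannot assume the aggregate is order-insensitive:

```sql
-- The ORDER BY inside the subquery cannot change count(1)'s result,
-- so these two queries are equivalent:
SELECT count(1) FROM (SELECT * FROM metarelcould_transactionlog
                      ORDER BY transactionid) AS foo;

SELECT count(1) FROM metarelcould_transactionlog;  -- no sort needed
```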



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Tom Lane
Yang Zhang  writes:
> I'm relieved that Postgresql itself does not, in fact, suck, but
> slightly disappointed in the behavior of psql. I suppose it needs to
> buffer everything in memory to properly format its tabular output,
> among other possible reasons I could imagine.

That's half of it, and the other half is not wanting to present a
portion of query output if the query fails partway through.  You could
certainly write a client that disregarded these issues (as I suppose
mysql must be doing).

regards, tom lane



Re: [GENERAL] how do I do dump and restore without bugging with constraint?

2010-02-22 Thread Net Tree Inc.
Ok, I am just trying to find the proper way to back up and restore a database
that contains constraints.

To ask it another way: suppose I have two databases whose schemas are 99%
similar. The old one might contain a few columns that do not exist in the new
one, and constraints that the new DB does not have. In that case, what is the
standard way to do this?

Since they are 99% similar in schema, can I do a "data dump" only? Will that
work?

If the owners of the old and new databases are different, must I add that
owner by creating a new role?

appreciated,

Steven


On Tue, Feb 23, 2010 at 1:09 PM, John R Pierce  wrote:

> Net Tree Inc. wrote:
>
>> Hi all,
>>
>> I am dumping both schema and data from old database to new one. The new
>> database schema is somehow contain slightly different schema then the old
>> one. When I do restore it shown alot errors related with constraints. How
>> can I dump and to restore from old to new without dealing with constraint
>> and just forces data dump to where it suppose to belong?
>>
>>
>>
>>
>
> if you're dumping the schema and the data, you should be restoring into an
> empty database, not into an existing database.
>
>
>


-- 
---
Steven Huang


Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 10:51 PM, Yang Zhang  wrote:
> nOn Mon, Feb 22, 2010 at 3:45 PM, Scott Marlowe
>  wrote:
>>
>> What do things like vmstat 10 say while the query is running on each
>> db?  First time, second time, things like that.
>
> Awesome -- this actually led me to discover the problem.
>
> vmstat showed no swapping-out for a while, and then suddenly it
> started spilling a lot. Checking psql's memory stats showed that it
> was huge -- apparently, it's trying to store its full result set in
> memory. As soon as I added a LIMIT 1, everything worked
> beautifully and finished in 4m (I verified that the planner was still
> issuing a Sort).
>
> I'm relieved that Postgresql itself does not, in fact, suck, but
> slightly disappointed in the behavior of psql. I suppose it needs to
> buffer everything in memory to properly format its tabular output,
> among other possible reasons I could imagine.

It's best when working with big sets to do so with a cursor and fetch
a few thousand rows at a time.  It's how we handle really big sets at
work and it works like a charm in keeping the client from bogging down
with a huge memory footprint.
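A sketch of that pattern in plain SQL (the cursor name and batch size are arbitrary; the table is the one from this thread):

```sql
-- Stream the sorted result in batches instead of letting the client
-- buffer all of it; cursors must live inside a transaction.
BEGIN;
DECLARE big_scan CURSOR FOR
    SELECT * FROM metarelcould_transactionlog ORDER BY transactionid;
FETCH FORWARD 5000 FROM big_scan;  -- repeat until no rows come back
CLOSE big_scan;
COMMIT;
```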



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Alex Hunsaker
On Mon, Feb 22, 2010 at 22:51, Yang Zhang  wrote:
> vmstat showed no swapping-out for a while, and then suddenly it
> started spilling a lot. Checking psql's memory stats showed that it
> was huge -- apparently, it's trying to store its full result set in
> memory. As soon as I added a LIMIT 1, everything worked
> beautifully and finished in 4m (I verified that the planner was still
> issuing a Sort).

Well, I'm half surprised no one has recommended using a cursor.  Have you
looked into that?  I bet that would fix most of your problems here.
>
> I'm relieved that Postgresql itself does not, in fact, suck, but
> slightly disappointed in the behavior of psql. I suppose it needs to
> buffer everything in memory to properly format its tabular output,
> among other possible reasons I could imagine.

Well, AFAIK it will dump everything you asked for.  So if you said
select * from a 1G table, it should take at least 1G, and potentially
quite a bit more with formatting and overhead.



Re: [GENERAL] Sorting performance vs. MySQL

2010-02-22 Thread Yang Zhang
On Tue, Feb 23, 2010 at 1:48 AM, Scott Marlowe  wrote:
> On Mon, Feb 22, 2010 at 10:51 PM, Yang Zhang  wrote:
>> nOn Mon, Feb 22, 2010 at 3:45 PM, Scott Marlowe
>>  wrote:
>>>
>>> What do things like vmstat 10 say while the query is running on each
>>> db?  First time, second time, things like that.
>>
>> Awesome -- this actually led me to discover the problem.
>>
>> vmstat showed no swapping-out for a while, and then suddenly it
>> started spilling a lot. Checking psql's memory stats showed that it
>> was huge -- apparently, it's trying to store its full result set in
>> memory. As soon as I added a LIMIT 1, everything worked
>> beautifully and finished in 4m (I verified that the planner was still
>> issuing a Sort).
>>
>> I'm relieved that Postgresql itself does not, in fact, suck, but
>> slightly disappointed in the behavior of psql. I suppose it needs to
>> buffer everything in memory to properly format its tabular output,
>> among other possible reasons I could imagine.
>
> It's best when working with big sets to do so with a cursor and fetch
> a few thousand rows at a time.  It's how we handle really big sets at
> work and it works like a charm in keeping the client from bogging down
> with a huge memory footprint.
>

Thing is, this is how I got here:

- ran a complex query that does SELECT INTO.
- that never terminated, so I killed it and tried a simpler SELECT (the
subject of this thread) from psql to see how long that would take.

I.e., my original application doesn't receive the entire dataset.
-- 
Yang Zhang
http://www.mit.edu/~y_z/



[GENERAL] How to Output PSQL Errors to a Log File?

2010-02-22 Thread Wang, Mary Y
Hi,

I used 'pg_dumpall' to dump the database in Postgres 7.1.3 and I'm in the
process of restoring the databases in Postgres 8.3.8.  There were lots of
errors shown on the screen when I ran the command 'psql -e template1 -f
21.bak', and it actually displayed the line numbers where the errors
occurred.  Because there were so many errors, I wanted them logged to a
file, so I ran 'psql -e template1 -f 21.bak > late22error.txt', but
late22error.txt didn't log the errors, it just logged the transactions.
What option should I use to tell psql to log all errors to a log file?

I'm running on RHEL 3.9.

Thanks
Mary






Re: [GENERAL] How to Output PSQL Errors to a Log File?

2010-02-22 Thread Joshua D. Drake
On Mon, 22 Feb 2010 23:19:34 -0800, "Wang, Mary Y"

wrote:
> Hi,
> 
> I used 'pg_dumpall' to dump the database in Postgres 7.1.3 and I'm in
the
> process of restoring the databases in Postgres 8.3.8.  There were lots
> errors showed on the screen when I ran this command 'psql -e template1
-f
> 21.bak' and it actually displayed the line numbers where the errors
> occurred.  Because there were so many errors, and I wanted the errors to
be
> logged to a log file so I ran this command 'psql -e template1 -f 21.bak
>
> late22error.txt', but late22error.txt didn't log the errors, it just
logged
> the transactions.  What option should I use to tell psql to log all
errors
> to a log file?
> 
> I'm running on RHEL 3.9.

Wow that takes us back aways. Did you dump the 7.1 with the 8.3 version of
pg_dump? That should help.

> 
> Thanks
> Mary

-- 
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997



Re: [GENERAL] How to Output PSQL Errors to a Log File?

2010-02-22 Thread John R Pierce

Wang, Mary Y wrote:

Hi,

I used 'pg_dumpall' to dump the database in Postgres 7.1.3 and I'm in the process 
of restoring the databases in Postgres 8.3.8.  There were lots errors showed on 
the screen when I ran this command 'psql -e template1 -f 21.bak' and it actually 
displayed the line numbers where the errors occurred.  Because there were so many 
errors, and I wanted the errors to be logged to a log file so I ran this command 
'psql -e template1 -f 21.bak > late22error.txt', but late22error.txt didn't log 
the errors, it just logged the transactions.  What option should I use to tell 
psql to log all errors to a log file?

I'm running on RHEL 3.9.


  
you need to redirect stderr, see 
http://www.cyberciti.biz/faq/redirecting-stderr-to-stdout/  for a 
variety of options
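A minimal demonstration of the redirections involved, using a stand-in command in place of the real psql invocation:

```shell
# Stand-in for 'psql -e template1 -f 21.bak': writes one line to each stream.
emit() { echo "normal output"; echo "some error" 1>&2; }

emit >  out.txt           # '>' captures stdout only; errors still reach the terminal
emit 2> err.txt           # '2>' captures stderr only
emit >  both.txt 2>&1     # capture both streams in a single file

cat err.txt               # -> some error
```

So 'psql -e template1 -f 21.bak > late22error.txt 2>&1' would capture the errors along with the echoed statements.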



Re: [GENERAL] How to Output PSQL Errors to a Log File?

2010-02-22 Thread Nilesh Govindarajan

On 02/23/2010 12:54 PM, Nilesh Govindarajan wrote:

On 02/23/2010 12:49 PM, Wang, Mary Y wrote:

Hi,

I used 'pg_dumpall' to dump the database in Postgres 7.1.3 and I'm in
the process of restoring the databases in Postgres 8.3.8. There were
lots errors showed on the screen when I ran this command 'psql -e
template1 -f 21.bak' and it actually displayed the line numbers where
the errors occurred. Because there were so many errors, and I wanted
the errors to be logged to a log file so I ran this command 'psql -e
template1 -f 21.bak> late22error.txt', but late22error.txt didn't log
the errors, it just logged the transactions. What option should I use
to tell psql to log all errors to a log file?

I'm running on RHEL 3.9.

Thanks
Mary






psql <...your options...> 2> errorfile.txt

This is the general shell error redirection. This will work if psql
outputs errors to stderr and not stdout. (Try it out ;))




--
Nilesh Govindarajan
Site & Server Adminstrator
www.itech7.com



Re: [GENERAL] how do I do dump and restore without bugging with constraint?

2010-02-22 Thread Scott Marlowe
On Mon, Feb 22, 2010 at 11:39 PM, Net Tree Inc.  wrote:
> Ok, I am just trying to find the proper way to back and restore database
> that contain restriction.
> The other way to ask. If I have two 99% schema similar databases. The old
> one might contain few columns that does not exist in the new one and contain
> constrains that the new DB does not have. In that case, what's standard to
> do this?
> Since they are 99% similar in schema, can I do "data dump" only ? will that
> work?
> If owner for database between Old and new DB are different, do I must add
> that owner in by create new role?
> appreciated,

What I would do is to dump and restore the db as a whole which will
take care of constraint issues.  Then I would alter the tables to
match the new schema.



Re: [GENERAL] How to Output PSQL Errors to a Log File?

2010-02-22 Thread Wang, Mary Y
No, I dumped the 7.1 database with the 7.1 version of pg_dump.
Is it better to dump with the 8.3 version of pg_dump?  I thought I read that
somewhere in the mailing lists.  I don't know how I would be able to dump
with the 8.3 version of pg_dump: the database and Postgres 7.1 are on the
current production box, and Postgres 8.3.8 is on a dev server that I plan to
migrate the database to.

Please advise.
Mary 

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Joshua D. Drake
Sent: Monday, February 22, 2010 11:25 PM
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to Output PSQL Errors to a Log File?

On Mon, 22 Feb 2010 23:19:34 -0800, "Wang, Mary Y"

wrote:
> Hi,
> 
> I used 'pg_dumpall' to dump the database in Postgres 7.1.3 and I'm in
the
> process of restoring the databases in Postgres 8.3.8.  There were lots 
> errors showed on the screen when I ran this command 'psql -e template1
-f
> 21.bak' and it actually displayed the line numbers where the errors 
> occurred.  Because there were so many errors, and I wanted the errors 
> to
be
> logged to a log file so I ran this command 'psql -e template1 -f 
> 21.bak
>
> late22error.txt', but late22error.txt didn't log the errors, it just
logged
> the transactions.  What option should I use to tell psql to log all
errors
> to a log file?
> 
> I'm running on RHEL 3.9.

Wow that takes us back aways. Did you dump the 7.1 with the 8.3 version of 
pg_dump? That should help.

> 
> Thanks
> Mary

--
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997



