[GENERAL] book on advanced postgresql?

2003-10-03 Thread John Wells
Are there any books out there that are fairly current, but go into the
more advanced aspects of Postgresql, such as internals and advanced
programming?  I'd rather avoid the whole intro to SQL with a few extra
chapters on Postgresql if I could help it.

Thanks,

John



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] LC_COLLATE=C not working

2003-10-03 Thread Robert Wille
> Robert Wille writes:
> 
> > I have tried to do the same on my production server, and when I do the
> > initdb, it says that LC_COLLATE is C, but it does not sort the same as
> > the test server. Namely, on the test server 'z' < '~' and on the
> > production server 'z' > '~'.
> 
> You probably still have LC_ALL set to something else.  LC_ALL overrides
> LC_COLLATE and friends, which in turn override LANG.
> 
> -- 
> Peter Eisentraut   [EMAIL PROTECTED]
> 
Nope. Any other ideas?

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread Tom Lane
"John Wells" <[EMAIL PROTECTED]> writes:
> Are there any books out there that are fairly current, but go into the
> more advanced aspects of Postgresql, such as internals and advanced
> programming?  I'd rather avoid the whole intro to SQL with a few extra
> chapters on Postgresql if I could help it.

I'm not aware of any that are tilted towards advanced stuff.  Erm, have
you tried just reading the documentation :-) ?  I realize it's not
always easy to find what you need, but we could use suggestions about
how to make it better ...
http://developer.postgresql.org/docs/postgres/index.html

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread John Wells
Tom,

I'd love to, but I can't read at a terminal for longer than an hour or so.
 I really prefer the dead tree version.

Does anyone provide printed copies of the documentation for a fee?  I know
some companies do this for other apps...

Thanks,

John

Tom Lane said:
> "John Wells" <[EMAIL PROTECTED]> writes:
>> Are there any books out there that are fairly current, but go into the
>> more advanced aspects of Postgresql, such as internals and advanced
>> programming?  I'd rather avoid the whole intro to SQL with a few extra
>> chapters on Postgresql if I could help it.
>
> I'm not aware of any that are tilted towards advanced stuff.  Erm, have
> you tried just reading the documentation :-) ?  I realize it's not
> always easy to find what you need, but we could use suggestions about
> how to make it better ...
> http://developer.postgresql.org/docs/postgres/index.html
>
>   regards, tom lane
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
>http://archives.postgresql.org
>


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread Shridhar Daithankar
John Wells wrote:
> Tom,
>
> I'd love to, but I can't read at a terminal for longer than an hour or so.
> I really prefer the dead tree version.
You can get the tarball of HTML files out of a build and view them in lynx or
something like that if you prefer. They are very simple HTML pages.

I agree that when your eyes are tired of looking at a CRT, a printed copy works
much better.

 Shridhar

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread Bruno Wolff III
On Fri, Oct 03, 2003 at 10:08:43 -0400,
  John Wells <[EMAIL PROTECTED]> wrote:
> Tom,
> 
> I'd love to, but I can't read at a terminal for longer than an hour or so.
>  I really prefer the dead tree version.
> 
> Does anyone provide printed copies of the documentation for a fee?  I know
> some companies do this for other apps...

You should be able to print the documentation on a printer. The released
versions come with postscript and/or pdf versions of the documentation.
The beta versions don't, and it is a pain to generate them yourself. However,
if you just want 7.3 documentation, that should be available.
I think the 7.4 documentation is better, because it is all in one book
and some redundant information is eliminated. However, I don't know of
any postscript or pdf versions of the documentation that are available for
download.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread Vivek Khera
> "SD" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:

SD> You can try creating index/triggers first and load the data. At the
SD> end it will take a while before you get a usable database with either
SD> approach but see what works faster for you.

The triggers and FK's don't do much at the time they are created.
They work upon update/insert/delete of data.

SD> footprint. I dropped the table and recreated it. Also created index
SD> before loading data. The loading was slow with this approach but it
SD> finished in 3 hours. And I had an updated index as well. Just had to
SD> run vacuum over it.

I cannot believe that this was faster than load data followed by
create index.  Perhaps you needed to bump sort_mem so the index could
be created more efficiently.  I also find that bumping up
checkpoint_segments to a high number speeds things up considerably.
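
For reference, both knobs mentioned above live in postgresql.conf; the values
below are illustrative only, not recommendations:

sort_mem = 65536              # in KB; lets CREATE INDEX sort more in memory
checkpoint_segments = 30      # fewer forced checkpoints during a bulk load

sort_mem can also be raised per-session (SET sort_mem = 65536;) just before
the CREATE INDEX, which avoids touching the config file at all.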

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread Shridhar Daithankar
Vivek Khera wrote:
> SD> footprint. I dropped the table and recreated it. Also created index
> SD> before loading data. The loading was slow with this approach but it
> SD> finished in 3 hours. And I had an updated index as well. Just had to
> SD> run vacuum over it.
>
> I cannot believe that this was faster than load data followed by
> create index.  Perhaps you needed to bump sort_mem so the index could
> be created more efficiently.  I also find that bumping up
> checkpoint_segments to a high number speeds things up considerably.
Well, in my case speed wasn't the issue. I put $PGDATA on a 12-13GB partition
and loaded a 3GB table. When I went to create the index, it ran through the
rest of the free space, which was close to 9GB. Actually, I killed it: when it
started there was 9GB free, and when I killed it only 150MB was left.

Oracle had the same problem. With tablespaces set to auto-extend, it ate a
huge amount of space.

I posted this earlier and Tom remarked on it at the time, saying that it
should be the same one way or the other.

Anyway, the project abandoned databases altogether and went to in-memory
structures.. :-)

 Shridhar

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Type of application that use PostgreSQL

2003-10-03 Thread Tom Lane
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> Peter Childs wrote:
>>> Postgresql do have update logs in form of WAL.

>> No it does not. WAL is Down-Date Logs not update logs. WAL will 
>> enable you to rewind to the beginning of all currently running 
>> transactions after a crash. Ie roll-back not roll-forward.

> Right.

Wrong.  Peter, don't state something so authoritatively when you
obviously haven't looked at the code.  Postgres does not do roll-back,
ever.  (We don't need it because of MVCC.)  We use WAL for roll
*forward* from the last checkpoint after a crash.  Any updates that
didn't make it to disk before the crash are restored from WAL.

All that we basically need for PITR is to provide management code that
lets old WAL segments get archived off to tape (or wherever) rather than
deleted, plus some kind of control that lets the roll-forward process be
stopped at the desired point-in-time rather than necessarily running to
the end of the available WAL data.  This isn't a trivial amount of code,
but there's no great conceptual difficulty either.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Discussion about inheritance

2003-10-03 Thread Tom Lane
"Marcelo Soares" <[EMAIL PROTECTED]> writes:
> To me, this looks not like a bug, because the tuple is not physically at A.

A lot of people call it a bug because it makes it impossible for them to
use inheritance the way they want to.

> Like I said, this is not a problem for me, but I want to know if someone
> thinks this can be considered a real problem, or if something will be done
> to try to correct (or modify) this operation.

It is likely that it will get changed at some point in the future,
because the consensus seems to be that the other behavior would be more
useful.  However, fixing it seems to require building indexes across
multiple tables --- or else a drastic change in the storage
representation of inherited tables --- so it won't happen soon.
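
For readers unfamiliar with the behavior under discussion, a minimal
illustration (table names are made up):

create table a (id integer primary key);
create table b () inherits (a);
insert into a values (1);
insert into b values (1);   -- accepted: the unique index covers only a itself
select id from a;           -- returns id = 1 twice, despite the primary key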

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] [ODBC] FUD!! ODBC will not be supported by Microsoft in the future

2003-10-03 Thread Amanjit Gill
Hi,
this might actually be off-topic, but it's always time to fight some FUD :-)

> Microsoft will be doing away with the OLEDB to ODBC bridge in the near
> future.

Funnily enough, if I use __Microsoft__ SQL Query Analyzer, which directly
connects to SQL Server, and somehow shut SQL Server down while a query is
running, I get the following message (translated here from the German):
_
Server: Msg no. 17, Severity 16, State 1, Line 0
[Microsoft][ODBC SQL Server Driver][Shared Memory]SQL Server does not exist
or access was denied.
Server: Msg no. 2, Severity 16, State 1, Line 0
[Microsoft][ODBC SQL Server Driver][Shared Memory]ConnectionOpen
(Connect()).
_

Which means Microsoft itself uses ODBC, and NOT OLE DB, for its internal DB
access in SQL Query Analyzer.

Now, if they use ODBC (and NOT OLE DB) for their own native DB access, it
must be perfect for me.

Nice try anyway.




---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Resources for Progress conversion...

2003-10-03 Thread John Wells
Anyone on the list have experience with this?  I (potentially) could be
hired today to help in an effort to convert from Progress to Postgresql,
and I'm looking for any resources/insights/urls/books/etc that might aid
towards this end.

Anything you can forward will be greatly appreciated (and paid for in beer
if you're in the Atlanta area ;-)).

Thanks!
John

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Tsearch2 Causing Backend Crash

2003-10-03 Thread psql-mail
After applying the patches supplied so far and also trying the latest
stable tar.gz for tsearch2 (downloaded 24th of September),

I am still experiencing the same issue as previously described:
I try to do a 

SELECT to_tsvector( 'default', 'some text' )

The backend crashes.

SELECT to_tsvector( 'default',  )
does not crash

Any more advice or suggestions?

Thanks!

-- 

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Migrating Access to Postgres

2003-10-03 Thread Jeff Eckermann

--- Fabrizio Mazzoni <[EMAIL PROTECTED]> wrote:
> Hello .. i did the migration and my advice is that
> you forget about the automatic translation tools... 

I have used the "migration wizard" that comes as a
plugin for PgAdminII, and was very happy with the
result.  Yes, you can do everything by hand, but why?

> 
> I took the access database and recreated all of the
> tables and relationships between them manually in
> postgres. I also added all the necessary indexes at
> design time  ..
> Basically you have to open each table in access in
> design mode and recreate it in postgres (maybe first
> write it in a text file) with sql commands ...eg:
> 
> create table foo (a serial primary key, b varchar)
> ..
> 
> After that you have to migrate the queries and
> recreate them as views in PG...
> 
> Then i dumped all the access tables to csv files and
> reimported them in pg with the copy command. Another
> solution to export the data from access to PG would
> be to link all the pg tables in access and execute
> an insert query from access This is a faster
> solution but sometimes access can run out of memory
> or you can get  differences and errors in the
> datatypes which are very annoying...
> 
> I kept access only as a frontend and believe me this
> solution gave us a huge boost in production in our
> company ...
> 
> Best Regards,
> 
> Fabrizio Mazzoni
> 
> On Wed, 1 Oct 2003 09:23:44 -0600 (CST)
> "Bernardo Robelo" <[EMAIL PROTECTED]> wrote:
> 
> > Hi,
> > I am interested in migrating Microsoft Access
> database to Postgres
> > database. But I do not have idea of like
> initiating.  Maybe some tool
> > exists for this problem.
> > Thanks you.
> > Bernardo
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > ---(end of
> broadcast)---
> > TIP 1: subscribe and unsubscribe commands go to
> [EMAIL PROTECTED]
> 
> ---(end of
> broadcast)---
> TIP 7: don't forget to increase your free space map
settings



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


RE : [GENERAL] mod_auth_pgsql 2.0.1 don't close the backend connection

2003-10-03 Thread Bruno BAGUETTE
> Yes, the mod_auth_pgsql series for Apache 2 is broken as you 
> describe.  It does not release the connection, but the change 
> is intentional.  Looking at the Changelog I see:
> 
> - now we reuse database connection,  2x speedup !!
> 
> Perhaps this is a good thing with different MPMs, but the 
> standard prefork module can exhaust the available connections 
> quite quickly as you describe.  Here is a patch for 
> mod_auth_pgsql 2.0.1 that reverts it to the previous behavior 
> of a new connection per authentication.

I'm getting many errors when I do the make on the patched files. Have
other people successfully compiled the patched mod_auth_pgsql 2.0.1?

I've attached the error log to this email. Do you get the same errors
as me, or have I missed something?
 
> I have added the maintainer to the cc: list, perhaps he has 
> some input?

I don't know; I've written to the maintainer, but so far I have never
received an answer from him. Is there a CVS repository somewhere for that
project?

Regards,

---
Bruno BAGUETTE - [EMAIL PROTECTED] 


make_errors.log
Description: Binary data

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Resources for Progress conversion...

2003-10-03 Thread John Wells
Sorry about the repost here guys... I sent this from my other email
(non-subscribed), but sent an email to the list admin before reposting to
get him/her to cancel it.  Apparently some miscommunication there. :)


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Can SQL return a threaded-comment-view result set?

2003-10-03 Thread Chris
[I also posted this to comp.databases but since I'm actually using
PostgreSQL I would be content with a pgsql specific answer, so I
also posted this to comp.databases.postgresql.novice because I'm
a novice and it looked like a good group, but then I realized that
this might not be a novice question even though I'm a novice, so
I guess this group comp.databases.postgresql.general is the last
that I'll try.] 

I think I already know that the answer is that this can't be done, but
I'll ask anyways.

Suppose you want to use an RDBMS to store messages for a threaded
message forum like usenet and then display the messages. A toy table
definition (that I've tried to make standards compliant) might look
like:

create table messages (
   message_id integer,
   in_reply_to integer,
   created date,
   author varchar(20),
   title varchar(30),
   message varchar(256),
   primary key (message_id)
);


The in_reply_to field, if not null, means that the message is a reply
to the message with the message_id it has stored. Suppose now that we
populate the database with a 5 message discussion.


insert into messages values
 (1, null, '2003-09-01', 'John', 'Favorite DB?',
 'What is your favorite database?');
insert into messages values
 (2, null, '2003-09-02', 'Mike', 'New DB2 benchmarks',
 'I just posted some new DB2 benchmarks.');
insert into messages values
 (3,1, '2003-09-03', 'Mike', 'Re: Favorite DB?',
 'I\'d say DB2.');
insert into messages values
 (4,1, '2003-09-05', 'Dave', 'Re: Favorite DB?',
 'I\'m an Oracle man myself.');
insert into messages values
 (5,3, '2003-09-07', 'John', 'Re: Favorite DB?',
 'DB2? I thought you liked free databases?');


If we rendered an oldest-first threaded view of the discussion it
would look like:

Author: John
Title:  Favorite DB?
Date:   2003-09-01
What is your favorite database?

Author: Mike
Title:  Re: Favorite DB?
Date:   2003-09-03
I'd say DB2.

Author: John
Title:  Re: Favorite DB?
Date:   2003-09-07
DB2? I thought you liked free databases?

Author: Dave
Title:  Re: Favorite DB?
Date:   2003-09-05
I'm an Oracle man myself.

Author: Mike
Title:  New DB2 benchmarks
Date:   2003-09-02
I just posted some new DB2 benchmarks.


My question is: is it possible to use pure SQL to return a result set
that would make rendering a threaded view like the above really easy?
That is, is there an SQL query that would return something like:


 i | r |  created   | auth |       title        |  message  | nesting
---+---+------------+------+--------------------+-----------+---------
 1 |   | 2003-09-01 | John | Favorite DB?       | What is y | 0
 3 | 1 | 2003-09-03 | Mike | Re: Favorite DB?   | I'd say D | 1
 5 | 3 | 2003-09-07 | John | Re: Favorite DB?   | DB2? I th | 2
 4 | 1 | 2003-09-05 | Dave | Re: Favorite DB?   | I'm an Or | 1
 2 |   | 2003-09-02 | Mike | New DB2 benchmarks | I just po | 0


If I had an SQL query that could return that then it would be very
easy to have a computer program print threaded views like the one
above.
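
For what it's worth, once recursive queries are available (WITH RECURSIVE,
added in much later PostgreSQL releases, 8.4 and up), a query along these
lines produces exactly that result set; a sketch against the messages table
above:

WITH RECURSIVE thread AS (
    SELECT message_id, in_reply_to, created, author, title, message,
           0 AS nesting,
           ARRAY[message_id] AS path          -- e.g. {1}; used only for ordering
    FROM messages
    WHERE in_reply_to IS NULL
  UNION ALL
    SELECT m.message_id, m.in_reply_to, m.created, m.author, m.title, m.message,
           t.nesting + 1,
           t.path || m.message_id             -- e.g. {1,3}, then {1,3,5}
    FROM messages m
    JOIN thread t ON m.in_reply_to = t.message_id
)
SELECT message_id AS i, in_reply_to AS r, created, author, title, message, nesting
FROM thread
ORDER BY path;

Sorting by the accumulated path array yields exactly the depth-first threaded
ordering shown above.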

If this can't be done, then do any of you have recommendations about
the best way to accomplish this with the least amount of inefficient
back-and-forth between the database and, say, Java or some other
language?

Thank you very much in advance for any answers! This has been a
frustrating matter for me.

Chris

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] migrate from postgres to mysql

2003-10-03 Thread Fabio Benavides Murillo
Hello

I have a problem: I want to migrate the data from a Postgres database to a
MySQL database, because I need to move to Windows.

I tried making a Postgres backup using pg_dump, but this file doesn't have
the inserts in Transact-SQL.
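
For what it's worth, pg_dump of this era can emit plain INSERT statements
instead of COPY, which are much easier to adapt to another database's dialect
(the database and file names below are illustrative):

pg_dump -d mydb > mydb_inserts.sql    # -d: dump data as INSERT commands
pg_dump -D mydb > mydb_inserts.sql    # -D: INSERTs with explicit column names

You will still have to strip or translate Postgres-specific DDL by hand.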

Any help is welcome

txs in advance!!


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Can SQL return a threaded-comment-view result set?

2003-10-03 Thread David W Noon
On Thursday 02 Oct 2003 09:13 in
<[EMAIL PROTECTED]>, Chris
([EMAIL PROTECTED]) wrote:

> Suppose you want to use an RDBMS to store messages for a threaded
> message forum like usenet and then display the messages. A toy table
> definition (that I've tried to make standards compliant) might look
> like:
> 
> create table messages (
>message_id integer,
>in_reply_to integer,
>created date,
>author varchar(20),
>title varchar(30),
>message varchar(256),
>primary key (message_id)
> );
> 
> 
> The in_reply_to field, if not null, means that the message is a reply
> to the message with the message_id it has stored. Suppose now that we
> populate the database with a 5 message discussion.

You will need a second table, called a path enumeration table. Joe Celko
wrote up this technique in his book "SQL For Smarties".

I think I can dig up some sample SQL for you, as I used this technique
several times a few years ago -- although on DB2 rather than PostgreSQL.
Since the SQL can be a bit intricate, I have set follow-ups to
comp.databases.postgresql.sql, as it would be more on-topic there.

However, I recommend Joe Celko's book, as it explains the technique as well
as demonstrates it.
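
A minimal sketch of the path-enumeration idea (my own toy rendering, not
Celko's exact schema): store, alongside each message, a string encoding its
position in the tree, and sort on it.

CREATE TABLE message_paths (
    message_id integer NOT NULL REFERENCES messages,
    path       varchar(255) NOT NULL  -- e.g. '0001', '0001.0003', '0001.0003.0005'
);

SELECT m.*
FROM messages m JOIN message_paths p ON m.message_id = p.message_id
ORDER BY p.path;

The nesting depth falls out of the path as well (count its separators), at the
cost of maintaining the path strings on insert.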
-- 
Regards,

Dave  [RLU#314465]
==
[EMAIL PROTECTED] (David W Noon)
Remove spam trap to reply via e-mail.
==

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Slow SELECT

2003-10-03 Thread psql-mail
I am running a SELECT to get all tuples within a given date range. This
query is much slower than I expected - am I missing something?

I have a table 'meta' with a column 'in_date' of type timestamp(0), and I
am trying to select all records within a given date range. I have an index
on 'in_date' and I also have an index on date(in_date). The queries I am
doing are between dates rather than timestamps.

Immediately prior to running the queries shown below, a VACUUM ANALYZE
was run.

The query planner seems to be off by a factor of 10 for cost and number
of rows. Is this the cause of the slow performance?

I have 6 million records, with dates spread fairly evenly between the end
of 2001 and now.

I was very surprised to see the query take over 20 minutes when using
the date(in_date) index, and more surprised to see the seq scan over
in_date using timestamps take only 10 minutes.

Both are taking too long in my opinion! I was hoping for less than 10
seconds. Is this too optimistic?

Any suggestions much appreciated.

I am using RH_AS_3 on an IBM x450 quad Xeon ia64, 4GB mem (1GB shared
buffers for the postmaster). When running queries, the processor it is
running on sits at 15-20% usage and the iowait goes up to 80-99%
(fiber-attached RAID 0 - yes, I know it's not resilient).

testdb=# EXPLAIN ANALYZE SELECT item_id, in_date FROM meta
         WHERE date(in_date) >= '2002-03-01' AND date(in_date) < '2002-04-01'
         ORDER BY in_date DESC;
                               QUERY PLAN
--------------------------------------------------------------------------
 Sort  (cost=122755.65..122830.64 rows=29996 width=50)
       (actual time=1248326.17..1248608.39 rows=261305 loops=1)
   Sort Key: in_date
   ->  Index Scan using meta_in_date_date_index on meta
         (cost=0.00..120525.09 rows=29996 width=50)
         (actual time=0.00..1244835.94 rows=261305 loops=1)
         Index Cond: ((date(in_date) >= '2002-03-01'::date) AND
                      (date(in_date) < '2002-04-01'::date))
 Total runtime: 1248887.70 msec
(5 rows)


Here are the stats on the in_date column if they're any use...

testdb=# SELECT * FROM pg_stats WHERE tablename = 'meta' AND attname = 'in_date';
-[ RECORD 1 ]-----+------------------------------------------------------
schemaname        | public
tablename         | meta
attname           | in_date
null_frac         | 0
avg_width         | 8
n_distinct        | -1
most_common_vals  |
most_common_freqs |
histogram_bounds  | {"2001-10-18 17:28:23","2001-12-28 19:31:06",
                  |  "2002-03-14 19:59:08","2002-05-27 08:28:04",
                  |  "2002-07-31 14:06:06","2002-10-09 19:09:49",
                  |  "2002-12-21 03:58:46","2003-03-02 21:41:37",
                  |  "2003-05-09 16:12:39","2003-07-22 05:13:18",
                  |  "2003-09-30 13:48:04"}
correlation       | -0.000184019
(1 row)


Here is the same query as above but using timestamp(0)'s instead of 
dates. 

testdb=# EXPLAIN ANALYZE SELECT item_id, in_date FROM meta
         WHERE in_date >= '2002-03-01' AND in_date < '2002-04-01'
         ORDER BY in_date DESC;
                               QUERY PLAN
--------------------------------------------------------------------------
 Sort  (cost=797371.98..797995.09 rows=249246 width=50)
       (actual time=616906.25..617183.58 rows=261305 loops=1)
   Sort Key: in_date
   ->  Seq Scan on meta  (cost=0.00..775030.55 rows=249246 width=50)
         (actual time=19.53..611541.03 rows=261305 loops=1)
         Filter: ((in_date >= '2002-03-01 00:00:00'::timestamp without time zone)
                  AND (in_date < '2002-04-01 00:00:00'::timestamp without time zone))
 Total runtime: 617446.29 msec
(5 rows)
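
For what it's worth, the correlation of roughly zero in the pg_stats output
above means rows in any given date range are scattered all over the heap, so
the index scan does one random heap fetch per row. One thing worth trying - a
sketch only, noting that CLUSTER rewrites the table under an exclusive lock,
and the index name here is a guess at what the plain in_date index is called:

CLUSTER meta_in_date_index ON meta;  -- physically reorder the heap by in_date
ANALYZE meta;                        -- correlation for in_date should now be ~1

After that, range scans over in_date should touch far fewer heap pages.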


-- 

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Query FKey Constraint Table and Column Names

2003-10-03 Thread Ian Harding
I need to find out which tables, and columns in that table, reference a 
given table.

I would like to use the information_schema views, but am stuck at 7.3.X 
right now.

Here is what I came up with so far
SELECT pcl.relname AS thistable,
   pcl1.relname AS referencesthistable
FROM ((pg_constraint pc JOIN pg_class pcl
   ON ((pc.conrelid = pcl.oid)))
   JOIN pg_class pcl1 ON ((pc.confrelid = pcl1.oid)));
which is fine as far as it goes.  What is not readily apparent is how to 
get the column names.  I see there are conkey and confkey in 
pg_constraint, which seem to be arrays of integers that might just 
relate to attnum in pg_attribute.  

Is there already a view/function that will get me this info, and if not, 
am I barking up the right tree here?  I seldom tinker in system tables, 
because I am too lazy to learn them, and because it seems to be bad 
form.  They are none of my business, really.

Looking forward to the information_schema crutch!
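
Conkey and confkey do indeed line up with pg_attribute.attnum. A sketch that
extends the query above to pull out the column names, under the simplifying
assumption of single-column foreign keys (multi-column keys would need the
arrays unrolled):

SELECT pcl.relname   AS thistable,
       att.attname   AS thiscolumn,
       pcl1.relname  AS referencesthistable,
       att1.attname  AS referencedcolumn
FROM pg_constraint pc
JOIN pg_class pcl      ON pc.conrelid  = pcl.oid
JOIN pg_class pcl1     ON pc.confrelid = pcl1.oid
JOIN pg_attribute att  ON att.attrelid  = pc.conrelid
                      AND att.attnum    = pc.conkey[1]
JOIN pg_attribute att1 ON att1.attrelid = pc.confrelid
                      AND att1.attnum   = pc.confkey[1]
WHERE pc.contype = 'f';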

Ian



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: RE : [GENERAL] mod_auth_pgsql 2.0.1 don't close the backend

2003-10-03 Thread Kris Jurka


On Fri, 3 Oct 2003, Bruno BAGUETTE wrote:

> I'm getting many errors when I do the make on the patched files. Is
> there other people that have compiled successfully the patched
> mod_auth_pgsql 2.0.1 ?
>
> I've attached to this email the error log. Do you have the same errors
> than me or do I have missed something ?

It cannot find the necessary pg header files.  If you haven't installed
them you need to do so.  If you've installed them someplace other than
/usr/local/pgsql/include you need to adjust the Makefile to reflect that.
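
For example, if a packaged install put the headers under /usr/include/pgsql
(an illustrative path - check where libpq-fe.h actually lives on your system),
the include flag would change along these lines:

# hypothetical Makefile tweak; the variable name depends on the actual Makefile:
CFLAGS += -I/usr/include/pgsql    # instead of -I/usr/local/pgsql/include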

> I don't know, I've written to the maintainer but until now I never
> received any answer from him. Is there a CVS somewhere with that project

Not that I know of.

Kris Jurka


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Tsearch2 Causing Backend Crash

2003-10-03 Thread Oleg Bartunov
Hmm,

it's weird. Could you provide us with a backtrace?
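
If it helps, a typical recipe for getting one (paths and the <dboid>
directory name are illustrative placeholders):

ulimit -c unlimited       # in the postmaster's environment, then restart it
# ... reproduce the crash, then:
gdb /usr/local/pgsql/bin/postgres $PGDATA/base/<dboid>/core
(gdb) bt

The core file normally lands in the directory of the database the crashed
backend was connected to.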

Oleg
On Tue, 30 Sep 2003 [EMAIL PROTECTED] wrote:

> After applying the patches supplied so far and also trying the lastest
> stable tar.gz for tsearch2 ( downloaded 24th of september)
>
> I am still experiencing the same issue as previously described:
> I try to do a
>
> SELECT to_tsvector( 'default', 'some text' )
>
> The backend crashes.
>
> SELECT to_tsvector( 'default',  )
> does not crash
>
> Any more advice or suggestions?
>
> Thanks!
>
>

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Server recommendations

2003-10-03 Thread Christopher Browne
Oops! [EMAIL PROTECTED] (Dennis Gearon) was seen spray-painting on a wall:
> Anyone got links to good db server boxes,  not rackmount though?
>
> Include any for HP, Gateway, etc.

It's the components that matter, more so than the sticker on the front
of the box.

Presently, I have a Dell 6600 PowerEdge (I think that's the model)
named "hathi" (apparently that's Hindi for "elephant") under my desk;
4 Pentium IV's, 12 72GB drives, 8GB RAM, and one of those MegaRAID
controllers that has been discussed lately.  It's being used for "data
warehouse" stuff, but I suspect it would make a better TP server than
anything else we've got locally.

It was consistently taking about 6-7 minutes to run a data load that,
on the "production" TP server (2 CPUs, only a couple SCSI drives),
took about 2.5h to load.  Arguably, I should have done the load on
"hathi," and FTPed a tarball of the resulting database to production,
as that almost certainly would have saved a couple of hours!  (There
were good reasons to not imagine that as a rational answer, but it's a
fun idea...)

If it weren't that the heat and the noise of the array of 6 fans was
annoying everyone around me, I'd like to keep it as my desktop box,
but any time I voice the suggestion, everyone objects vigorously :-).

But seriously, look for the right components, and make sure you have
goodly redundancy of things like fans and power supplies.  Battery
backed cache on a RAID controller is just _gold_, performance-wise;
more RAM and more SCSI disks are always a nice thing too.
-- 
If this was helpful,  rate me
http://www.ntlug.org/~cbbrowne/sap.html
I am not a number!
I am a free man!

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Postgres vs. Progress performance

2003-10-03 Thread Christopher Browne
[EMAIL PROTECTED] ("John Wells") writes:
> To that end, I've also started studying up on Postgresql.  It seems to
> have all the necessary features for a transaction heavy DB.  The recent
> release is 7.3.  Of course, "the proof will be in the pudding."  We
> average 2.5 million transactions per day or 800 per second.
> Unfortunately, we would have no way of testing that until we committed to
> getting the business logic moved over and had something to test it with.
> This is a bit of a "catch 22" situation.  Just wished I knew of someone
> locally who was running Postgresql in such a heavy environment.  I'd love
> to find out how it performs for them. ---

The killer question is what, exactly, is being done 800 times per second.

I have seen PostgreSQL handling tens of millions of "things" per day,
when those things are relatively small and non-interacting.  If most
of the 800 are read-only, then that seems not at all frightening.

If the activity is update-heavy, with complex interactions, then the
"level of challenge" goes up, irrespective of what database system you
plan on using.

It would seem surprising for a well-run PostgreSQL site to not be
quite readily as capable as Progress on similar hardware, but it is
not a trivial task to verify that with something resembling your kind
of transaction load.

What you, in effect, need to do is to construct a prototype and see
how it holds up under load.  That's a nontrivial amount of work,
irrespective of the database in use.

I think you'll need to construct that prototype, perhaps as a set of
scripted "clients" that you can spawn to hammer at your "server."  A
wise approach is to write this in a somewhat generic fashion so that
you can try it out on several different databases.  Or so that you can
at least express, to management, the possibility of doing so :-).

Question: What kind of hardware are you using for the present system?
-- 
output = reverse("ofni.smrytrebil" "@" "enworbbc")

Christopher Browne
(416) 646 3304 x124 (land)

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] validation of postgresql functions

2003-10-03 Thread giant food
Hi, I'm in the process of converting an Oracle database into
postgresql. One thing I've noticed is that postgresql functions keep
the entire function body in a string, e.g.:

CREATE OR REPLACE FUNCTION f_do_something(varchar, numeric, varchar)
  RETURNS varchar AS
'
...
'
  LANGUAGE 'plpgsql' VOLATILE;

The problem I'm having with this is that I can never find any errors
in my functions until runtime, unlike with Oracle where the "create
function" script will fail if, for example, I misspelled a table name.

Is there any way to force postgresql to compile functions, without
running them first?
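
One common workaround, a sketch only (it does actually execute the function,
so it needs safe, representative arguments): call the function inside a
transaction and roll back, so runtime errors such as a misspelled table name
surface without leaving side effects:

BEGIN;
SELECT f_do_something('x', 1, 'y');
ROLLBACK;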

--Frank

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Schema backup

2003-10-03 Thread Greg
I also have trouble with pg_dump. I have a database called 'shipping' and
it has many tables (I populated them via a script). To dump the database I
did these steps:

su greg and then pg_dump > /tmp/greg.dmp. A file greg.dmp gets created, but
it has nothing in it (0 bytes). Could you advise me on what went wrong, please?

Thanks in advance, Greg


""Claudio Lapidus"" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hello all
>
> I need to transfer a database installation from one host to another. I
need
> to dump all users, databases, schemas, stored procedures, triggers, etc.
but
> no actual data at all. What I try to achieve is a fresh clone ready to
run.
> I tried pg_dump -s and pg_dumpall, but somewhere I'm missing something,
so:
> what is the procedure to dump all database structure alone, and what is
the
> proper reload procedure?
>
> TIA
> cl.
>
> _
> The new MSN 8: smart spam protection and 2 months FREE*
> http://join.msn.com/?page=features/junkmail
>
>
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
>



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread Tom Lane
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> On Fri, Oct 03, 2003 at 10:08:43 -0400,
>   John Wells <[EMAIL PROTECTED]> wrote:
>> Does anyone provide printed copies of the documentation for a fee?  I know
>> some companies do this for other apps...

> You should be able to print the documentation on a printer. The released
> versions come with postscript and/or pdf versions of the documentation.
> The beta versions don't and it is a pain to generate the above.

In theory you can build the postscript versions for yourself, per the
procedure here:
http://developer.postgresql.org/docs/postgres/docguide-build.html
after you've installed all the tools listed on the preceding page.

Building the HTML version is pretty simple, but it does look like
making a nice printed version is tedious ... which, no doubt, is
why no one does it for beta releases.

If you are happy with reading docs for back releases, Prime Time
Freeware has PDF and hardcopy versions I believe:
http://www.ptf.com/

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] LC_COLLATE=C not working

2003-10-03 Thread Tom Lane
"Robert Wille" <[EMAIL PROTECTED]> writes:
>> You probably still have LC_ALL set to something else.  LC_ALL overrides
>> LC_COLLATE and friends, which in turn override LANG.

> Nope. Any other ideas?

Please use pg_controldata to verify the LC_ settings on both databases.
I suspect they are not really both C.
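
The relevant lines look something like this (illustrative output from a
C-locale cluster; point it at each server's data directory):

$ pg_controldata /usr/local/pgsql/data | grep LC_
LC_COLLATE:                           C
LC_CTYPE:                             C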

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Server recommendations

2003-10-03 Thread Shridhar Daithankar
Christopher Browne wrote:
> Oops! [EMAIL PROTECTED] (Dennis Gearon) was seen spray-painting on a wall:
>> Anyone got links to good db server boxes,  not rackmount though?
>>
>> Include any for HP, Gateway, etc.
>
> It's the components that matter, more so than the sticker on the front
> of the box.
>
> Presently, I have a Dell 6600 PowerEdge (I think that's the model)
> named "hathi" (apparently that's Hindi for "elephant") under my desk;
> 4 Pentium IV's, 12 72GB drives, 8GB RAM, and one of those MegaRAID
> controllers that has been discussed lately.  It's being used for "data
> warehouse" stuff, but I suspect it would make a better TP server than
> anything else we've got locally.

Heh.. "hathi" is quite the right description for that.. :-)

From what I read on the postgresql lists, it seems that Opteron-based machines
are doing quite a good job as database servers with the right kind of disks.
You could look at any such machine for a database server..

Check these..

http://www.polywell.com/us/rackservers/poly1u2c.asp
http://www.opteronics.com/opteron-servers.htm

IMO they could be better machines for databases. Get a 64-bit Linux kernel and
run 32-bit PostgreSQL on it. It should work like a charm..

HTH

 Shridhar



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] updating records

2003-10-03 Thread Gene Vital
Hi all.
I am using Visual FoxPro via ODBC to update records on a 7.3.4 
PostgreSql server and it appears that it always uses delete/insert 
instead of updating the current record. Can this be changed to update 
the existing record instead?
--
Eugene Vital
Any technology indistinguishable from magic is insufficiently advanced.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Schema backup

2003-10-03 Thread Andrew L. Gould
On Tuesday 30 September 2003 09:15 pm, Greg wrote:
> I also have trouble with pg_dump. I have a database called 'shipping' and
> it has many tables (I populated them via a script). To dump the database I
> did these steps:
>
> su greg and then pg_dump > /tmp/greg.dmp. A file greg.dmp gets created, but
> it has nothing in it (0 bytes). Could you advise me on what went wrong, please?
>
> Thanks in advance, Greg
>

You forgot to tell pg_dump the name of the database.  Try:
pg_dump shipping > /tmp/greg.dmp

Or, for a gzipped backup:
pg_dump shipping | gzip -c > /tmp/greg.dmp.gz
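
And later, to restore from the gzipped dump (a sketch; it assumes the target
database already exists):

gunzip -c /tmp/greg.dmp.gz | psql shipping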

Best of luck,

Andrew Gould


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] updating records

2003-10-03 Thread Bruno Wolff III
On Fri, Oct 03, 2003 at 11:27:11 -0400,
  Gene Vital <[EMAIL PROTECTED]> wrote:
> Hi all.
> I am using Visual FoxPro via ODBC to update records on a 7.3.4 
> PostgreSql server and it appears that it always uses delete/insert 
> instead of updating the current record. Can this be changed to update 
> the existing record instead?

Depending on why you want to do this, it might not help. Postgres
uses MVCC, and updates are effectively inserts plus deletes.
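
A quick way to see this for yourself (table and column names are made up):

SELECT ctid, xmin FROM mytable WHERE id = 1;
UPDATE mytable SET val = val + 1 WHERE id = 1;
SELECT ctid, xmin FROM mytable WHERE id = 1;  -- new ctid and xmin: a new row
                                              -- version was written; the old
                                              -- one is dead until VACUUM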

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] group by

2003-10-03 Thread Kris Jurka


On Fri, 3 Oct 2003, Kathy Zhu wrote:

> Hi,
>
> I noticed a weird thing here.
>
> version 7.2.1
> on Solaris
>
> table "test", has a field "state".
> There are 4 "state" values, 1, 2, 3, 4.
>
> select count(*) from test group by state;
> took 11500 msec
>
> but
>
> select count(*) from test where state = 1;
> select count(*) from test where state = 2;
> select count(*) from test where state = 3;
> select count(*) from test where state = 4;
> total took 626 msec
>

I believe Solaris's qsort implementation (which is used for the group by)
has problems dealing with large numbers of similar values.

I think in later versions of pg our own qsort is used.

Kris Jurka



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread Peter Eisentraut
Tom Lane writes:

> Building the HTML version is pretty simple, but it does look like
> making a nice printed version is tedious ... which, no doubt, is
> why no one does it for beta releases.

For printable versions, the size and complexity of the PostgreSQL
documentation break every tool known to me.  That is the reason why
printable versions are so hard to find.

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread scott.marlowe
On Fri, 3 Oct 2003, Vivek Khera wrote:

> > "sm" == scott marlowe  writes:
> 
> sm> also, turning off fsync during the load helps a lot.  Be sure to turn it 
> sm> back on when you're done of course.
> 
> Only if you don't have a battery-backed cache on your RAID.  If you
> do, it won't likely make a big difference.  For me it was about 2
> seconds over a 4-hour restore.

True, very true.  Have you done the "pull the plug" test on it to make 
sure it really works, by the way?


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] Beta 4 of 7.4

2003-10-03 Thread Joshua D. Drake
Native Win32 is slated for 7.5.

Sincerely,

Joshua Drake

Relaxin wrote:
> Does this have native Win32 support?

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-222-2783 - [EMAIL PROTECTED] - http://www.commandprompt.com
Editor-N-Chief - PostgreSQl.Org - http://www.postgresql.org


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread scott.marlowe
On Fri, 3 Oct 2003, Alvaro Herrera wrote:

> On Fri, Oct 03, 2003 at 01:06:26PM -0600, scott.marlowe wrote:
> 
> > also, turning off fsync during the load helps a lot.  Be sure to turn it 
> > back on when you're done of course.
> 
> I'm not sure I understand why this is so.  If I turn fsync off, it means
> that I won't force the kernel to write WAL logs to disk, but they will
> have to be written eventually.  If you have tons of RAM it may well be
> that the kernel will just keep dirty buffers in RAM, but if not there
> should not be any difference.  Am I missing something?

Yes, you are.  Basically, with fsync on, things have to happen in order.

I.e.

write to WAL what you're gonna do.  WAIT for confirmation on write
write the tuples out.  wait for confirmation
checkpoint the WAL.  wait for confirmation

Notice the wait for confirmation above.  Without fsync, there's no wait, 
you just write it all out at once, and hope the machine / database doesn't 
ever crash in the middle of a transaction.

Give it a try, turn off fsync, run pgbench -c 4 -t 1000, then turn it back 
on and see how much it slows down.
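
Something like this, with an illustrative database name (fsync lives in
postgresql.conf, so toggle it there and restart or reload the postmaster):

# in postgresql.conf:  fsync = false
pgbench -i -s 10 testdb        # initialize the pgbench tables once
pgbench -c 4 -t 1000 testdb    # note the tps reported
# set fsync = true again, restart, rerun the same command, and compare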

Pull the plug while the transactions are running with fsync on, and your 
machine, assuming it has a meta-data journaling file system, will come 
right back, and postgresql will replay the WAL files and you'll have a 
nice consistent database.

Turn off fsync, initiate many transactions, pull the plug, and watch
your corrupted database refuse to start back up.

Note that if you're running on IDE drives, you already ARE probably 
running with fsync off if write caching is enabled, so you'll need to turn 
it off (hdparm -W0 /dev/hdx in linux) to ensure fsync actually works.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread Vivek Khera
> "sm" == scott marlowe  writes:

sm> also, turning off fsync during the load helps a lot.  Be sure to turn it 
sm> back on when you're done of course.

Only if you don't have a battery-backed cache on your RAID.  If you
do, it won't likely make a big difference.  For me it was about 2
seconds over a 4-hour restore.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] PostgreSQL Beta4 Tag'd and Bundle'd ...

2003-10-03 Thread Marc G. Fournier


On Fri, 3 Oct 2003, Relaxin wrote:

> Does this have native Win32 support?

As has been mentioned *several* times already, v7.4 will *not* have Win32
support in it.


>
> ""Marc G. Fournier"" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> >
> > Well folks, we are now coming into the home stretch of another long
> > development cycle, and it's time for more vigorous and extensive testing
> > ...
> >
> > Last night, we bundled up Beta4, which is the first one that we've
> > officially announced "publicly", with the other 3 having been only
> > announced on -hackers ...
> >
> > This release, depending on the bug reports received, will most likely flow
> > into our first Release Candidate by end of next week, so we encourage
> > every(and any)one that can to download and test her, so that our first
> > Release Candidate can be as clean as possible ...
> >
> > As we did the bundling last night, most mirrors should already have a copy
> > of it available under /pub/source/v7.4 ...
> >
> > We've also started to bundle up bzip2, along with the .gz files, for those
> > that wish to download those ... with appropriate md5 files ...
> >
> >
> >
> > ---(end of broadcast)---
> > TIP 3: if posting/reading through Usenet, please send an appropriate
> >   subscribe-nomail command to [EMAIL PROTECTED] so that your
> >   message can get through to the mailing list cleanly
> >
>
>
>
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
>

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread Tom Lane
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Yes, you are.  Basically, with fsync on, things have to happen in order.
> I.e.
> write to WAL what you're gonna do.  WAIT for confirmation on write
> write the tuples out.  wait for confirmation
> checkpoint the WAL.  wait for confirmation

Not really.  With fsync on, we *only* sync the WAL writes.  Data writes
can happen whenever, so long as we know the corresponding WAL writes
went down first.  We only wait for data writes to complete before
considering that a checkpoint is complete --- which is something that is
not in the main line of execution and doesn't block other activity.

This is one good reason for keeping WAL on a separate drive from the
data files --- you are then freeing the system to schedule data I/O as
optimally as it can.
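
(The usual way to do that in this era is a symlink - a sketch, with
illustrative paths; stop the postmaster first:

mv $PGDATA/pg_xlog /otherdisk/pg_xlog
ln -s /otherdisk/pg_xlog $PGDATA/pg_xlog

WAL writes then go to the other spindle while data files stay put.)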

> Note that if you're running on IDE drives, you already ARE probably 
> running with fsync off if write caching is enabled, so you'll need to turn 
> it off (hdparm -W0 /dev/hdx in linux) to ensure fsync actually works.

It'd be interesting to think about whether a write-caching IDE drive
could safely be used for data storage, if WAL is elsewhere.

Right offhand I think the only problem is how to know when it's safe
to consider a checkpoint complete.  Maybe all that would be needed is
a long enough time delay after issuing sync(2) in the checkpoint code.
Do these drives guarantee "data will be written within 30 seconds" or
something like that?  Or can the delay be indefinite when load is heavy?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] group by

2003-10-03 Thread Kathy Zhu
Hi,

I noticed a weird thing here.

version 7.2.1
on Solaris

table "test", has a field "state".
There are 4 "state" values, 1, 2, 3, 4.

select count(*) from test group by state;
took 11500 msec

but

select count(*) from test where state = 1;
select count(*) from test where state = 2;
select count(*) from test where state = 3;
select count(*) from test where state = 4;
total took 626 msec

Why ??

thanks,
kathy



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] group by

2003-10-03 Thread Nigel J. Andrews
On Fri, 3 Oct 2003, Kathy Zhu wrote:

> Hi,
> 
> I noticed a weird thing here.
> 
> version 7.2.1
> on Solaris
> 
> table "test", has a field "state".
> There are 4 "state" values, 1, 2, 3, 4.
> 
> select count(*) from test group by state;
> took 11500 msec
> 
> but
> 
> select count(*) from test where state = 1;
> select count(*) from test where state = 2;
> select count(*) from test where state = 3;
> select count(*) from test where state = 4;
> total took 626 msec
> 
> Why ??


First thought was caching: the disk blocks are cached in memory after the first
query, so all the others just fetch from there.

Second thought: well, I haven't really had it, but indexes and sorting are
sloshing around in my mind. I'm not sure how that applies here, since I can't
remember whether the first query would require a sort step after retrieving
all the tuples while the others would just use the index pages.

It is Friday though.


-- 
Nigel J. Andrews


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] group by

2003-10-03 Thread Kathy Zhu
I did a vacuum and got the same result.
I think the problem is that there is swapping going on for the GROUP BY when
there is a large number of rows in the table, 5000 in this case.

I guess I have to use GROUP BY with caution.

thanks for the all the replies though,
kathy

> Date: Fri, 03 Oct 2003 15:53:02 -0700
> From: Dennis Gearon <[EMAIL PROTECTED]>
> User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.4) 
Gecko/20030624
> X-Accept-Language: en-us, ru, es-mx
> To: Kathy Zhu <[EMAIL PROTECTED]>
> CC: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] group by
> Content-Transfer-Encoding: 7bit
> 
> When was the last time you vacuumed full?
> 
> Kathy Zhu wrote:
> 
> >Hi,
> >
> >I noticed a weird thing here.
> >
> >version 7.2.1
> >on Solaris
> >
> >table "test", has a field "state".
> >There are 4 "state" values, 1, 2, 3, 4.
> >
> >select count(*) from test group by state;
> >took 11500 msec
> >
> >but
> >
> >select count(*) from test where state = 1;
> >select count(*) from test where state = 2;
> >select count(*) from test where state = 3;
> >select count(*) from test where state = 4;
> >total took 626 msec
> >
> >Why ??
> >
> >thanks,
> >kathy
> >
> >
> >
> >---(end of broadcast)---
> >TIP 5: Have you checked our extensive FAQ?
> >
> >   http://www.postgresql.org/docs/faqs/FAQ.html
> >
> >  
> >
> 




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] pg_restore takes ages

2003-10-03 Thread scott.marlowe
On Fri, 3 Oct 2003, scott.marlowe wrote:

> On Fri, 3 Oct 2003, Tom Lane wrote:
> > 
> > It'd be interesting to think about whether a write-caching IDE drive
> > could safely be used for data storage, if WAL is elsewhere.
> 
> Well, I just so happen to have a machine with two drives in it.  I'll get 
> back to you on that.

Ok, I just tested it.  I put pg_xlog and pg_clog on a drive that was set 
to write cache disabled, and left the data on a drive where caching was 
enabled.  The tps on a pgbench -c 5 -t 500 on the single drive was 45 to 
55.  With the pg_[xc]log moved to another drive and all, I got up to 108 
tps.  About double performance, as you'd expect.  I didn't test the data 
drive with write caching disabled, but my guess is it wouldn't be any 
slower since pgsql doesn't wait on the rest.

I pulled the plug three times, and all three times the database came up in
recovery mode and successfully recovered.  I didn't bother testing whether
write caching would corrupt it, as I'm pretty sure it would; it certainly
did when everything was on one drive.

Would you like to try some kind of wal patch out on it while I've got it 
for testing?  I'd be glad to torture that poor little box some more if 
you're in the mood and the beta period is winding down.  It's running 7.4 
beta3, by the way.


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] PostgreSQL Beta4 Tag'd and Bundle'd ...

2003-10-03 Thread Marc G. Fournier

Well folks, we are now coming into the home stretch of another long
development cycle, and it's time for more vigorous and extensive testing
...

Last night, we bundled up Beta4, which is the first one that we've
officially announced "publicly", with the other 3 having been only
announced on -hackers ...

This release, depending on the bug reports received, will most likely flow
into our first Release Candidate by end of next week, so we encourage
every(and any)one that can to download and test her, so that our first
Release Candidate can be as clean as possible ...

As we did the bundling last night, most mirrors should already have a copy
of it available under /pub/source/v7.4 ...

We've also started to bundle up bzip2, along with the .gz files, for those
that wish to download those ... with appropriate md5 files ...



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] How to avoid users from viewing functions code

2003-10-03 Thread cad0022
Hi
the problem is how to prevent users from viewing function code (and, in
general, db structures) when connecting to the db by means of a pgAdmin
client. My db contains several tables and functions, and I need to let users
view only a restricted number of tables. It works for tables, as it is
possible to define access policies, but I didn't find anything like that for
functions.
The result is that everyone can see function code from all pgAdmin (II & III)
clients using a basic user account.


This is my PostgreSQL Client Authentication Configuration File

local   all         all                                       trust
host    all         all         127.0.0.1     255.255.255.255 trust
host    all         all         10.43.13.75   255.255.255.255 trust
host    template1   basic       0.0.0.0       0.0.0.0         md5
host    servizio    basic       0.0.0.0       0.0.0.0         md5


Does anyone have any ideas?
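
One blunt approach sometimes suggested - a sketch only, and heavily caveated,
since pgAdmin and psql themselves read the catalogs and may misbehave, so test
on a scratch cluster first (admin_group is an illustrative group name):

REVOKE SELECT ON pg_catalog.pg_proc FROM public;
GRANT SELECT ON pg_catalog.pg_proc TO GROUP admin_group;

Superusers bypass the revocation, but ordinary users would no longer be able
to read function source from pg_proc.prosrc.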

Thanks Paolo Mattioli



---(end of broadcast)---
TIP 8: explain analyze is your friend