Re: [GENERAL] Database cluster?

2000-11-30 Thread Gordan Bobic

> > > I am considering splitting the database into tables residing on
> > > separate machines, and connect them on one master node.
> > >
> > > The question I have is:
> > >
> > > 1) How can I do this using PostgreSQL?
> >
> > You can't.
>
> I'll jump in with a bit more info.  Splitting tables across multiple
> machines would do nothing more than make the entire system run at a
> snail's pace.  Yes, it would slow it down immensely, because you just
> couldn't move data between machines quickly enough.

I don't believe that is the case. In my situation, queries typically return
comparatively small amounts of data, around 100 records at most. The amount
of data that needs to be transferred is comparatively small, and even over
10 Mb Ethernet it would take at most about a second to transfer. That is a
much smaller delay than the query time itself, which can take 10 seconds or
more. Remember that I said there are tables with over 30M records? Doing
multi-table joins on tables like that takes a long time...
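A quick back-of-the-envelope check of that claim (a sketch; the ~1 KB
per-record size is an assumption, not a figure from the thread):

```python
# Estimate the time to move a small result set over a network link,
# ignoring latency and protocol overhead.
# Assumption: roughly 1 KB per record; 10 Mb/s link as in the post.

def transfer_time(records, bytes_per_record=1024, link_mbps=10):
    """Seconds to move a result set over a link of the given speed."""
    total_bits = records * bytes_per_record * 8
    return total_bits / (link_mbps * 1_000_000)

# 100 records, as described above: well under a second.
print(round(transfer_time(100), 3))   # 0.082
```

Even at a thousand records per query, the transfer stays under a second on
10 Mb Ethernet, so the wire time is indeed dwarfed by a 10-second query.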

So, splitting the data in such a way that one table is queried first, and
the tables joined to it are then queried in parallel, would cause a
significant speed-up.

For example, say we have tables T1, T2 and T3.

T1 has fields F1.1, F1.2 and F1.3. T2 has F2.1 and T3 has F3.1 (plus,
probably, lots of other fields).

Say I want to do
SELECT *
FROM T1, T2, T3
WHERE F1.1 = F2.1 AND F1.2 = F3.1 AND F1.3 = 'somedata';

Then F1.3 could be searched for 'somedata'. When the records are found,
they could be cross-matched remotely and in parallel, F1.1 = F2.1 on one
machine and F1.2 = F3.1 on another.
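The proposed plan can be sketched as follows. This is an illustration only:
`lookup_t2` and `lookup_t3` are hypothetical stand-ins for queries sent to
the remote machines, and the thread pool stands in for the two nodes working
in parallel.

```python
# Sketch of the plan: filter T1 locally, then probe the two remote
# tables in parallel. lookup_t2/lookup_t3 are hypothetical helpers that
# would issue the remote queries in a real system.
from concurrent.futures import ThreadPoolExecutor

def lookup_t2(keys):
    # Would run something like "SELECT ... FROM T2 WHERE F2.1 IN (...)" remotely.
    return {k: f"t2-row-{k}" for k in keys}

def lookup_t3(keys):
    # Would run something like "SELECT ... FROM T3 WHERE F3.1 IN (...)" remotely.
    return {k: f"t3-row-{k}" for k in keys}

# (F1.1, F1.2) pairs from the rows of T1 where F1.3 = 'somedata'.
t1_matches = [(1, 10), (2, 20)]

# Probe both remote tables concurrently.
with ThreadPoolExecutor() as pool:
    f2 = pool.submit(lookup_t2, [k for k, _ in t1_matches])
    f3 = pool.submit(lookup_t3, [k for _, k in t1_matches])
    t2_rows, t3_rows = f2.result(), f3.result()

# Stitch the partial results back together on the master node.
joined = [(a, t2_rows[a], t3_rows[b]) for a, b in t1_matches]
print(joined)
```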

This means that, depending on the type, configuration and usage of the
database, a potentially massive improvement in performance could be
achieved, especially on multi-table joins which span lots of BIG tables.

Somebody mentioned the fact that postgres uses IPC for communicating
between processes. I think there are tools for clustering (I am not sure if
Mosix supports transparently allowing IPC across nodes) which can work
around that.

>   Why?  Well, whenever you join two tables that are on different
> machines, the tables have to go across whatever sort of connection you
> have between the machines.  Even if you use gigabit ethernet, you are
> still running at a mere fraction of the bandwidth of the computer's
> internal bus - and at orders of magnitude greater latency.  You'd have
> lots of CPU's sitting around, doing absolutely nothing, waiting for
> data to come across the wire.

Gigabit ethernet has around the same bandwidth as the PCI bus. I suppose it
all depends on what machine you have running this. This would be true in the
case that the database server is a nice big Alpha with several CPUs.

> There are alternatives, such as IP-over-SCSI.  That reduces the
> latency of ethernet quite a bit, and gives you much more bandwidth
> (say, up to 160 megabytes/second).  However, that's still a pittance
> compared to the main system bus inside your computer.

But SCSI is still 160 MB/s burst (not sustained, unless you're using very
expensive arrays). And Gigabit ethernet is about 125 MB/s, roughly the peak
rate of 32-bit PCI, albeit with greater latency.
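For reference, the peak rates being compared in this exchange (a sketch;
sustained throughput in practice is lower for all three):

```python
# Peak bandwidths mentioned in this thread, converted to MB/s.
pci_32_33 = 32 * 33_000_000 / 8 / 1_000_000       # 32-bit, 33 MHz PCI
gigabit_ethernet = 1_000_000_000 / 8 / 1_000_000  # 1 Gb/s Ethernet
ultra160_scsi = 160                               # Ultra160 SCSI burst rate

print(pci_32_33)          # 132.0
print(gigabit_ethernet)   # 125.0
print(ultra160_scsi)      # 160
```

So gigabit Ethernet and 32-bit PCI really are in the same ballpark, which is
the point being made above.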

> That's one of the greatest hurdles to distributed computing.
> That's why the applications that are best adapted to distributed
> computing are those that don't require much data over the wire - which
> certainly doesn't apply to databases. : )

I think it depends on whether the problem is the amount of data, or fitting
it together.

Can somebody please explain to me further why I am wrong about all this?

Regards.

Gordan




Re: [GENERAL] How do I install pl/perl

2000-11-30 Thread Steve Heaven

At 10:02 30/11/00 +0100, Marcin Bajer wrote:
>Steve Heaven wrote:
>> 
>> At 13:06 29/11/00 -0500, Robert B. Easter wrote:
>> >When you compiled PostgreSQL, you have to give ./configure
>> >--with-perl so it will make the .so file it's looking for.  See
>> >./configure --help next time.
>> >
>> 
>> We installed from RPM not source. Do we have to do a re-install from source
>> to get this working ?
>> 
>> Steve
>> 
>
>I think not. There is a RPM called postgresql-perl-*.rpm
>- installing it should be enough. 

We already have that package installed. It provides the Perl Pg module to
interface to a Postgres backend, not to write stored functions.

Steve


-- 
thorNET  - Internet Consultancy, Services & Training
Phone: 01454 854413
Fax:   01454 854412
http://www.thornet.co.uk 



[GENERAL] backup and store oids

2000-11-30 Thread Gabriel Lopez


Hi all, I'm using postgresql-7.0.2 on Linux RedHat 6.2 system.

I need help with some questions:

1. I have a problem inserting oid objects into a table; not always, just
sometimes. I get the exception:

FastPath call returned FATAL 1:  my bits moved right off
Recreate index pg_attribute_relid_attnum_index.

 I also have this problem on Solaris 7. It appears even with a table as
simple as:
create table ttest (pkey int8, test oid);

2. When backing up my database I use:
pg_dump dbname > dbname.pgdump

but when I restore it:
cat dbname.pgdump | psql dbname

oid objects are not restored correctly. Is there any other way to back up
oid objects?

Thanks, Gabi.


--
Gabriel López Millán
Facultad de Informática -Universidad de Murcia
30001 Murcia - España (Spain)
Telf: +34-968-364644 E-mail: [EMAIL PROTECTED]






Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread The Hermit Hacker


Note that this is a Linux limitation ... and even then, I'm not quite sure
how accurate that is anymore ... the *BSDs have supported >2gb file
systems for ages now, and, since IBM supports Linux, I'd be shocked if
there was a 2GB limit on memory, considering a lot of IBM's servers support
up to 4 or 8GB of RAM ...

On Thu, 30 Nov 2000, Brian C. Doyle wrote:

> Another thing to remember about PostgreSQL is DB size vs. CPU bits
> 
> 8.1 CPU types - 32-bit or 64-bit
> Performance of 32-bit cpu machines will decline rapidly when the database 
> size exceeds 5 GigaByte. You can run 30 gig database on 32-bit cpu but the 
> performance will be degraded. Machines with 32-bit cpu imposes a limitation 
> of 2 GB on RAM, 2 GB on file system sizes and other limitations on the 
> operating system. Use the special filesystems for linux made by SGI, IBM or 
> HP or ext3-fs to support file-sizes greater than 2 GB on 32-bit linux 
> machines.
> For extremely large databases, it is strongly advised to use 64-bit 
> machines like Digital Alpha cpu, Sun Ultra-sparc 64-bit cpu, Silicon 
> graphics 64-bit cpu, Intel Merced IA-64 cpu, HPUX 64bit machines or IBM 
> 64-bit machines. Compile PostgreSQL under 64-bit cpu and it can support 
> huge databases and large queries. Performance of PostgreSQL for queries on 
> large tables and databases will be several times faster than PostgreSQL on 
> 32-bit cpu machines. Advantage of 64-bit machines are that you get very 
> large memory addressing space and the operating system can support very 
> large file-systems, provide better performance with large databases, 
> support much larger memory (RAM), have more capabilities etc..
> 
> found at http://www.linuxdoc.org/HOWTO/PostgreSQL-HOWTO-8.html
> 
> 
> At 02:50 PM 11/30/00 +, [EMAIL PROTECTED] wrote:
> 
> 
> >I plan to convert a Foxpro system to client/server - hopefully using 
> >PostGreSQL
> >(about 100 tables / 300 mb / 100 users)
> >
> >Firstly I heard a rumour that p-sql doesn't process queries in parellel, i.e.
> >performs them sequentially.
> >Is this true? If so it would surely make it impracticle when more than a few
> >clients are connected
> >I tried this out by running 2 VB programs via ODBC than randomly performed
> >queries - they appeared to work
> >in parallel - however I then started a PSQL session and entered a slow 
> >query it
> >appeared to stop the 2 VB programs until
> >it had completed. Anyone got the answer to this???
> >
> >The other question I have is how much memory I should really have to 
> >support 100
> >connected clients. There must be a formula / rule of thumb for this?
> >
> >I am hoping I can convince my customer to use postgresql but first I need to
> >convince myself it is up to the job :)
> >I am actually pretty impressed with it so far, its got a lot of functionality
> >that DB2 doesn't have
> >
> >Thanks,
> >
> >M Chantler
> >Southampton
> >
> >
> >
> >--
> >NOTICE:  The information contained in this electronic mail transmission is
> >intended by Convergys Corporation for the use of the named individual or 
> >entity
> >to which it is directed and may contain information that is privileged or
> >otherwise confidential.  If you have received this electronic mail 
> >transmission
> >in error, please delete it from your system without copying or forwarding it,
> >and notify the sender of the error by reply email or by telephone 
> >(collect), so
> >that the sender's address records can be corrected.
> 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Vivek Khera

> "LO" == Lamar Owen <[EMAIL PROTECTED]> writes:

LO> The 2GB size limits of ia32 come in to play due to byte addressing
LO> (versus word addressing) in ia32 plus the use of signed single register
LO> two's-complement integers.

LO> But, as always, I reserve the right to be wrong.

You are wrong.  The file size limit has to do with the data size of
your file offset pointer.  This is not necessarily a 32 bit quantity
on a 32-bit processor.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: [GENERAL] Can PostGreSQL handle 100 user database - more info

2000-11-30 Thread Adam Lang

Replies inline.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: <[EMAIL PROTECTED]>
To: "Adam Lang" <[EMAIL PROTECTED]>
Sent: Thursday, November 30, 2000 12:40 PM
Subject: Re: [GENERAL] Can PostGreSQL handle 100 user database - more info


>
>
> I am not sure what an ole db provider is? This must be another method of
> talking to the server from a client application. What advantages does it
> have?

Yes.  It is an abstraction layer, but it is Windows-only technology.  The
way it works is that the database has an OLE DB provider (much like ODBC).
ADO connects to the database using the OLE DB provider.  You write your
application using the ADO object model to interact with the database.
Biggest advantage:  as long as you have an OLE DB provider for that
datasource, your ADO code is universal.

Example.  You have an application that connects to MS SQL Server using ADO
and you extract data, run queries, etc.  Later you migrate to Oracle.  You
change your connection string (which is one line) and in most cases, you can
run your app without any other changes.  ADO is also able to connect to non
relational data sources:  Text files, VSAM, AS/400, etc.  Plus, the ole db
provider should be made to expose the database schema... so you can
manipulate data in an object oriented way, as well as poll the data source
for structure information.  A lot more information is at microsoft's
website.

Also, in a scenario where the data source does not have an ole db provider,
there is one supplied that will connect through ODBC.

>
> I have the open source ODBC client (and I know a Java version exists),
> it seems ok but I don't know if it handles things like transactions and
> other advanced functions.

If the OLE DB provider is made correctly, it should support anything that
the database allows.  I'm not too familiar with using the postgres ODBC
driver.  For the most part, I've come to the point where I have not made
many VB apps with a postgres backend, due to the fact that I have to use
the ODBC driver, which is a bit outdated (but it does work).

>
> It would obviously be important to have a good method of talking to P-sql
> from Windows since a lot of people will want to do this.

That has been my argument: a good connection method is needed to get into
the Windows arena.  Windows developers are spoiled.  No matter how much
you want to bad-mouth MS, they do give us some great development tools.
Unfortunately, postgres doesn't have anything to woo any Windows
developers over.




Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Lamar Owen

Vivek Khera wrote:
> LO> The 2GB size limits of ia32 come in to play due to byte addressing
> LO> (versus word addressing) in ia32 plus the use of signed single register
> LO> two's-complement integers.
 
> LO> But, as always, I reserve the right to be wrong.
 
> You are wrong.  The file size limit has to do with the data size of
> your file offset pointer.  This is not necessarily a 32 bit quantity
> on a 32-bit processor.

If the file offset pointer is a signed integer, then it holds.  That is
an OS specific issue as to the type of the pointer.
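The arithmetic behind that limit can be sketched quickly (an illustration,
not from the original thread): with a signed 32-bit file offset, the largest
representable position in a file is 2^31 - 1 bytes, i.e. just under 2 GB.

```python
# Largest file offset representable by a signed 32-bit integer,
# which is where the classic 2 GB file-size limit comes from.
max_offset = 2**31 - 1
print(max_offset)             # 2147483647 bytes
print(max_offset / 2**30)     # just under 2.0 GB
```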
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] Table & Column descriptions

2000-11-30 Thread Joel Burton

\d+  should show you the table schema with comments.
If you're looking for the actual data, it's in pg_description. The 
objoid field matches the oid field in pg_attribute (which is the 
"fields" table for pgsql).

On 30 Nov 2000, at 11:17, Dale Anderson wrote:

>I am able to add table and column descriptions, and I am also able
>to retrieve the table description.  The problem is that I can not
>find a way to retrieve the description comments on table
>columns  Any assistance would be greatly appreciated.
> 
> Dale.
> 


--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



RE: [GENERAL] Unanswered questions about Postgre

2000-11-30 Thread Mikheev, Vadim

> > That is what transactions are for. If any errors occur, then the
> > transacction is aborted. You are supposed to use 
> > transactions when you want either everything to occur
> > (the whole transaction), or nothing, if an error occurs.
> 
>   Yes.  There are certainly times when a transaction needs to be
> ABORTed.  However, there are many reasons why the database should not
> abort a transaction if it does not need to.  There is obviously no
> reason why a transaction needs to be aborted for syntax errors.  There
> is obviously no reason why a transaction needs to be aborted for say,
> trying to insert a duplicate primary key.  The -insert- can 
> fail, report it as such, and the application can determine if a rollback
> is necessary. If you don't believe me, here's two fully SQL-92
> compliant databases, Oracle and interbase, which do not exhibit this
> behavior:

Oracle & Interbase have savepoints. Hopefully PG will also have them in 7.2.

Vadim



Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Bruce Guenter

On Thu, Nov 30, 2000 at 01:48:43PM -0400, The Hermit Hacker wrote:
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2gb file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering alot of IBMs servers support
> up to 4 or 8GB of RAM ...

Correct.  With the 36-bit PAE extensions on PII and above CPUs, Linux
supports up to the full 64GB of physical RAM.  Individual processes are
limited to either 2GB or 3GB (or 3.5GB), depending on the kernel compile
option as to the division point between kernel and user memory.  Linux
also supports >2GB files (the kernel is limited to 2TB IIRC -- 2^32 512
byte blocks).
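Those figures check out arithmetically; a quick sketch:

```python
# PAE gives 36-bit physical addresses; the block-device limit quoted
# above is 2^32 blocks of 512 bytes each.
GB = 2**30
TB = 2**40

pae_addressable = 2**36 / GB        # physical RAM addressable with PAE
block_dev_limit = 2**32 * 512 / TB  # 2^32 half-KB blocks

print(pae_addressable)   # 64.0 (GB)
print(block_dev_limit)   # 2.0 (TB)
```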

Of course, on a 64-bit CPU, all these limitations are off, which really
makes them the platform of choice for heavy data manipulation (I/O).
-- 
Bruce Guenter <[EMAIL PROTECTED]>   http://em.ca/~bruceg/



Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Martin A. Marques

On Thursday 30 November 2000 14:48, The Hermit Hacker wrote:
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2gb file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering alot of IBMs servers support
> up to 4 or 8GB of RAM ...

As far as I know, that limitation has recently been removed with kernel 2.4.

Saludos... :-)

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentinahttp://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Marc SCHAEFER

On Thu, 30 Nov 2000, The Hermit Hacker wrote:

> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2gb file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering alot of IBMs servers support
> up to 4 or 8GB of RAM ...

Linux 2.2.x on ix86 only supports files up to 2 GB. Linux 2.4.x or any
64-bit platform (SPARC, Alpha) fixes this (through the Large File
Summit support, and a new libc).

Memory: up to 1 GB is supported stock, 2 GB by recompiling the kernel.
There is work in progress in 2.4 for supporting the > 32-bit ix86
addressing modes available in some processors.





Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Adam Lang

Here is a link that explains memory.  It is from a Windows 2000 magazine,
but it isn't very NT-specific.  It speaks in generalities.  I thought it
was a rather good article.

http://www.win2000mag.com/Articles/Index.cfm?ArticleID=7290

I don't think you need to be a subscriber to read it.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Vivek Khera" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, November 30, 2000 1:31 PM
Subject: Re: [GENERAL] Can PostGreSQL handle 100 user database?


> > "LO" == Lamar Owen <[EMAIL PROTECTED]> writes:
>
> LO> The 2GB size limits of ia32 come in to play due to byte addressing
> LO> (versus word addressing) in ia32 plus the use of signed single
> LO> register two's-complement integers.
>
> LO> But, as always, I reserve the right to be wrong.
>
> You are wrong.  The file size limit has to do with the data size of
> your file offset pointer.  This is not necessarily a 32 bit quantity
> on a 32-bit processor.
>
> --
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Vivek Khera, Ph.D.Khera Communications, Inc.
> Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
> AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/




Re: [GENERAL] Database cluster?

2000-11-30 Thread Doug Semig

I actually analyzed it once.  I came to the conclusion that to do it right
it would be easier to make an almost entirely new db but use the same
external interfaces as PostgreSQL.  To do a kludge of it, one might just
implement a tier that sits between the user and a bunch of standard
PostgreSQL backends.

It'd make a neat companion project, though.  Like PG/Enterprise or
PG/Warehouse or something.

Doug

At 04:02 PM 11/30/00 -, Gordan Bobic wrote:
>> You're almost describing a Teradata DBM.
>
>I knew someone must have thought of it before. ;-)
>
>[snip]
>
>> The thing that impacted me the most about this architecture was that
>> sorting was practically built in.  So all the intermediary computers
>> had to do was merge the sorted result sets from its lower level
>> computers.  Blazing!
>
>They effectively implemented a binary tree in hardware. One hell of an
>indexing mechanism. :-)
>
>> I miss that old beast.  But I certainly cannot afford the multimillion
>> dollars required to get one for myself.
>
>I suppose it would depend on how many computers you want to have in this
>cluster. The main reason why clusters are getting popular recently (albeit
>not yet for databases, or so it would seem) is because it is cheaper than
>anything else with similar performance.
>
>The main question remains - are there any plans to implement something
>similar to this with PostgreSQL? I would volunteer to help with some
>coding, if a "group" was formed to work on this "clustering" module.
>
>Regards.
>
>Gordan





Re: [GENERAL] Help with Database Recovery

2000-11-30 Thread Tom Lane

"Hancock, David (DHANCOCK)" <[EMAIL PROTECTED]> writes:
> Sorry I didn't give more detail.  OS is Linux 2.2 kernel, PostgreSQL is
> 6.5.3.  The problem is that I copied the .../base/* directories elsewhere in
> preparation for making base a symlink to a different filesystem with more
> space.  I then screwed up and removed everything in /var/lib/pgsql, not just
> the base directories.  This necessitated a reinstall of PostgreSQL.

> I know, I know ... it was a very stupid maneuver on my part, but it's a
> strange feeling to know that I've GOT the database files, I just can't use
> 'em.  Yet.

Unfortunately, you've only got *part* of the database.  The above
maneuver destroyed your pg_log file, which is essential.  Without it,
you've got a lot of tuples but you don't know which ones are valid.

If you did a VACUUM just before all this, then there's a reasonable
chance that the tuples you have left are mostly just valid ones.
Otherwise I'd say it's hopeless.  In any case you will not be able
to reconstruct data that you can trust except after painstaking
manual examination.

How far back was your last regular whole-file-system backup?  Restoring
all of /var/lib/pgsql off that is likely to be your best shot at getting
to a state that's somewhat trustworthy.

regards, tom lane



Re: [GENERAL] Can PostGreSQL handle 100 user database - more info

2000-11-30 Thread Elmar Haneke



[EMAIL PROTECTED] wrote:
> 
> I am not sure what an ole db provider is? This must be another method of
> talking to the server from a client application. What advantages does it have?


If you intend to use ADO you need an OLE-DB provider.

> I have the open source ODBC client (and I know a Java version exists), it seems
> ok but I don't know if it handles things like transactions and other advanced
> functions.


The ODBC might cause some "interesting" trouble since Visual Basic
tends to open multiple connections to the server. This has two
disadvantages:

1. While using transaction isolation (not reading uncommitted data),
you cannot read the data written on one connection over another one.
If this happens, you might not immediately notice the rubbish
happening.

2. With 100 users it might be significant if there are 500 simultaneous
connections open. At the least, you have to raise the connection limit.
 
Elmar



RE: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Francis Solomon

Hi,

2Gb file *systems* have been supported forever and a day on Linux. ext2
supports this without batting an eyelid. 2Gb *files* have not been
supported very well or very long on 32-bit systems. Essentially you need
a recent 2.4.0-test kernel version (test7 and up) or a patched 2.2.x
kernel (more likely if you're in a production environment). For more
information, see http://www.suse.de/~aj/linux_lfs.html

2Gb memory is a limitation under x86 (ia32) Linux in current production
kernels (2.2.x).
Again, the new 2.4.0 kernels go one better by using Intel's PAE
(Physical Address Extension) mode on Pentium Pro CPUs and newer. This
raises the available memory on Linux to 64Gb. Of course, 2.4.0-testx
kernels are not production quality, but it's a good taste of what's
imminent.

Hope this helps.

Francis Solomon

>
> Note that this is a Linux limitation ... and even then, I'm
> not quite sure
> how accurate that is anymore ... the *BSDs have supported >2gb file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering alot of IBMs
> servers support
> up to 4 or 8GB of RAM ...




Re: [GENERAL] Unanswered questions about Postgre

2000-11-30 Thread Joel Burton



On 30 Nov 2000, at 11:58, Joe Kislo wrote:
> If you don't believe me, here's two fully SQL-92
> compliant databases, Oracle and interbase, which do not exhibit this
> behavior: 

Ummm... having lots of experience with it, I can say many things
about Oracle, but "fully SQL-92 compliant" sure isn't one of them. :-)

--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



RE: [GENERAL] Help with Database Recovery

2000-11-30 Thread Hancock, David (DHANCOCK)

Tom and others:  Thanks for the guidance.  We rebuilt and restored, and will
just live with an earlier version of the data, sadder but wiser.  It was
good to (a) learn about pg_log and (b) realize that pg_dump and pg_dumpall
are our good friends and we should use them.

Today I also learned that starting a subject line with "Help" diverts a
message from going to the list directly.  I see why this is a good idea.

Again, thanks, all.

Cheers!
--
David Hancock | [EMAIL PROTECTED] | 410-266-4384


-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 30, 2000 2:24 PM
To: Hancock, David (DHANCOCK)
Cc: '[EMAIL PROTECTED]'
Subject: Re: [GENERAL] Help with Database Recovery 


"Hancock, David (DHANCOCK)" <[EMAIL PROTECTED]> writes:
> Sorry I didn't give more detail.  OS is Linux 2.2 kernel, PostgreSQL is
> 6.5.3.  The problem is that I copied the .../base/* directories elsewhere
> in preparation for making base a symlink to a different filesystem with
> more space.  I then screwed up and removed everything in /var/lib/pgsql,
> not just the base directories.  This necessitated a reinstall of
> PostgreSQL.

> I know, I know ... it was a very stupid maneuver on my part, but it's a
> strange feeling to know that I've GOT the database files, I just can't use
> 'em.  Yet.

Unfortunately, you've only got *part* of the database.  The above
maneuver destroyed your pg_log file, which is essential.  Without it,
you've got a lot of tuples but you don't know which ones are valid.

If you did a VACUUM just before all this, then there's a reasonable
chance that the tuples you have left are mostly just valid ones.
Otherwise I'd say it's hopeless.  In any case you will not be able
to reconstruct data that you can trust except after painstaking
manual examination.

How far back was your last regular whole-file-system backup?  Restoring
all of /var/lib/pgsql off that is likely to be your best shot at getting
to a state that's somewhat trustworthy.

regards, tom lane



Re: [GENERAL] Can PostGreSQL handle 100 user database - more info

2000-11-30 Thread Adam Lang

But there is an OLE DB provider for ODBC, so you can use ADO with an ODBC
driver; it's just not as nice.

As for the multiple connections thing, I do not know anything about that.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Elmar Haneke" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, November 30, 2000 1:59 PM
Subject: Re: [GENERAL] Can PostGreSQL handle 100 user database - more info


>
>
> [EMAIL PROTECTED] wrote:
> >
> > I am not sure what an ole db provider is? This must be another method of
> > talking to the server from a client application. What advantages does it
> > have?
>
>
> If you intend to use ADO you need an OLE-DB provider.
>
> > I have the open source ODBC client (and I know a Java version exists),
> > it seems ok but I don't know if it handles things like transactions
> > and other advanced functions.
>
>
> The ODBC might cause some "interesting" trouble since Visual Basic
> tends to open multiple connections to the server. This has two
> disadvantages:
>
> 1. While using transaction isolation (not reading uncommitted data),
> you cannot read the data written on one connection over another one.
> If this happens, you might not immediately notice the rubbish
> happening.
>
> 2. With 100 users it might be significant if there are 500 simultaneous
> connections open. At the least, you have to raise the connection limit.
>
> Elmar




Re: [GENERAL] Database cluster?

2000-11-30 Thread Alain Toussaint

> Somebody mentioned the fact that postgres uses IPC for communicating
> between processes. I think there are tools for clustering (I am not sure if
> Mosix supports transparently allowing IPC across nodes) which can work
> around that.

One of those tools is distributed IPC, but it only works with Linux,
AFAIK; the software there is just a patch to the Linux kernel and a
daemon.

Alain




Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Trond Eivind GlomsrØd

Marc SCHAEFER <[EMAIL PROTECTED]> writes:

> On Thu, 30 Nov 2000, The Hermit Hacker wrote:
> 
> > Note that this is a Linux limitation ... and even then, I'm not quite sure
> > how accurate that is anymore ... the *BSDs have supported >2gb file
> > systems for ages now, and, since IBM supports Linux, I'd be shocked if
> > there was a 2GB limit on memory, considering alot of IBMs servers support
> > up to 4 or 8GB of RAM ...
> 
> Linux 2.2.x on ix86 only supports files upto 2 GB. 

This support has been backported and is available in some kernels
shipped with Red Hat Linux, and has been so for some time. Possibly
others.


-- 
Trond Eivind Glomsrød
Red Hat, Inc.



Re: [GENERAL] Database cluster?

2000-11-30 Thread Peter Korsgaard

On Thu, 30 Nov 2000, Doug Semig wrote:

> I actually analyzed it once.  I came to the conclusion that to do it right
> it would be easier to make an almost entirely new db but use the same
> external interfaces as PostgreSQL.  To do a kludge of it, one might just
> implement a tier that sits between the user and a bunch of standard
> PostgreSQL backends.
> 
> It'd make a neat companion project, though.  Like PG/Enterprise or
> PG/Warehouse or something.

I'm currently developing a simple version of such a system as a
university project. It is a fairly simple approach, with a proxy or
distributor in front of a bunch of standard PostgreSQL database servers.

The proxy monitors and forwards the requests from the clients to the
database servers. If it is a read-only request, the query is forwarded to
the database server currently experiencing the lowest load/most free
memory; otherwise it is sent to all database servers.

This approach obviously only performs well in systems with a high ratio of
read-only queries, such as search engines and so on.
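The routing rule described above can be sketched in a few lines. This is a
hypothetical illustration, not the project's actual code: `Server`,
`is_read_only` and `route` are invented names, and real query
classification would need more than a SELECT-prefix check.

```python
# Hypothetical sketch of the read-one / write-all routing rule:
# read-only queries go to the least-loaded backend, writes to all.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    load: float  # lower means more capacity free

def is_read_only(sql: str) -> bool:
    # Crude classification: treat only SELECT statements as read-only.
    return sql.lstrip().upper().startswith("SELECT")

def route(sql: str, servers: list) -> list:
    if is_read_only(sql):
        return [min(servers, key=lambda s: s.load)]
    return list(servers)  # writes must reach every replica

servers = [Server("db1", 0.9), Server("db2", 0.2), Server("db3", 0.5)]
print([s.name for s in route("SELECT * FROM t1", servers)])     # ['db2']
print([s.name for s in route("UPDATE t1 SET x = 1", servers)])  # all three
```

As noted above, this pays off only when most traffic is read-only, since
every write costs one round trip per backend.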

--
Bye, Peter Korsgaard




Re: [GENERAL] Can PostGreSQL handle 100 user database?

2000-11-30 Thread Mr. Shannon Aldinger


On Thu, 30 Nov 2000, The Hermit Hacker wrote:

>
> Note that this is a Linux limitation ... and even then, I'm not quite sure
> how accurate that is anymore ... the *BSDs have supported >2gb file
> systems for ages now, and, since IBM supports Linux, I'd be shocked if
> there was a 2GB limit on memory, considering alot of IBMs servers support
> up to 4 or 8GB of RAM ...
>
Linux kernel 2.2.x unpatched has the 2GB file size and 1GB RAM limits.
Patched with the lfs package, the 2GB file size limit goes away.
The lfs patch needs to be applied against gnu-libc as well. This alone may
not avoid the 2GB limit; the application must use lseek64 instead of
lseek, for example. lfs will be included by default in the upcoming 2.4.x
kernels, which also support more RAM. I'm fairly certain RAM patches
exist for the 2.2.x series.

I have just one question: will PostgreSQL 7.1 include full support
for using lseek64, stat64, etc.?





Re: [GENERAL] Table & Column descriptions

2000-11-30 Thread Dale Anderson

That is exactly what I was looking for.  Thanks a lot.

Dale.

>>> "Joel Burton" <[EMAIL PROTECTED]> 11/30/00 01:29PM >>>
\d+  should show you the table schema with comments.
If you're looking for the actual data, it's in pg_description. The 
objoid field matches the oid field in pg_attribute (which is the 
"fields" table for pgsql).

On 30 Nov 2000, at 11:17, Dale Anderson wrote:

>I am able to add table and column descriptions, and I am also able
>to retrieve the table description.  The problem is that I cannot
>find a way to retrieve the description comments on table
>columns.  Any assistance would be greatly appreciated.
> 
> Dale.
> 


--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED] 
Support Center of Washington (www.scw.org)




Re: [HACKERS] Re: [GENERAL] PHPBuilder article -- Postgres vs MySQL

2000-11-30 Thread Don Baccus

At 09:44 AM 11/21/00 -0700, Tim Uckun wrote:

>What about the php module? Does it take advantage of API?

I don't know.  If not, though, there wouldn't be much point in using
AOLserver, since the simple and efficient database API is the main
attraction.  So I think there's a pretty good chance it does.



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



[GENERAL] Re: [HACKERS] Indexing for geographic objects?

2000-11-30 Thread selkovjr

Tom Lane wrote:
> Michael Ansley <[EMAIL PROTECTED]> writes:
> > Remember also that the GiST library has been integrated into PG, (my brother
> > is doing some thesis work on that at the moment),
> 
> Yeah?  Does it still work?

You bet -- otherwise you would be hearing from me. I depend on it quite
heavily and check it with almost every release. I am now current
with 7.0.2 -- this time it required some changes, although not in the C
code. And that's pretty amazing: I was only screwed once since
postgres95 -- by a beta version I don't remember now; then I
complained and the problem was fixed. I don't even know whom I owe
thanks for that.

> Since the GIST code is not tested by any standard regress test, and is
> so poorly documented that hardly anyone can be using it, I've always
> assumed that it is probably suffering from a severe case of bit-rot.
> 
> I'd love to see someone contribute documentation and regression test
> cases for it --- it's a great feature, if it works.

The bit rot fortunately did not happen, but the documentation I
promised Bruce many months ago is still "in the works" -- meaning,
something interfered and I haven't had a chance to start. Like a
friend of mine muses all the time, "Promise doesn't mean
marriage". Boy, do I feel guilty.

It's a bit better with the testing. I am not sure how to test the
GiST directly, but I have adapted the current version of regression
tests for the data types that depend on it. One can find them in my
contrib directory, under test/ (again, it's
http://wit.mcs.anl.gov/~selkovjr/pg_extensions/contrib.tgz)

It would be nice if at least one of the GiST types became a built-in
(that would provide for more intensive testing), but I can also
think of the contrib code being (optionally) included into the main
build and regression test trees. The top-level makefile can have a
couple of special targets to build and test the contribs. I believe my
version of the tests can be a useful example to other contributors
whose code is already in the source tree.

--Gene



Re: [HACKERS] Re: [GENERAL] PHPBuilder article -- Postgres vs MySQL

2000-11-30 Thread Don Baccus

At 07:50 PM 11/30/00 -0600, GH wrote:
>On Thu, Nov 23, 2000 at 07:58:29AM -0800, some SMTP stream spewed forth: 
>> At 09:44 AM 11/21/00 -0700, Tim Uckun wrote:
>> 
>> >What about the php module? Does it take advantage of API?
>> 
>> I don't know.  If not, though, there wouldn't be much point in using
>> AOLserver, since the simple and efficient database API is the main
>> attraction.  So I think there's a pretty good chance it does.
>> 
>
>Through the course of another thread on the lists we have concluded that
>PHP does not support the AOLServer (or any other similar) database API.
>The "blockage" is that PHP includes its own database functions, albeit
>they are based on the Postgres, MySQL, etc. APIs individually. 
>
>I am considering looking into urging an integration of PHP and
>AOLServer's connection pooling (for lack of a better word) stuff.

Well, meanwhile I've gotten confirmation from folks in the PHP world 
(via an openacs forum) that it still isn't threadsafe, though there's
an effort underway to track down the problems.  I don't know how close
to solving this they are.



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



[GENERAL] Re: [HACKERS] Indexing for geographic objects?

2000-11-30 Thread Hannu Krosing

Franck Martin wrote:
> 
> It seems that your code is exactly what I want.
> 
> I have already created geographical objects which contains MBR(Minimum
> Bounding Rectangle) in their structure, so it is a question of rewriting
> your code to change the access to the cube structure to the MBR structure
> inside my geoobject. (cf http://fmaps.sourceforge.net/) Look in the CVS for
> latest. I have been slack lately on the project, but I'm not forgetting it.
> 
> Quickly, I ran through the code, and I think your cube is strictly speaking a
> box, which is also an MBR.
> 
> However, I didn't see the case of intersection, which is the main question
> when you want to display objects that are visible inside a box.
> 
> I suppose your code is under the GPL, and that you have no problem with me
> using it, provided I put your name and credits somewhere.

It would be much better if it were under the standard PostgreSQL license
and included in the standard distribution.

As Tom said, a working GiST would be a great feature.

Now if only someone would write the regression tests ;)

BTW, the regression tests for pl/pgsql seem to be somewhat sparse as
well, missing at least some types of loops, possibly more.

> Franck Martin
> Database Development Officer
> SOPAC South Pacific Applied Geoscience Commission
> Fiji
> E-mail: [EMAIL PROTECTED]
> Web site: http://www.sopac.org/
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, 25 November 2000 8:56
> To: Franck Martin
> Subject: Re: [HACKERS] Indexing for geographic objects?
> 
> It is probably possible to hook up an extension directly with the
> R-tree methods available in postgres -- if you stare at the code long
> enough and figure how to use the correct strategies. I chose an easier
> path years ago and I am still satisfied with the results. Check out
> the GiST -- a general access method built on top of R-tree to provide
> a user-friendly interface to it and to allow indexing of more abstract
> types, for which straight R-tree is not directly applicable.
> 
> I have a small set of complete data types, of which a couple
> illustrate the use of GiST indexing with the geometrical objects, in:
> 
> http://wit.mcs.anl.gov/~selkovjr/pg_extensions/
> 
> If you are using a pre-7.0 postgres, grab the file contrib.tgz,
> otherwise take contrib-7.0.tgz. The difference is insignificant, but
> the pre-7.0 version will not fit the current schema. Unpack the source
> into postgresql-*/contrib and follow instructions in the README
> files. The types of interest for you will be seg and cube. You will
> find pointers to the original sources and docs in the CREDITS section
> of the README file. I also have a version of the original example code
> in pggist-patched.tgz, but I did not check if it works with current
> postgres. It should not be difficult to fix it if it doesn't -- the
> recent development in the optimizer area made certain things
> unnecessary.
> 
> You might want to check out a working example of the segment data type at:
> 
> http://wit.mcs.anl.gov/EMP/indexing.html
> 
> (search the page for 'KM')
> 
> I will be glad to help, but I would also recommend sending more
> sophisticated questions to Joe Hellerstein, the leader of the original
> postgres team that developed GiST. He was very helpful whenever I
> turned to him during the early stages of my data type project.
> 
> --Gene
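
For readers following along, using one of those contrib types with a GiST index looks roughly like this (illustrative table and index names; assumes the seg type from the package above has been installed):

```sql
-- Illustrative use of the contrib 'seg' type with a GiST index.
CREATE TABLE measurements (id int, range seg);
CREATE INDEX measurements_range_ix ON measurements USING gist (range);

-- Overlap queries can then use the index:
SELECT * FROM measurements WHERE range && '5.0 .. 6.0'::seg;
```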