On 10.07.2007 03:09, novnov wrote:
I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I
would like to install 8.2, but it's not offered in the list. I think 8.2 is
offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
the recommendation will be to com
Hello All,
I am trying to run a script to create a database from a batch program and don't
want to supply the password every time.
So I tried to set up the pgpass.conf file.
The file is kept in the user profile's Application Data folder,
i.e.
C:\Documents and Settings\postgres\Application Data\postgresql\pgpass.conf
file co
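For reference, each pgpass.conf line has the form
hostname:port:database:username:password, e.g. (values hypothetical):

    localhost:5432:*:postgres:secret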
Hi,
I am not moving from 6.10 to anything else for now.
Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want:
everything working better and better (via updates) and no major changes!
Always getting the latest version is definitely asking for trouble.
I don't need the latest
On 7/9/07, Zlatko Matic <[EMAIL PROTECTED]> wrote:
Does plpgsql have something equivalent to plperl's $_SHARED or the plpythonu
global dictionary GD?
No, but you can use a table to emulate this, or a temp table.
depesz
--
http://www.depesz.com/ - the new, better depesz
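For illustration, a minimal sketch of the temp-table approach (all names
hypothetical; the temp table must be created anew in each session):

    CREATE TEMP TABLE shared_state (key text PRIMARY KEY, val text);

    CREATE OR REPLACE FUNCTION set_shared(k text, v text) RETURNS void AS $$
    BEGIN
        -- emulate a session-global dictionary: update, insert if missing
        UPDATE shared_state SET val = v WHERE key = k;
        IF NOT FOUND THEN
            INSERT INTO shared_state VALUES (k, v);
        END IF;
    END;
    $$ LANGUAGE plpgsql;

The temp table lives for the session, much like $_SHARED does.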
On 10 Jul 2007 at 9:13, Hannes Dorbath wrote:
On 10.07.2007 03:09, novnov wrote:
> I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I
> would like to install 8.2, but it's not offered in the list. I think 8.2 is
> offered on 7.x ubuntu, and I wonder if 8.2 will be offered
On Tue, Jul 10, 2007 at 01:13:07AM -0700, Laurent ROCHE wrote:
> Hi,
>
> I am not moving from 6.10 to anything else for now.
> Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want:
> everything working better and better (via updates) and no major changes!
> Always getting th
On Tuesday 10 July 2007, novnov wrote:
> I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I
> would like to install 8.2, but it's not offered in the list. I think 8.2 is
> offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10?
> Probably the recommendation
On 10/07/2007 08:47, Ashish Karalkar wrote:
Still the batch asks for the password!
I am just not getting why it's not reading the password from the pgpass file.
Probably a silly question, but if you're using the createdb utility in
the batch file, have you inadvertently included the -W option? - t
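For illustration, a batch line that lets libpq read the password from
pgpass.conf simply omits -W (database name hypothetical):

    createdb -h localhost -U postgres mydb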
On Monday 9 July 2007, Gregory Stark wrote:
> The output of vacuum verbose can be hard to interpret; if you want help
> adjusting the fsm settings, send it here.
Using pgfouine, one gets easy-to-read reports:
http://pgfouine.projects.postgresql.org/vacuum.html
http://pgfouine.projects.po
- Original Message -
From: "Raymond O'Donnell" <[EMAIL PROTECTED]>
To: "Ashish Karalkar" <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, July 10, 2007 3:51 PM
Subject: Re: [GENERAL] pgpass.conf
On 10/07/2007 08:47, Ashish Karalkar wrote:
Still the batch asks for the password!
I am just
On 10/07/2007 11:28, Ashish Karalkar wrote:
I have set this successfully on Red Hat Linux but I am messed up on Windows
XP Professional.
Is there any other thing to do?
I'm not a guru, but maybe it's a permissions problem on the pgpass file?
Ray.
-
Ashish Karalkar wrote:
> Hello All,
>
> I am trying to run a script to create a database from a batch program
> and don't want to supply the password every time.
> So I tried to set up the pgpass.conf file.
> The file is kept in the user profile's Application Data folder,
> i.e.
> C:\Documents and Settings\postgres\Applicatio
- Original Message -
From: "Dave Page" <[EMAIL PROTECTED]>
To: "Ashish Karalkar" <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, July 10, 2007 4:25 PM
Subject: Re: [GENERAL] pgpass.conf
Ashish Karalkar wrote:
Hello All,
I am trying to run a script to create a database from a batch program
On Tue, Jul 10, 2007 at 04:34:56PM +0530, Ashish Karalkar wrote:
> >>Hello All,
> >>
> >>I am trying to run a script to create a database from a batch program
> >>and don't want to supply the password every time.
> >>So I tried to set up the pgpass.conf file.
> >>The file is kept in the user profile's Application Data folder,
Ashish Karalkar wrote:
> The batch file is run under the postgres user; the owner of the pgpass.conf
> file is also postgres.
> As far as I know, permission checking is not done on Windows;
> anyway, the owner is the same, so I don't think there is any
> permission problem
>
OK - have you tried 127.
On Jul 7, 2007, at 8:16 AM, Carmen Martinez wrote:
Please, I need to know where the catalog tables (pg_class,
pg_attrdef...) are located in the postgresql rdbms, because I
cannot see them in the pgAdminII interface like other tables or
objects, and I cannot find any reference about wher
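For what it's worth, the catalogs live in the pg_catalog schema and can be
listed with an ordinary query, e.g.:

    SELECT relname
    FROM pg_class
    WHERE relnamespace = (SELECT oid FROM pg_namespace
                          WHERE nspname = 'pg_catalog');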
Hello,
System: Red Hat Linux 4, 64-bit, running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory
alloc request size 9000688640
After some research, it seems to be related t
On Tue, Jul 10, 2007 at 08:40:24AM +0400, alexander lunyov wrote:
>> Just to clarify: lower() on both sides of a comparison
>> should still work as expected on multibyte encodings? It's
>> been suggested here before.
>
> lower() on both sides also does not work in my case; it still searches for
Hello.
OK. I created a new table that holds information about rows inserted/updated in
a transaction.
I realized that the AFTER row-level trigger always fires before the AFTER
statement-level trigger.
Therefore I can use the row-level trigger to populate the auxiliary table which
holds information about affe
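For illustration, a minimal sketch of that pattern (all names hypothetical):

    CREATE TABLE affected_rows (row_id integer);

    CREATE OR REPLACE FUNCTION log_affected() RETURNS trigger AS $$
    BEGIN
        -- record each affected row for the statement-level trigger to read
        INSERT INTO affected_rows VALUES (NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_log_affected AFTER INSERT OR UPDATE ON some_table
        FOR EACH ROW EXECUTE PROCEDURE log_affected();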
Hello.
Is there any free program/utility for batch imports from .csv files, that
can be easily scheduled for daily inserts of data to PostgreSQL tables?
Regards,
Zlatko
Karsten Hilbert wrote:
Just to clarify: lower() on both sides of a comparison
should still work as expected on multibyte encodings? It's
been suggested here before.
lower() on both sides also does not work in my case; it still searches for
case-sensitive data. The string in this example has first
On Tue, 2007-07-10 at 14:32 +0200, Zlatko Matic wrote:
> Hello.
> Is there any free program/utility for batch imports from .csv files, that
> can be easily scheduled for daily inserts of data to PostgreSQL tables?
> Regards,
>
> Zlatko
>
>
On Tuesday 10 July 2007, Zlatko Matic wrote:
> Is there any free program/utility for batch imports from .csv files, that
> can be easily scheduled for daily inserts of data to PostgreSQL tables?
COPY itself would do the job, but you can also use pgloader:
http://pgfoundry.org/projects/pgload
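For illustration, a daily load could be as simple as one COPY per file
(table name and path hypothetical), scheduled through cron or the Windows
Task Scheduler:

    COPY daily_data FROM '/data/import/daily.csv' WITH CSV HEADER;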
On Tue, 10.07.2007, at 14:32:58 +0200, Zlatko Matic wrote the following:
> Hello.
> Is there any free program/utility for batch imports from .csv files, that
> can be easily scheduled for daily inserts of data to PostgreSQL tables?
> Regards,
You can use the scheduler from your OS. For Unix-like
Hi all,
I want to use postgres to store data and large files, typically audio
files from 100 KB to 20 MB. For those files, I just need to store and
retrieve them, in an ACID way. (I don't need search, or substring, or
other functionality.)
I saw postgres offers at least two methods: a bytea column
On 7/9/07, novnov <[EMAIL PROTECTED]> wrote:
I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I
would like to install 8.2, but it's not offered in the list. I think 8.2 is
offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably
the recommendation wil
Alain Peyrat wrote:
Hello,
System: Red Hat Linux 4, 64-bit, running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory alloc
request size 9000688640
After some research, it see
On 7/10/07, Benoit Mathieu <[EMAIL PROTECTED]> wrote:
I saw postgres offers at least two methods: a bytea column with TOAST, or
the large objects API.
From the documentation:
All large objects are placed in a single system table called pg_largeobject.
PostgreSQL also supports a storage system calle
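For illustration, minimal sketches of both methods (names and path
hypothetical):

    -- bytea: the file content is an ordinary column value
    CREATE TABLE audio_files (id serial PRIMARY KEY, filename text, data bytea);

    -- large object: server-side import, returns the new object's OID
    SELECT lo_import('/tmp/sample.ogg');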
alexander lunyov <[EMAIL PROTECTED]> writes:
> With this I just wanted to say that lower() doesn't work at all on
> Russian unicode characters,
In that case you're using the wrong locale (ie, not russian unicode).
Check "show lc_ctype".
Or [ checks back in thread... ] maybe you're using the w
Version 7.4.12
AIX 5.3
Scenario - a large table was not being vacuumed correctly; there are now ~
15 million dead tuples that account for approximately 20%-25% of the
table. Vacuum appears to be stalling - it ran for approximately 10 hours
before I killed it. I hooked up to the process with gdb and thi
On Tue, 10 Jul 2007, Alexander Staubo wrote:
> My take: Stick with TOAST unless you need fast random access. TOAST
> is faster, more consistently supported (e.g., in Slony) and easier
> to work with.
Toasted bytea columns also have some other disadvantages:
1.
It is impossible to create its valu
Hello,
I am new to using C in PostgreSQL. My problem is that when I compile my
program code, it generates the following error message:
fu01.o(.idata$3+0xc): undefined reference to
`libpostgres_a_iname'nmth00.o(.idata$4+0x0): undefined reference to
`_nm__SPI_processed'collect2: ld returne
Brad Nicholson <[EMAIL PROTECTED]> writes:
> Scenario - a large table was not being vacuumed correctly; there are now ~
> 15 million dead tuples that account for approximately 20%-25% of the
> table. Vacuum appears to be stalling - it ran for approximately 10 hours
> before I killed it. I hooked up to t
On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
> Oh, I forgot to mention --- you did check that vacuum_mem is set to
> a pretty high value, no? Else you might be doing a lot more
> btbulkdelete scans than you need to.
>
> regards, tom lane
What would you define as high
Oh, I forgot to mention --- you did check that vacuum_mem is set to
a pretty high value, no? Else you might be doing a lot more
btbulkdelete scans than you need to.
regards, tom lane
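For illustration, raising it for one session before the vacuum (table name
hypothetical; vacuum_mem is the pre-8.0 name, in KB, of what later became
maintenance_work_mem):

    SET vacuum_mem = 262144;   -- 256 MB, session-local
    VACUUM VERBOSE bigtable;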
Brad Nicholson <[EMAIL PROTECTED]> writes:
> On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
>> Oh, I forgot to mention --- you did check that vacuum_mem is set to
>> a pretty high value, no? Else you might be doing a lot more
>> btbulkdelete scans than you need to.
> What would you define as
On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote:
> Brad Nicholson <[EMAIL PROTECTED]> writes:
> > On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote:
> >> Oh, I forgot to mention --- you did check that vacuum_mem is set to
> >> a pretty high value, no? Else you might be doing a lot more
> >> btbu
Brad Nicholson <[EMAIL PROTECTED]> writes:
> On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote:
>> How big is this index again?
> Not sure which one it's working on - there are 6 of them, each ~
> 2.5 GB
OK, about 300K pages each ... so even assuming the worst case that
each page requires a phy
Hello
I have a similar problem with vacuum on 8.1.
I have a 256 MB table. pgstattuple reports 128 MB free. I stopped vacuum
after 1 hour (maintenance_work_mem = 160M). I had no more time.
Regards
Pavel Stehule
2007/7/10, Tom Lane <[EMAIL PROTECTED]>:
Brad Nicholson <[EMAIL PROTECTED]> writes:
> On T
On Sat, Jul 07, 2007 at 05:16:56AM -0700, Gabriele wrote:
> Let's have a server which feeds data to multiple slaves, usually using
> direct online connections. Now, we may want to allow those clients to
> sync the data to a local replica, work offline and then resync the
> data back to the server. Wh
Hello
I have a similar problem with vacuum on 8.1.
I have a 256 MB table. pgstattuple reports 128 MB free. I stopped vacuum
after 1 hour (maintenance_work_mem = 160M). I had no more time.
I tested it on 8.3 with random data. Vacuum from 190 MB to 94 MB needed
30 sec. It's much better. It isn't 100% comparable
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alvaro Herrera
Sent: Friday, July 06, 2007 9:49 AM
To: Nykolyn, Andrew
Cc: John DeSoi; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Nested Transactions in PL/pgSQL
Nykolyn, Andrew wrote:
> My real issu
Richard Huxton <[EMAIL PROTECTED]> writes:
> Alain Peyrat wrote:
>> Initial problem:
>>
>> # pg_dump -O dbname -Ft -f /tmp/database.tar
>> pg_dump: query to get table columns failed: ERROR: invalid memory alloc
>> request size 9000688640
>>
>> After some research, it seems to be related to a co
Thanks all of you. It does seem like the backport is the way to go.
So now I have 8.2 and some new postgres/linux newb questions.
I can safely remove 8.1 after moving data using synaptic, i.e. 8.2 shouldn't
be dependent on 8.1 at all?
I don't understand how postgres is installed with these packa
On 07.07.2007, at 06:16, Gabriele wrote:
Let's have a server which feeds data to multiple slaves, usually using
direct online connections. Now, we may want to allow those clients to
sync the data to a local replica, work offline and then resync the
data back to the server. Which is the easiest way
Tom Lane wrote:
Richard Huxton <[EMAIL PROTECTED]> writes:
Alain Peyrat wrote:
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory alloc
request size 9000688640
After some research, it seems to be related to a corr
My primary key is neither SERIAL nor a SEQUENCE.
CONSTRAINT pk_dig PRIMARY KEY (dig_id)
This is the clause that I have for my primary key in the create table
script.
thanks,
~Harpreet
On 7/10/07, Ron St-Pierre <[EMAIL PROTECTED]> wrote:
Harpreet Dhaliwal wrote:
> Hi,
>
> I keep getting this
On Saturday 07 July 2007 11.34:04 Евгений Кононов wrote:
> Hello!
>
> How can I force POSTGRES to use all virtual processors with
> Hyper-Threading enabled?
If your operating system is able to schedule the threads/processes across
CPUs, PostgreSQL will use them. Often, the limit is disk, not CPU, s
On Saturday 07 July 2007 14.16:56 Gabriele wrote:
> I know this is a delicate topic which must be approached cautiously.
>
> Let's have a server which feeds data to multiple slaves, usually using
> direct online connections. Now, we may want to allow those clients to
> sync the data to a local replic
Richard Huxton <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> FWIW, a look in the source code shows that the 'corrupted item pointer'
>> message comes only from PageIndexTupleDelete, so that indicates a
>> damaged index which should be fixable by reindexing.
> Tom - could it be damage to a share
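For illustration, the repair suggested above is a single statement (index
name hypothetical):

    REINDEX INDEX my_damaged_index;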
On Tue, Jul 10, 2007 at 10:50:39AM -0700, novnov wrote:
>
> Thanks all of you. It does seem like the backport is the way to go.
>
> So now I have 8.2 and some new postgres/linux newb questions.
>
> I can safely remove 8.1 after moving data using synaptic, i.e. 8.2 shouldn't
> be dependent on 8.1
I finally figured out the actual problem, phew.
It's something like two different transactions seeing the same snapshot
of the database.
Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31.
Now when Transaction 2 started and read max(dig_id) it was still 30,
and by the
On Jul 10, 2007, at 13:22 , Harpreet Dhaliwal wrote:
Transaction 1 started, saw max(dig_id) = 30 and inserted new
dig_id=31.
Now when Transaction 2 started and read max(dig_id) it was
still 30,
and by the time it tried to insert 31, 31 was already inserted by
Transaction 1 and hence
Does Postgres have any native support for hierarchical data storage?
I'm familiar with the Adjacency List technique, but am trying to
determine whether or not Nested Sets would make sense for our
application or not. I understand that Nested Sets might be better
for high read applications,
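For illustration, minimal sketches of the two models (all names
hypothetical):

    -- adjacency list: each row points at its parent
    CREATE TABLE category (
        id     serial PRIMARY KEY,
        parent integer REFERENCES category (id),
        name   text
    );

    -- nested sets: each node stores an interval; the descendants of a
    -- node n are the rows WHERE lft BETWEEN n.lft AND n.rgt
    CREATE TABLE category_ns (
        id   serial PRIMARY KEY,
        lft  integer NOT NULL,
        rgt  integer NOT NULL,
        name text
    );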
Matthew Hixson wrote:
Does Postgres have any native support for hierarchical data storage?
I'm familiar with the Adjacency List technique, but am trying to
determine whether or not Nested Sets would make sense for our
application or not. I understand that Nested Sets might be better for
high
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31.
> Now when Transaction 2 started and read max(dig_id) it was still 30,
> and by the time it tried to insert 31, 31 was already inserted by
> Transaction 1 and hence the
On Jul 10, 2007, at 13:51 , Richard Huxton wrote:
Matthew Hixson wrote:
Does Postgres have any native support for hierarchical data
storage? I'm familiar with the Adjacency List technique, but am
trying to determine whether or not Nested Sets would make sense
for our application or not.
Thanks a lot for all your suggestions, gentlemen.
I changed it to a SERIAL column and all the pain has been automatically
alleviated :)
Thanks a ton.
~Harpreet
On 7/10/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Harpreet Dhaliwal" <[EMAIL PROTECTED]> writes:
> Transaction 1 started, saw max(dig_id)
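For illustration, the sequence-backed key that removes the race (table name
hypothetical); concurrent inserts each draw a distinct value from the
implicit sequence instead of racing on max():

    CREATE TABLE dig (
        dig_id serial PRIMARY KEY
    );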
Richard Huxton wrote:
Alain wrote:
Hello,
System: Red Hat Linux 4, 64-bit, running postgres-7.4.16 (production)
Initial problem:
# pg_dump -O dbname -Ft -f /tmp/database.tar
pg_dump: query to get table columns failed: ERROR: invalid memory
alloc request size 9000688640
After so
Hi,
How can I have two different transactions in a plperlu function?
My purpose is as follows:
Transaction 1 does a series of inserts in tbl_abc.
Transaction 2 updates some columns in tbl_abc, fetching records from some
other table.
I basically want 2 independent transactions in my function s
So, I'm working on a script that does PITR and basing it off the one here:
http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php
(BTW, thanks for posting that, Rajesh.)
My frustration comes from the output format of pg_stop_backup().
Specifically, it outputs a string like this:
550
Jasbinder Singh Bali wrote:
Hi,
How can I have two different transactions in a plperlu function?
My purpose is as follows:
Transaction 1 does a series of inserts in tbl_abc.
Transaction 2 updates some columns in tbl_abc, fetching records from some
other table.
You'll have to connect back to
On Jul 10, 2007, at 14:41 , Jasbinder Singh Bali wrote:
I basically want 2 independent transactions in my function so that
1 commits as soon as it is done and 2 doesn't
depend on it at all.
If they're truly independent, I'd write them as two separate
functions, possibly calling both of t
You mean to say: keep using spi_exec as long as I want everything in the same
transaction, and at the point where I want a separate transaction, use DBI?
On 7/10/07, Richard Huxton <[EMAIL PROTECTED]> wrote:
Jasbinder Singh Bali wrote:
> Hi,
>
> How can I have two different transactions in a plperlu func
On 7/10/07, Евгений Кононов <[EMAIL PROTECTED]> wrote:
Hello, Andrej.
Privet ;) ... not that I speak any Russian, really.
ARB> What OS are you using, and what's hyper-trading? Hyper threading
ARB> by any chance? That's the OSes responsibility, not the databases.
I'm using Fedora Core
Jasbinder Singh Bali wrote:
You mean to say: keep using spi_exec as long as I want everything in the same
transaction, and at the point where I want a separate transaction, use DBI?
Yes - if you have two functions A and B, then do everything as normal in
each, except you call function B using dblink() from f
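For illustration, a minimal sketch of that call (connection string and
function name hypothetical, func_b assumed to return text); dblink runs on
its own connection and therefore in its own transaction:

    SELECT * FROM dblink('dbname=mydb', 'SELECT func_b()') AS t(result text);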
Hi, All,
I'm working on a GIS project using PostgreSQL and PostGIS. In the project I
need to find locations of about 12K addresses (the process is referred to as
geocoding). I wrote a script to perform this task by calling a procedure
"tiger_geocoding" that is provided by PostGIS. My script
Ben wrote:
So, I'm working on a script that does PITR and basing it off the one here:
http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php
(BTW, thanks for posting that, Rajesh.)
My frustration comes from the output format of pg_stop_backup().
Specifically, it outputs a string like
AlJeux wrote:
Richard Huxton wrote:
1. Have you had crashes or other hardware problems recently?
No crash, but we changed our server (which seems to be the cause).
The first try was a file-system copy to reduce downtime, as both were the
same 7.4.x version, but the result was not working (maybe relate
On Tue, 10 Jul 2007, Richard Huxton wrote:
Have you looked in the "backup history file":
http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-BASE-BACKUP
"The backup history file is just a small text file. It contains the label
string you gave to pg_start_backup, as well
On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote:
> If your operating system is able to schedule the threads/processes across
> CPUs, PostgreSQL will use them.
But notice that hyperthreading imposes its own overhead. I've not
seen evidence that enabling hyperthreading actuall
On Tue, 10 Jul 2007, Ben wrote:
"The backup history file is just a small text file. It contains the label
string you gave to pg_start_backup, as well as the starting and ending
times and WAL segments of the backup.
For instance, in the case when the backup history file from the previous
back
On 7/11/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote:
> If your operating system is able to schedule the threads/processes across
> CPUs, PostgreSQL will use them.
But notice that hyperthreading imposes its own overhead. I've
Ah, perfect, that's what I was looking for. Thanks!
On Tue, 10 Jul 2007, Greg Smith wrote:
On Tue, 10 Jul 2007, Ben wrote:
"The backup history file is just a small text file. It contains the label
string you gave to pg_start_backup, as well as the starting and ending
times and WAL segments o
"Andrej Ricnik-Bay" <[EMAIL PROTECTED]> writes:
> On 7/11/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
>> But notice that hyperthreading imposes its own overhead. I've not
>> seen evidence that enabling hyperthreading actually helps, although I
>> may have overlooked a couple of cases.
> Have a
Richard Huxton <[EMAIL PROTECTED]> writes:
>> The first try was a file-system copy to reduce downtime, as both were the
>> same 7.4.x version, but the result was not working (maybe related to
>> the architecture change 32 bits => 64 bits), so I finally dropped the db and
>> performed a dump/restore. I t
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> The log shows the following message:
> CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
>
On 7/11/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Conventional wisdom around here has been that HT doesn't help database
performance, and that IBM link might provide a hint as to why: the
only item for which they show a large loss in performance is disk I/O.
Ooops.
Thanks Tom, great summary. How
Hi, Tom,
Thanks for the reply. I'll try to provide as much information as I can.
> ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
> ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
Hi, Tom,
One more question. I'm new to PostgreSQL and not an expert in debugging. After
checking the manual, I think I need to turn on the following parameters in
order to generate debug info. Do you think doing so would give us what we need
to pinpoint the problem?
debug_assertions
trace_not
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> TopMemoryContext: 11550912 total in 1377 blocks; 123560 free (833 chunks);
> 11427352 used
Whoa ... that is a whole lot more data than I'm used to seeing in
TopMemoryContext. How many stats dump lines are there exactly (from
here to the crash report)? If
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> One more question. I'm new to PostgreSQL and not an expert in debugging.
> After checking the manual, I think I need to turn on the following parameters
> in order to generate debug info. Do you think doing so would give us what we
> need to pinpoint the
> Whoa ... that is a whole lot more data than I'm used to seeing in
TopMemoryContext. How many stats dump lines are there exactly (from
here to the crash report)?
OK, I didn't know that was a surprise. There are about 600 stats dump lines
in between.
>> The spatial database that the script is
> OK, so maybe it's dependent on the size of the table. Try generating a
test case by loading up just your schema + functions + a lot of dummy
entries generated by script.
> Is the data proprietary? If not, maybe you could arrange to send me a
dump off-list. A short test-case script would be bette
"Shuo Liu" <[EMAIL PROTECTED]> writes:
> That's what I was planning to do. I'll generate a table with dummy entries. I
> think we may try to use the smaller base table first. Once I can reproduce
> the problem I'll dump the database into a file and send it to you. Is there a
> server that I can