[PERFORM] increase performance with "noatime"?

2003-09-09 Thread JM
would it cause problems in a postgres DB if the /var/lib/psql partition is 
mounted with "noatime"?

TIA 
JM



Re: [PERFORM] increase performance with "noatime"?

2003-09-09 Thread Andrew Sullivan
On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:
> would it cause problems in a postgres DB if the /var/lib/psql partition is 
> mounted with "noatime"?

No; in fact, that's been suggested by many people.  I don't know
whether anyone's done any tests to prove that it helps.

A

-- 

Andrew Sullivan 204-4141 Yonge Street
Liberty RMS   Toronto, Ontario Canada
<[EMAIL PROTECTED]>  M2P 2A8
 +1 416 646 3304 x110




[PERFORM] Need advice about triggers

2003-09-09 Thread Mindaugas Riauba

  Hello,

  I have a small table (up to 10000 rows) and every row will be updated
once per minute. The table also has a "before update on each row" trigger
written in plpgsql. But 99.99% of the time the trigger will do nothing
to the database: it will just compare the old and new values in the row,
and those values will almost always be identical.
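
A rough sketch of a trigger of this shape (the table and column names are
only placeholders):

  CREATE FUNCTION compare_old_new() RETURNS trigger AS '
  BEGIN
      -- Compare the columns of interest; when nothing relevant has
      -- changed, simply pass the row through untouched.
      IF NEW.value = OLD.value AND NEW.status = OLD.status THEN
          RETURN NEW;
      END IF;
      -- ... otherwise do the real work here ...
      RETURN NEW;
  END;
  ' LANGUAGE 'plpgsql';

  CREATE TRIGGER compare_old_new_trg BEFORE UPDATE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE compare_old_new();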

  Now I tried a simple test and was able to do 10000 updates on a 1000-row
table in ~30s. That's practically enough, but I'd like to have more
headroom.
  Also best result I achieved by doing commit+vacuum every ~500
updates.

  How can I improve performance and will version 7.4 bring something
valuable for my task? Rewrite to some other scripting language is not
a problem. Trigger is simple enough.

  Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up
10 times.

  Thanks,

  Mindaugas




Re: [PERFORM] increase performance with "noatime"?

2003-09-09 Thread Bill Moran
Andrew Sullivan wrote:
> On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:
>
>> would it cause problems in a postgres DB if the /var/lib/psql partition is
>> mounted with "noatime"?
>
> No; in fact, that's been suggested by many people.  I don't know
> whether anyone's done any tests to prove that it helps.

http://www.potentialtech.com/wmoran/postgresql.php#results

You can see, from my _limited_ testing, that it doesn't seem to help enough
to be worth worrying about.  In this test, it actually seems to hurt
performance.  Read the whole page, though.  These tests are heavy on
writing; it's quite possible that it could improve things if your database
is a heavy-read scenario.
--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Richard Huxton
On Tuesday 09 September 2003 13:40, Mindaugas Riauba wrote:
>   Hello,
>
>   I have a small table (up to 10000 rows) and every row will be updated
> once per minute. The table also has a "before update on each row" trigger
> written in plpgsql. But 99.99% of the time the trigger will do nothing
> to the database: it will just compare the old and new values in the row,
> and those values will almost always be identical.
>
>   Now I tried a simple test and was able to do 10000 updates on a 1000-row
> table in ~30s. That's practically enough, but I'd like to have more
> headroom.
>   Also best result I achieved by doing commit+vacuum every ~500
> updates.
>
>   How can I improve performance and will version 7.4 bring something
> valuable for my task? Rewrite to some other scripting language is not
> a problem. Trigger is simple enough.

Well, try it without the trigger. If performance improves markedly, it might 
be worth rewriting in C.

If not, you're probably saturating the disk I/O - using iostat/vmstat will let 
you see what's happening. If it is your disks, you might see if moving the 
WAL onto a separate drive would help, or check the archives for plenty of 
discussion about raid setups.

>   Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up
> 10 times.

Well effective_cache_size is useful for reads, but won't help with writing. 
You might want to look at wal_buffers and see if increasing that helps, but I 
couldn't say for sure.
-- 
  Richard Huxton
  Archonet Ltd



Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Rod Taylor
>   How can I improve performance and will version 7.4 bring something
> valuable for my task? Rewrite to some other scripting language is not
> a problem. Trigger is simple enough.

Your best bet is to have additional clients connected to the database
requesting work. Approx NUMCPUs * 2 + 1 seems to be ideal. (+1 to ensure
there is something waiting when the others complete.  *2 to ensure that
you can have 50% reading from disk, 50% doing calculations)

You may simply want to put vacuum into a loop of its own so it executes
~1 second after the previous run finishes.  Work should still be going
on even though vacuum is running.




Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Mindaugas Riauba

> >   How can I improve performance and will version 7.4 bring something
> > valuable for my task? Rewrite to some other scripting language is not
> > a problem. Trigger is simple enough.
>
> Well, try it without the trigger. If performance improves markedly, it
> might be worth rewriting in C.

  Nope. Execution time is practically the same without trigger.

> If not, you're probably saturating the disk I/O - using iostat/vmstat will
> let you see what's happening. If it is your disks, you might see if moving
> the WAL onto a separate drive would help, or check the archives for plenty
> of discussion about raid setups.

  Bottleneck in this case is CPU. postmaster process uses almost 100% of
CPU.

> >   Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up
> > 10 times.
>
> Well effective_cache_size is useful for reads, but won't help with
> writing. You might want to look at wal_buffers and see if increasing that
> helps, but I couldn't say for sure.

  Disk I/O should not be a problem in this case. vmstat shows ~300kb/s write
activity.

  Mindaugas




Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Pailloncy Jean-Gérard
I did not expect so many answers to this question.
Thanks.
I found the "order by" trick to speed up the min/max functions by myself.
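
The trick replaces the aggregate with an equivalent ORDER BY/LIMIT query
that can walk an index (table and column names below are generic):

  -- min() normally forces a scan of the whole table:
  SELECT min(some_col) FROM some_table;

  -- Equivalent result, but able to use an index on some_col.
  -- For max(), use ORDER BY some_col DESC; the IS NOT NULL filter keeps
  -- NULLs (which sort as largest) from being returned in that case.
  SELECT some_col
    FROM some_table
   WHERE some_col IS NOT NULL
   ORDER BY some_col
   LIMIT 1;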

Jean-Gérard Pailloncy



Re: [PERFORM] Need advice about triggers

2003-09-09 Thread scott.marlowe
On Tue, 9 Sep 2003, Mindaugas Riauba wrote:

> 
>   Hello,
> 
>   I have a small table (up to 10000 rows) and every row will be updated
> once per minute. The table also has a "before update on each row" trigger
> written in plpgsql. But 99.99% of the time the trigger will do nothing
> to the database: it will just compare the old and new values in the row,
> and those values will almost always be identical.

If the rows aren't going to actually change all that often, perhaps you 
could program your trigger to just silently drop the update, i.e. only 
change the rows that need updating and ignore the rest?  That should speed 
things up.  Unless I'm misunderstanding your needs here.
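
In plpgsql that amounts to returning NULL from the BEFORE UPDATE row trigger
when nothing has changed, which makes the server skip the row update
entirely. A sketch, again with placeholder table and column names:

  CREATE FUNCTION drop_noop_update() RETURNS trigger AS '
  BEGIN
      -- Returning NULL from a BEFORE ... FOR EACH ROW trigger silently
      -- cancels the operation for this row, so unchanged rows cost no write.
      IF NEW.value = OLD.value AND NEW.status = OLD.status THEN
          RETURN NULL;
      END IF;
      RETURN NEW;
  END;
  ' LANGUAGE 'plpgsql';

  CREATE TRIGGER drop_noop_update_trg BEFORE UPDATE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE drop_noop_update();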




Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Richard Huxton
On Tuesday 09 September 2003 14:33, Mindaugas Riauba wrote:
> > Well, try it without the trigger. If performance improves markedly, it
> > might be worth rewriting in C.
>
>   Nope. Execution time is practically the same without trigger.

OK - no point in rewriting it then.

> > If not, you're probably saturating the disk I/O - using iostat/vmstat
> > will let you see what's happening. If it is your disks, you might see if
> > moving the WAL onto a separate drive would help, or check the archives
> > for plenty of discussion about raid setups.
>
>   Bottleneck in this case is CPU. postmaster process uses almost 100% of
> CPU.

>   Disk I/O should not be a problem in this case. vmstat shows ~300kb/s
> write activity.

Hmm - I must admit I wasn't expecting that. The closest I can get on my test 
machine here (AMD 400MHz / 256MB / IDE disk / other stuff running) is about 
20 secs.

I've attached the perl script I used - what sort of timings does it give you?

-- 
  Richard Huxton
  Archonet Ltd

#!/usr/bin/perl -w
use DBI;
use Benchmark;

# How many updates to run, how many rows in the table
my $num_upd = 10000;
my $num_rows = 1000;

my $dbname = "richardh";
my $dbh = DBI->connect("dbi:Pg:dbname=$dbname", "", "");

# Comment out create_table after the first run.
create_table($dbh, $num_rows);
my $t0 = new Benchmark;
run_updates($dbh, $num_rows, $num_upd);
my $t1 = new Benchmark;
$td = timediff($t1, $t0);
print "the code took:",timestr($td),"\n";

$dbh->disconnect;
exit;


sub create_table {
	my $dbh = shift;
	my $num_rows = shift;

	my $sth = $dbh->prepare("CREATE TABLE foo (a int4, v varchar(100), t timestamp with time zone, PRIMARY KEY (a))");
	$sth->execute or die $dbh->errstr;

	$sth = $dbh->prepare("INSERT INTO foo (a,v,t) VALUES (?,?,now())");
	my $vartext = '';
	for (my $i=0; $i<$num_rows; $i++) {
		$sth->execute($i,$vartext) or die $dbh->errstr;
		$vartext++;
	}	
}


sub run_updates {
	my $dbh = shift;
	my $num_rows = shift;
	my $num_upd  = shift;
	my $batchsize = 500;

	my $sth = $dbh->prepare("UPDATE foo SET t=now() WHERE a=?");
	for (my $i=0; $i<$num_upd; $i++) {
		if ($i % $batchsize==0) { $dbh->begin_work; }
		if ($i % $batchsize==499) {
			$dbh->commit;
			print "update $i\n";
			$dbh->do("VACUUM foo;");
		}
		$sth->execute($i % $num_rows) or die $dbh->errstr;	# one row per update
	}
}



Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Tom Lane
"Mindaugas Riauba" <[EMAIL PROTECTED]> writes:
>> Well, try it without the trigger. If performance improves markedly, it
>> might be worth rewriting in C.

>   Nope. Execution time is practically the same without trigger.

>> If not, you're probably saturating the disk I/O -

>   Bottleneck in this case is CPU. postmaster process uses almost 100% of
> CPU.

That seems very odd.  Updates should be I/O intensive, not CPU
intensive.  I wouldn't have been surprised to hear of a plpgsql trigger
consuming lots of CPU, but without it, I'm not sure where the time is
going.  Can you show us an EXPLAIN ANALYZE result for a typical update
command?
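
Something along these lines, with the real table, column and key substituted
(EXPLAIN ANALYZE actually executes the statement, so wrap it in a transaction
if the update should not stick):

  BEGIN;
  EXPLAIN ANALYZE
      UPDATE your_table SET some_col = 'new value' WHERE id = 123;
  ROLLBACK;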

regards, tom lane



Re: [PERFORM] Need advice about triggers

2003-09-09 Thread Hannu Krosing
Mindaugas Riauba wrote on Tue, 09.09.2003 at 15:40:
>   Hello,
> 
>   I have a small table (up to 10000 rows) and every row will be updated
> once per minute. The table also has a "before update on each row" trigger
> written in plpgsql. But 99.99% of the time the trigger will do nothing
> to the database: it will just compare the old and new values in the row,
> and those values will almost always be identical.
> 
>   Now I tried a simple test and was able to do 10000 updates on a 1000-row
> table in ~30s. That's practically enough, but I'd like to have more
> headroom.

Is it 10000 *rows* or 10000*1000 = 10 000 000 *rows* updated?

When I run a simple update 10 times on 1000 rows (with no trigger, which
you claim to take about the same time) it took 0.25 sec.

> Also best result I achieved by doing commit+vacuum every ~500
> updates.

It seems like you are updating more than one row at each update ?

---
Hannu




Re: [PERFORM] increase performance with "noatime"?

2003-09-09 Thread Vivek Khera
> "AS" == Andrew Sullivan <[EMAIL PROTECTED]> writes:

AS> On Tue, Sep 09, 2003 at 04:12:48PM +0800, JM wrote:
>> would it cause problems in a postgres DB if the /var/lib/psql partition is 
>> mounted with "noatime"?

AS> No; in fact, that's been suggested by many people.  I don't know
AS> whether anyone's done any tests to prove that it helps.

I honestly don't expect it to be much of an improvement, since the
number of files involved is small compared with the size of the files.
However, if you're opening/closing the files often, it might
cause you problems.  I think in the normal case where it does matter
you have pooled connections, so the open/close happens rarely.

Of course, if I had a good synthetic workload to pound on my DB, I'd
run a test... Sean?

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Greg Stark

The only connection to MVCC is that the "obvious" solution doesn't work,
namely storing a cache of the aggregate in the table information. 

So what would it take to implement this for "all" aggregates? Where I think
"all" really just means min(), max(), first(), last().

I think it would mean having a way to declare when defining an aggregate that
only specific records are necessary. For first() and last() it would only have
to indicate in some way that only the first or last record of the grouping was
necessary in the pre-existing order.

For min() and max() it would have to indicate not only that only the first or
last record is necessary but also the sort order to impose. 

Then if the optimizer determines that all the aggregates used either impose no
sort order or impose compatible sort orders, then it should insert an extra
sort step before the grouping, and flag the executor to indicate it should do
DISTINCT ON type behaviour to skip unneeded records.

Now the problem I see is if there's no index on the sort order imposed, and
the previous step wasn't a merge join or something else that would return the
records in order then it's not necessarily any faster to sort the records and
return only some. It might be for small numbers of records, but it might be
faster to just read them all in and check each one for min/max the linear way.

-- 
greg




Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Josh Berkus
Greg,

> The only connection to MVCC is that the "obvious" solution doesn't work,
> namely storing a cache of the aggregate in the table information.

Well, that solution also doesn't work if you use a WHERE condition or JOIN, 
now does it?

> So what would it take to implement this for "all" aggregates? Where I think
> "all" really just means min(), max(), first(), last().

Um, what the heck are first() and last()?   These are not supported aggregates 
... table rows are *not* ordered.

> For min() and max() it would have to indicate not only that only the first
> or last record is necessary but also the sort order to impose.

I think Tom already suggested this based on adding a field to CREATE 
AGGREGATE.  But I think implementation isn't as simple as you think it is.

> Now the problem I see is if there's no index on the sort order imposed, and
> the previous step wasn't a merge join or something else that would return
> the records in order then it's not necessarily any faster to sort the
> records and return only some. It might be for small numbers of records, but
> it might be faster to just read them all in and check each one for min/max
> the linear way.

Yes, Tom mentioned this also.  Working out the rules whereby the planner could 
decide the viability of index use is a non-trivial task.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Bruno Wolff III
On Tue, Sep 09, 2003 at 12:54:04 -0400,
  Greg Stark <[EMAIL PROTECTED]> wrote:
> 
> So what would it take to implement this for "all" aggregates? Where I think
> "all" really just means min(), max(), first(), last().

There can be other aggregates where indexes are helpful. The case of interest
is when the aggregate is such that, if a new item "contains" the current
value of the aggregate, then the new value of the aggregate will be that of
the current item. This allows you to skip looking at all of the other items
contained in the current item. Dual problems can also benefit in a similar
manner. In the case where the set is totally ordered by the "contains"
relation (as is the case for max and min), the problem is even simpler and
you can use the greatest or least element as appropriate.



Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Thomas Swan
Tom Lane wrote:

>"scott.marlowe" <[EMAIL PROTECTED]> writes:
>  
>
>>On Mon, 8 Sep 2003, Neil Conway wrote:
>>
>>
>>>As was pointed out in a thread a couple days ago, MIN/MAX() optimization
>>>has absolutely nothing to do with MVCC. It does, however, make
>>>optimizing COUNT() more difficult.
>>>  
>>>
>
>  
>
>>Not exactly.  While max(id) is easily optimized by query replacement, 
>>more complex aggregates will still have perfomance issues that would not 
>>be present in a row locking database.  i.e. max((field1/field2)*field3) is 
>>still going to cost more to process, isn't it?
>>
>>
>
>Er, what makes you think that would be cheap in any database?
>
>Postgres would actually have an advantage given its support for
>expressional indexes (nee functional indexes).  If we had an optimizer
>transform to convert MAX() into an index scan, I would expect it to be
>able to match up max((field1/field2)*field3) with an index on
>((field1/field2)*field3).
>  
>

Would it be possible to rewrite min and max at the parser level into a
select/subselect with the same conditions and an ORDER BY (on the aggregated
expression) descending/ascending LIMIT 1, thereby avoiding the penalties of
altering the default aggregate behavior?  Would it yield anything beneficial?
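
Spelled out by hand for the expression in Tom's example, the rewrite being
discussed would look roughly like this (table and index names are made up,
and an expression index as in 7.4 is assumed):

  CREATE INDEX some_table_expr_idx ON some_table (((field1/field2)*field3));

  -- Hand-written equivalent of SELECT max((field1/field2)*field3):
  SELECT (field1/field2)*field3
    FROM some_table
   WHERE (field1/field2)*field3 IS NOT NULL
   ORDER BY (field1/field2)*field3 DESC
   LIMIT 1;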





Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Bruno Wolff III
On Tue, Sep 09, 2003 at 14:06:56 -0500,
  Thomas Swan <[EMAIL PROTECTED]> wrote:
> 
> Would it be possible to rewrite min and max at the parser level into a
> select/subselect with the same conditions and an ORDER BY (on the aggregated
> expression) descending/ascending LIMIT 1, thereby avoiding the penalties of
> altering the default aggregate behavior?  Would it yield anything beneficial?

That isn't always going to be the best way to do the calculation. If there
are other aggregates or if the groups are small, doing things the normal
way (and hash aggregates in 7.4 will help) can be faster.



[PERFORM] Reading data in bulk - help?

2003-09-09 Thread Chris Huston
I've got an application that needs to chunk through ~2GB of data. The 
data is ~7000 different sets of 300 records each. I put all of the data 
into a postgres database, but that doesn't look like it's going to work 
because of how the data lives on the disk.

When the app runs on a 500 MHz G4 the CPU is 30% idle... the processing 
application is eating about 50% and postgres about 10%. I don't know 
how to tell for sure, but it looks like postgres is blocking on disk I/O.

For a serial scan of the postgres table (e.g. "select * from 
datatable"), "iostat" reports 128K per transfer, ~140 tps and between 
14 and 20 MB/s from disk0 - with postgres taking more than 90% CPU.

If I then run a loop asking for only the 300 records of one set at a time 
(e.g. "select * from datatable where group_id='123'"), iostat reports 8k per 
transfer, ~200 tps, less than 1 MB/s throughput, and postgres taking ~10% 
CPU. (There is an index defined for group_id and EXPLAIN says it's 
being used.)

So I'm guessing that postgres is jumping all over the disk and my app 
is just waiting on data. Is there a way to fix this? Or should I move 
to a scientific data file format like NCSA's HDF?

I need to push new values into each of the 7000 datasets once or twice 
a day and then read-process the entire data set as many times as I can 
in a 12 hour period - nearly every day of the year. Currently there is 
only single table but I had planned to add several others.

Thanks,
- Chris


Re: [PERFORM] Reading data in bulk - help?

2003-09-09 Thread Josh Berkus
Chris,

> I've got an application that needs to chunk through ~2GB of data. The 
> data is ~7000 different sets of 300 records each. I put all of the data 
> into a postgres database, but that doesn't look like it's going to work 
> because of how the data lives on the disk.

Your problem is curable through 4 steps:

1) adjust your postgresql.conf to appropriate levels for memory usage.

2) if those sets of 300 are blocks in some contiguous order, then cluster them 
to force their physical ordering on disk to be the same order you want to 
read them in (see the example after this list).   This will require you to 
re-cluster whenever you change a significant number of records, but from the 
sound of it that happens in batches.

3) Get better disks, preferably a RAID array, or just very fast SCSI if the 
database is small.  If you're budget-constrained, Linux software RAID (or 
BSD RAID) on IDE disks is cheap.  What kind of RAID depends on what else 
you'll be doing with the app; RAID 5 is better for read-only access, RAID 1+0 
is better for read-write.

4) Make sure that you aren't dumping the data to the same disk postgreSQL 
lives on!   Preferably, make sure that your swap partition is on a different 
disk/array from postgresql.   If the computing app is complex and requires 
disk reads aside from postgres data, you should make sure that it lives on 
yet another disk.   Or you can simplify this with a good, really large 
multi-channel RAID array.
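
For step 2, the clustering itself is one command per table. Assuming the
datatable/group_id index from the original post (the index name here is made
up; this is the 7.3-era CLUSTER syntax):

  -- Order the table on disk by the index used to fetch each set, so each
  -- "select ... where group_id = ..." reads mostly contiguous blocks:
  CLUSTER datatable_group_id_idx ON datatable;

  -- Re-run CLUSTER after large batches of changes, then refresh statistics:
  ANALYZE datatable;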

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco




Re: [PERFORM] slow plan for min/max

2003-09-09 Thread Matt Clark
> > Know what we (OK, I) need?  An explicitly non-aggregate max() and min(),
> > implemented differently, so they can be optimised.
>
> Not per se.  The way I've been visualizing this is that we add to
> pg_aggregate a column named, say, aggsortop, with the definition:
...snip of cunning, potentially generalisable plan...
> How do you structure the resulting query plan, if it's at all complex
> (think multiple aggregate calls...)?  I'm not clear on the answers to
> any of those questions, so I'm not volunteering to try to code it up ...

So, you're not going to code it, I'm not going to code it, I doubt anyone
else is soon.

The issue is going to remain then, that max() and min() are implemented in a
way that is grossly counterintuitively slow for 99% of uses.  It's not bad,
or wrong, just a consequence of many higher level factors.  This should
therefore be very prominently flagged in the docs until there is either a
general or specific solution.

FYI I have rewritten 4 queries today to work around this (with nice
performance benefits) as a result of this thread.  Yeah, I should have
spotted the _silly_ seq scans beforehand, but if you're not looking, you
don't tend to see.  Best improvement is 325msec to 0.60msec!

I'm happy to do the doc work.

M




Re: [PERFORM] Hardware recommendations to scale to silly load

2003-09-09 Thread Bruce Momjian
Matt Clark wrote:
> > Just a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI
> > drive I can do 4k inserts/second if I turn fsync off.  If you have a
> > battery-backed controller, you should be able to do the same.  (You will
> > not need to turn fsync off --- fsync will just be fast because of the
> > disk drive RAM).
> >
> > Am I missing something?
> 
> I think Ron asked this, but I will too, is that 4k inserts in
> one transaction or 4k transactions each with one insert?
> 
> fsync is very much faster (as are all random writes) with the
> write-back cache, but I'd hazard a guess that it's still not
> nearly as fast as turning fsync off altogether.  I'll do a test
> perhaps...

Sorry to be replying late.  Here is what I found.

fsync on
    Inserts all in one transaction      3700 inserts/second
    Inserts in separate transactions     870 inserts/second

fsync off
    Inserts all in one transaction      3700 inserts/second
    Inserts in separate transactions    2500 inserts/second

ECPG test program attached.

--

  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
/*
 *  Thread test program
 *  by Philip Yarra
 */


#include <stdio.h>
#include <stdlib.h>

void	ins1(void);

EXEC SQL BEGIN DECLARE SECTION;
char   *dbname;
int iterations = 10;
EXEC SQL END DECLARE SECTION;

int
main(int argc, char *argv[])
{

	if (argc < 2 || argc > 3)
	{
		fprintf(stderr, "Usage: %s dbname [iterations]\n", argv[0]);
		return 1;
	}
	dbname = argv[1];

	if (argc == 3)
		iterations = atoi(argv[2]);
	if (iterations % 2 != 0)
	{
		fprintf(stderr, "iterations must be an even number\n");
		return 1;
	}

	EXEC SQL CONNECT TO :dbname AS test0;

	/* DROP might fail */
	EXEC SQL AT test0 DROP TABLE test_thread;
	EXEC SQL AT test0 COMMIT WORK;
	EXEC SQL AT test0 CREATE TABLE test_thread(message TEXT);
	EXEC SQL AT test0 COMMIT WORK;
	EXEC SQL DISCONNECT test0;

	ins1();

	return 0;
}

void
ins1(void)
{
	int i;

	EXEC SQL WHENEVER sqlerror sqlprint;
	EXEC SQL CONNECT TO :dbname AS test1;
	EXEC SQL AT test1 SET AUTOCOMMIT TO ON;

	for (i = 0; i < iterations; i++)
		EXEC SQL AT test1 INSERT INTO test_thread VALUES('thread1');
//		EXEC SQL AT test1 COMMIT WORK;

	EXEC SQL DISCONNECT test1;

	printf("thread 1 : done!\n");
}


