I have a procedure that "uploads" a complete set of dbfs to a
PostgreSQL database.
It's based on a data dictionary that holds tables, SQL indexes and SQL
scripts, and is usually run every night.

The loop is:

for every table
   ...
   use <table>
   __dbSql to a text file
   if remote
      ftp upload
   endif
   create the SQL script to "import" that text file and create indexes
   exec the SQL script
   ...
next
...
exec many SQL scripts to aggregate the data and create the DW tables
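
In Harbour the body of that loop can be factored into a single
per-table function. A minimal sketch, where ExportToText(), FtpUpload(),
BuildImportScript() and ExecSqlScript() are only hypothetical
placeholders for the real export/FTP/SQL routines:

FUNCTION ProcessTable( cTable, lRemote )

   LOCAL cTextFile := cTable + ".txt"
   LOCAL cSqlFile  := cTable + ".sql"

   USE ( cTable ) NEW SHARED          // open the dbf in its own work area
   ExportToText( cTextFile )          // the __dbSql dump to a text file
   dbCloseArea()

   IF lRemote
      FtpUpload( cTextFile )          // only when the database is remote
   ENDIF

   BuildImportScript( cTable, cTextFile, cSqlFile )  // COPY + CREATE INDEX
   ExecSqlScript( cSqlFile )          // run it against PostgreSQL

   RETURN NIL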

Since with MT the copy, FTP and SQL steps could be "pipelined", I've
tested this possibility.
I've moved the single-table processing into a function started with
hb_threadStart(), and I added an hb_threadWaitForAll() after the loop
to wait for all the uploads to finish before continuing.
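
Roughly, the change looks like this (a minimal sketch, reusing
ProcessTable() from the sketch above; aTables, lRemote and
RunDwScripts() are hypothetical names, while hb_threadStart() and
hb_threadWaitForAll() are the real Harbour MT calls):

PROCEDURE UploadAll( aTables, lRemote )

   LOCAL cTable

   FOR EACH cTable IN aTables
      // each table is processed in its own thread
      hb_threadStart( @ProcessTable(), cTable, lRemote )
   NEXT

   hb_threadWaitForAll()       // wait for every upload before continuing

   RunDwScripts()              // only now build the DW tables

   RETURN

Since every Harbour thread gets its own work-area space by default,
the USE inside each ProcessTable() call does not clash with the others.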

In this way all the tables (80 in the test, for 4 GB of dbfs/cdxs) have
been processed at the same time (I used a local database, without the
FTP step).

Everything worked as expected!

On a single PC this was more a test of disk reliability :) but in the
end all the tables were uploaded and all the SQL scripts were executed.

However, I've found that even a "small" change like this required a
major change in the code. Not in the function itself, but in all the
functions that are called along the way.
Things like a data-driven app and "static file-wide vars + set/get
functions" may become a problem.

It seems that MT forces a rethink of many acquired patterns.

Good, time to go back to the books :)

best regards,
Lorenzo