What you are saying is "it works for me", which is not an indication
of quality, nor of design, nor of proven stability, etc.

TBROWSE() comes to mind as an example: it also "worked for me"
for a lot of people, yet it had to be reimplemented from scratch
(with a huge amount of testing and work invested) to
*really* make it work, not "just for me" but for *all* possible
uses and users, with full Clipper compatibility.

I don't doubt LetoDB can work in many situations and for many
users, and it's a very nice effort and project on its own, but
the cited issues are real ones. If we had disregarded such issues
in other parts of Harbour, our project would be nowhere near
where it is now.

Brgds,
Viktor

On 2009 Oct 5, at 09:37, Alexandr Okhotnikov wrote:

Hi

2009/10/4 Przemyslaw Czerpak <dru...@acn.waw.pl>:

Rather for people who do not want to move their code to the server side.
ADS is only a partial solution; it is much better and more efficient to
move the whole application to the server and execute it remotely.

50-100 concurrent accesses:

1. very large load on the server
2. given how powerful current local machines are, why do we need a
terminal-server-like solution?
3. not everything can be transferred to the server (browse...)

The main types are covered (in my opinion that is enough for now); the
rest can be added later (for DBF they are not critical)

These are only your personal preferences.

I wrote that this is only my opinion

SEEK: for numeric fields, translate to a string based on the dimensions
of the field (with no loss of accuracy)
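
A rough sketch of that idea (the helper name is hypothetical and this is
not LetoDB code), assuming the index key is a plain numeric field:

#include "dbinfo.ch"

// hypothetical helper: convert a numeric SEEK value to a string using
// the length and decimals of the underlying field in the work area
static function NumToSeekStr( nValue, nFieldPos )
   return Str( nValue, dbFieldInfo( DBS_LEN, nFieldPos ), ;
                       dbFieldInfo( DBS_DEC, nFieldPos ) )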

You do not understand the problem. It is a problem and it's a very serious bug.

For SEEK, I do not see the problem. What exactly do I not understand?

7. it does not respect many of _SET_* settings
I think finishing it (or customizing it for a specific case) will not take much time

How should it be done to keep the original author's vision, which is unknown?
I can imagine at least three different implementations which would give
different final functionality. Some implementations strongly depend on the
communication protocols used; others may work badly with .hrb modules
automatically registered on the server side by clients (i.e. with UDFs used
by the client in filter or index expressions), etc.
I do not know what the final goal of LetoDB development is, and to implement
it I would have to make some arbitrary decisions which may create problems
in the future.

What exactly the author would want - one must ask the author. In any case
the project seems to be finished by the author (judging by the activity
over the past year).
Now we are not talking about UDFs. I would like to, but that means a
change which would just lead to incompatibility with the current
implementation.

You can use the RDD section in the CA-Clipper 5.3 Technical Reference.
The Harbour RDD model follows the Clipper one very closely. There are
only a few small differences and extensions, and it was the place where
I started before I touched the Harbour RDD code.

"few small differences and extensions"  :)

If you want to see what it means in practice then please try the
code below. It shows only the cost of scope setting/restoring in
the current LetoDB (the 3rd value).
Here are the results on my Linux box with a localhost connection.
1. using local file access:
  RDD: DBFCDX _tst2
  creating table...            0.20 sec.
  indexing...                  0.03 sec.
  testing...                   0.07 sec.

2. using NETIO:
  RDD: DBFCDX NET:127.0.0.1:/_tst2
  creating table...            0.72 sec.
  indexing...                  0.14 sec.
  testing...                   0.20 sec.

3. using LetoDB:
  RDD: LETO //127.0.0.1:2812/_tst2
  creating table...            2.43 sec.
  indexing...                  0.05 sec.
  testing...                  86.17 sec.

So in this test LetoDB is 431 _times_ slower than NETIO, and believe
me, it's not the worst case. If you want, you can try to increase the
number of records and you will see what happens.
If you repeat these tests in a LAN then the network overhead should
reduce the difference, but I do not think that LetoDB will be faster.
If you have a while then I'm interested in your results in a LAN.

The key points (for a real application) are three (for point 1 see the
sketch after this list):
1. use SHARED!
2. LAN (not local)
3. simultaneous access to the table by more than one application (if the
table is SHARED but only one application uses it, Windows, for instance,
optimizes the queries)
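
A minimal sketch of point 1, using the cFile variable from the test
program quoted at the end of this message: open the table in shared
mode instead of relying on the default exclusive open.

// sketch: USE without a SHARED clause follows _SET_EXCLUSIVE,
// which defaults to ON, i.e. an exclusive open
set exclusive off                // or, equivalently, add the SHARED clause:
use (cFile) alias tst shared
set index to (cFile)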

My example (two connections to the table):

RDD: DBFCDX \\192.168.170.11\income\_tst2
testing...                   25.58 sec.

RDD: LETO //192.168.170.11:2812/_tst2
testing...                  56.09 sec.


But since I do not use OrdKeyNo() for each record (and what is it
for?), I rewrote it with SEEK:

 /* dbGoTop()
    while !eof()
       x := ordKeyNo()  // / ordKeyCount()
       dbSkip()
    enddo */
 for i := 1 to lastrec()
     // position by key value instead of asking for the logical position
     seek ( l2bin( i ) + "x" )
     if ! found()
         ? "!!!!!!!!!!!!!!!!!!"
     endif
 next

RDD: DBFCDX \\192.168.170.11\income\_tst2
testing...                   17.59 sec.

RDD: LETO //192.168.170.11:2812/_tst2
testing...                  8.12 sec.


See the test results above. How LetoDB performs strongly depends on the
user code. The funny thing is that non-optimized code, which uses pure
SET FILTER TO without any index scope settings, needs an RDD which makes
automatic scope optimization, like COMIX or RMDBF*. Otherwise it works
very slowly using the file IO API in a network environment, and here
switching to LetoDB may help, though not always.
But for code optimized by the programmer which uses scopes extensively,
switching to LetoDB may cause a very serious slowdown.
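
To make the contrast concrete, a minimal sketch of the two styles (the
table, index and field names here are hypothetical):

use customer shared new
set index to customer            // assume the index key is customer->code

// 1) non-optimized: SET FILTER evaluates the condition for every record,
//    so over plain file IO the whole table is pulled across the network
set filter to customer->code >= "0100" .and. customer->code <= "0199"
dbGoTop()

// 2) optimized by the programmer: index scopes let the RDD position
//    directly on the matching key range
set filter to
ordScope( 0, "0100" )            // 0 == TOPSCOPE
ordScope( 1, "0199" )            // 1 == BOTTOMSCOPE
dbGoTop()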

I see other results.
Plus, stable operation (client hangs do not affect it), in contrast to
DBFCDX and NETIO (even more important than speed)



best regards,
Przemek


#define N_RECCOUNT 10000
REQUEST LETO
REQUEST DBFCDX
field F
proc main(rdd)
  local cFile := "_tst2", fileFunc, x, t
  if empty(rdd)
     cFile := "//127.0.0.1:2812/" + cFile
     rddSetDefault( "LETO" )
     fileFunc := @leto_file()
  else
     if upper(rdd)="NET:"
        cFile := "NET:127.0.0.1:/" + cFile
     endif
     rddSetDefault( "DBFCDX" )
     fileFunc := @dbexists()
  endif
  ? "RDD:", rddSetDefault(), cFile
  if !fileFunc:exec( cFile + ".dbf" )
     ? padr( "creating table...", 20 )
     t := seconds()
     dbCreate( cFile, {{"F","C",100,0},{"F2","L",1,0}} )
     use (cFile) alias tst
     while lastrec() < N_RECCOUNT
        dbAppend(); F := l2bin( recno() ) + repl( "x", 100 )
     enddo
     t := seconds() - t
     ?? t, "sec."
     ? padr( "indexing...", 20 )
     t := seconds()
     index on F tag T to (cFile)
     t := seconds() - t
     ?? t, "sec."
     close
  endif
  use (cFile) alias tst
  set index to (cFile)
  ? padr( "testing...", 20 )
  t := seconds()
  ordScope( 0, l2bin( 0 ) + "x" )    // set a top scope before the traversal
  dbGoTop()
  while !eof()
     // relative position of every record (what the "testing" time measures)
     x := ordKeyNo() / ordKeyCount()
     dbSkip()
  enddo
  t := seconds() - t
  ?? t, "sec."
  close
return
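
For anyone repeating the test, a reading of the parameter handling above
(the executable name tst2 is only an assumption; the LetoDB and NETIO
servers are assumed to be running on localhost):

// tst2            -> RDD LETO, table //127.0.0.1:2812/_tst2
// tst2 NET:       -> RDD DBFCDX over NETIO, table NET:127.0.0.1:/_tst2
// tst2 DBFCDX     -> RDD DBFCDX on the local file _tst2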
_______________________________________________
Harbour mailing list
Harbour@harbour-project.org
http://lists.harbour-project.org/mailman/listinfo/harbour
