>I have not found a clear answer
>Everyone has their own impression :-)
>What is the file size limit in the DBF format?
>And in DBF with (x)Harbour?
>And in DBF with ADS 6.2x?
>I know it also depends on the FS limit ( FAT32: 4 GB, NTFS: ... )
Direct tests give several answers.
Note: I do not hit problems with the 2 GB limit in DBF files myself, but some
users have asked me about it: when they cross 2 GB they get errors and have
to split their DBF files to avoid failures and index corruption.
Tests with the different sections of the code below gave the expected results:
---------------
// REQUEST _DBFCDX
REQUEST DBFCDX

#include "set.ch"
#include "dbinfo.ch"

FUNCTION Main()

   LOCAL nTotRec, nRec, xKey

   rddsetdefault( "DBFCDX" )

   dbcreate( "mytest", ;
      { { "field1", "C",  20, 0 }, ;
        { "field2", "C", 200, 0 }, ;
        { "field3", "D",   8, 0 }, ;
        { "field4", "M",  10, 0 } } )

   USE mytest                        // exclusive open, so no record locks are required
   INDEX ON field1 TO mytest

   // number of records needed to reach roughly 4.5 GB of .dbf data
   nTotRec := INT( ( 4500000 * 1000 - header() ) / recsize() )
   ? nTotRec

   nRec := 0
   WHILE nRec <= nTotRec
      dbappend()
      REPLACE FIELD1 WITH STR( SECONDS() )
      REPLACE FIELD4 WITH STR( SECONDS() )   // memo field, grows the .fpt as well
      DBRUNLOCK()
      nRec++
   ENDDO

   USE mytest INDEX mytest
   /*
   APPEND FROM mytest2               // alternative section used for the APPEND FROM test
   */

   ? INT( ( 4500000 * 1000 - header() ) / recsize() )
   ? "lastrec()", lastrec()
   ? "SET( _SET_MFILEEXT )", SET( _SET_MFILEEXT )
   ? "SET( _SET_MBLOCKSIZE )", SET( _SET_MBLOCKSIZE )
   ? "SET( _SET_AUTORDER )", SET( _SET_AUTORDER )
   ? "SET( _SET_AUTOPEN )", SET( _SET_AUTOPEN )
   ? "DBINFO( DBI_MEMOBLOCKSIZE )", DBINFO( DBI_MEMOBLOCKSIZE )

   // record pointer movement well past the 2 GB byte offset
   dbgoto( lastrec() - 10 )
   ? recno()
   dbgoto( int( lastrec() / 2 ) )
   ? recno()
   // inkey(0)
   // dbedit()

   // take the key of the middle record, go to the bottom and seek it back
   xKey := FIELDGET( 1 )
   DBGOBOTTOM()
   DBSEEK( xKey )
   ? recno(), xKey
   // inkey(0)
   // dbedit()
   DBSKIP()
   ? recno()

   RETURN NIL
----------------
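For reference, the nTotRec formula above is just the DBF size formula solved for
the record count; going the other way, the expected on-disk size of an open
table can be computed from the work area itself. A minimal sketch (the helper
name DbfExpectedSize() is mine, and it assumes the trailing EOF byte is written):
---------------
// a minimal sketch: expected .dbf size of the current work area,
// i.e. header + all records + the final EOF (0x1A) byte
FUNCTION DbfExpectedSize()
   RETURN HEADER() + LASTREC() * RECSIZE() + 1
---------------
This matches the NTFS listing under a) below: 18,828,452 records of 239 bytes
each account for 4,500,000,028 bytes, and the remaining 163 bytes of the
4,500,000,191 total are the header plus the end-of-file marker.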
and the results (all with DBFCDX, BCC 5.51, Windows XP SP2) are:
a) NTFS. Larger than 4 GB ( 2**32 ):
15/01/2010 09:16 p.m. 4,500,000,191 mytest.dbf
15/01/2010 09:16 p.m. 1,205,021,952 mytest.fpt
15/01/2010 09:16 p.m. 104,111,104 mytest.cdx
b) FAT32. Limited to 4 GB by the FS ( 4,294,967,296 ):
15/01/2010 09:52 p.m. 4,294,967,109 mytest.dbf
15/01/2010 09:52 p.m. 1,150,117,760 mytest.fpt
15/01/2010 09:52 p.m. 99,118,080 mytest.cdx
It fails where it must fail, with a clear message:
Error DBFCDX/1011 Write error: mytest.dbf (DOS Error 112)
c) HPFS. Not tested due to lack of space. It must fail at 2 GB.
d) Linux (Ext2, Ext3, Ext4, JFS, ... ). Not tested due to lack of space.
It must fail at 2 GB on older kernels and above 4 GB on others.
e) Results with Clipper 5.3a, as expected (no create/append, just the values
and the record pointer "movement"):
18828451
lastrec() 0
SET( _SET_MFILEEXT )
SET( _SET_MBLOCKSIZE ) 64
SET( _SET_AUTORDER ) 0
SET( _SET_AUTOPEN ) .T.
DBINFO( DBI_MEMOBLOCKSIZE ) 64
1
1
1
1
f) Results with Harbour
NTFS:
------
18828451
lastrec() 18828452
SET( _SET_MFILEEXT )
SET( _SET_MBLOCKSIZE ) 0
SET( _SET_AUTORDER ) 0
SET( _SET_AUTOPEN ) .T.
DBINFO( DBI_MEMOBLOCKSIZE ) 64
18828442
9414226
9414200 75512.13
9414201
------
FAT32:
------
18828451
lastrec() 17970573
SET( _SET_MFILEEXT )
SET( _SET_MBLOCKSIZE ) 0
SET( _SET_AUTORDER ) 0
SET( _SET_AUTOPEN ) .T.
DBINFO( DBI_MEMOBLOCKSIZE ) 64
17970563
8985286
8985280 77607.47
8985281
------
g) Why do Clipper and Harbour show different values here?
SET( _SET_MBLOCKSIZE ) 64
SET( _SET_MBLOCKSIZE ) 0
Should this be fixed? (A sketch that sidesteps the default is shown after item h) below.)
h) Tests with APPEND FROM cross the 2 GB mark without problems, and record
pointer movement works fine, including DBSEEK().
This test was made to check the report by Fred Seyffert (RE: DBFCDX Issues).
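Regarding g): in both runs DBINFO( DBI_MEMOBLOCKSIZE ) reports 64, so the memo
files are apparently created with the same block size and only the default
reported by SET( _SET_MBLOCKSIZE ) differs. One way to take the default out of
the picture (a minimal sketch, assuming SET( _SET_MBLOCKSIZE, <n> ) is honored
at dbcreate() time by both runtimes) is to request the block size explicitly
before creating the table:
---------------
REQUEST DBFCDX
#include "set.ch"
#include "dbinfo.ch"

FUNCTION Main()
   rddsetdefault( "DBFCDX" )
   SET( _SET_MBLOCKSIZE, 64 )        // request 64-byte memo blocks explicitly
   dbcreate( "mytest", { { "field1", "C", 20, 0 }, { "field4", "M", 10, 0 } } )
   USE mytest
   ? SET( _SET_MBLOCKSIZE ), DBINFO( DBI_MEMOBLOCKSIZE )   // both expected to report 64
   USE
   RETURN NIL
---------------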
So we have:
1) DBF files larger than 2 GB do not fail
2) Record pointer movement does not fail in these files, including DBSEEK()
3) APPEND FROM / record pointer movement does not fail
4) Users confirm that these kinds of errors happen when crossing 2 GB,
for example Fred Seyffert:
========================
I'm having the continuation of a long-term problem with large data file
(>2 GB) read errors. Mostly I get 1010's from standard errorblock trapping,
but sometimes I get
------------------------------------------------------------------------
Application Internal Error -app.exe
Terminated at: 2010.01.15 12:39:02
Unrecoverable error 9201: hb_cdxPageSeekKey: wrong parent key.
Called from DBSEEK(0)
[...]
I've been wrestling with this for months now, and would appreciate some
steering?
========================
So I think these errors happen in scenarios/environments different from my
tests, where more complex operations and movements are involved, including
different flows of use.
Perhaps some kind of memory corruption, or limits in some functions
( DBSEEK(), DBGOTO(), ... ) dealing with values greater than 2**31 under
certain circumstances? Compiler issues? A rough check is sketched below.
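One way to narrow this down would be to compare what the RDD returns for a
record sitting beyond the 2 GB offset with a raw read of the same bytes. A
minimal sketch (the function name CheckBigOffset() is mine, and it assumes
FSEEK() accepts offsets above 2**31 on the build being tested):
---------------
#include "fileio.ch"

// compare a raw read of the last record with what the RDD reports for it;
// in the 4.5 GB test table this record sits well beyond the 2 GB offset
FUNCTION CheckBigOffset( cTable )
   LOCAL nHandle, nOffset, cRaw

   USE ( cTable ) SHARED READONLY
   DBGOBOTTOM()
   nOffset := HEADER() + ( RECNO() - 1 ) * RECSIZE()
   ? "offset of last record:", nOffset

   nHandle := FOPEN( cTable + ".dbf", FO_READ + FO_SHARED )
   FSEEK( nHandle, nOffset )          // assumption: 64-bit offsets are supported here
   cRaw := SPACE( RECSIZE() )
   FREAD( nHandle, @cRaw, RECSIZE() )
   FCLOSE( nHandle )

   // byte 1 of a record is the deleted flag; field1 (C,20) starts at byte 2
   ? "raw field1 matches the RDD:", SUBSTR( cRaw, 2, 20 ) == FIELDGET( 1 )

   USE
   RETURN NIL
---------------
Usage would be CheckBigOffset( "mytest" ) against the table created above.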
David Macias
_______________________________________________
Harbour mailing list (attachment size limit: 40KB)
Harbour@harbour-project.org
http://lists.harbour-project.org/mailman/listinfo/harbour