Hi all. Many of you might have already noticed this issue. For the rest, I suggest reading http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8610
Let me try to summarize the problem: in Debian testing (sarge) there are
two different, possibly coexisting, g++ compilers, the 2.95 flavor and
the 3.3 flavor. Sarge defaults to the 3.3 flavor (i.e. g++ means
g++-3.3). Each flavor links against a different C++ standard library.
The implementation used by g++-3.3 (packaged as libstdc++5-3.3*) is a
complete from-scratch rewrite of the standard library, and apparently
lacks some features that were present in the old library. Among others,
large file support has disappeared from the library, and its absence
goes unnoticed unless one carefully tests streams for goodness before
reading or writing. So if you have C++ applications that manipulate
large files and recompile them with g++-3.3, you might end up with
massive data corruption.

The gcc team has already fixed the bug, and the fix will ship with the
3.4 release.

Assuming you have LFS support in the kernel (Linux 2.4), you might want
to try the following:

# build a large file that in fact occupies only a few disk blocks
echo 12 > large_file
dd if=/dev/zero of=large_file bs=1k count=1 seek=3000k

# make C++ code for reading it:
cat > testlfs.cc << EOF
#include <fstream>
#include <iostream>

int main()
{
    int i = -1;
    std::ifstream f("large_file");
    std::cout << "File is good? " << f.good() << std::endl;
    if ( f.good() )
        f >> i;
    std::cout << "Value of i? " << i << std::endl;
}
EOF

# try different compilers:
g++-3.3 -Wall -D_FILE_OFFSET_BITS=64 testlfs.cc -o testlfs-3; ./testlfs-3
g++-2.95 -Wall -D_FILE_OFFSET_BITS=64 testlfs.cc -o testlfs-2; ./testlfs-2

Best regards.
Giuseppe Bonacci
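
P.S. Until 3.4 arrives, the only safe habit is the one hinted at above:
check the stream after every operation instead of assuming reads
succeed. A minimal sketch of that check-and-bail pattern (the file name
and error messages here are mine for illustration, not from the bug
report):

#include <cstdlib>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream f("large_file");
    // Refuse to proceed if the open itself failed -- on an affected
    // libstdc++5, opening a file larger than 2 GB leaves the stream bad.
    if ( !f.good() ) {
        std::cerr << "cannot open large_file (no LFS support?)" << std::endl;
        return EXIT_FAILURE;
    }
    int i;
    // Test every extraction before trusting the value it produced.
    if ( !(f >> i) ) {
        std::cerr << "read failed; value of i is unusable" << std::endl;
        return EXIT_FAILURE;
    }
    std::cout << "read i = " << i << std::endl;
    return EXIT_SUCCESS;
}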
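
P.P.S. Since each flavor links against a different standard library,
you can also check which runtime a given binary actually pulls in, for
instance with the testlfs-3 binary built above:

ldd ./testlfs-3 | grep 'libstdc++'

A binary built with g++-3.3 should list libstdc++.so.5 (from the
libstdc++5 package), while one built with g++-2.95 lists the older
soname.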