> It is probably from reasoning like:
>
> "there is really no point in it, as on 32-bit systems
> int and long are the same size, so the same limit comes
> with both types."
>
> On 64-bit machines there is, of course, a definite difference.
---
There ya go again -- confusing the issue with facts. Compare
and contrast the appropriate 32- and 64-bit values -- say, a
'long long' on ia32 vs. a plain int.
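
For the record, a quick C snippet (my own illustration, not anything
from this thread) showing the widths in question:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
            /* On ia32, int and long are both 32 bits; long long is 64.
             * On a 64-bit machine, long widens to 64 bits, which is
             * exactly where the difference shows up. */
            printf("int:       %zu bits, max %d\n",
                   sizeof(int) * CHAR_BIT, INT_MAX);
            printf("long:      %zu bits, max %ld\n",
                   sizeof(long) * CHAR_BIT, LONG_MAX);
            printf("long long: %zu bits, max %lld\n",
                   sizeof(long long) * CHAR_BIT, LLONG_MAX);
            return 0;
    }

On ia32 that prints 32/32/64; on a 64-bit box, long goes to 64.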
Yes, with larger cluster sizes you can get larger virtual file
sizes, but that's only pushing the problem a bit further back. Is
it possible -- I think there's an IRIX product to do this, CXFS --
to take an existing HD and add on more hard disks to form a bigger
and bigger drive? So maybe this is flawed, but I could easily see
the mistake of starting out with a 1K block size on your first HD,
then not being able to 'grow' it beyond a 2T limit. Some of our
larger customers have had files larger than 2T -- customers exist
today that use that functionality. I have no idea what block size
they use; depending on their usage, 4K might be reasonable, but if
what you are doing is skipping around in a large database file and
doing a lot of small reads, 4K block sizes might be less efficient.
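
For concreteness, here's my own back-of-the-envelope sketch of how
that 2T number can fall out. It assumes the ceiling comes from a
signed 32-bit block index (my assumption -- the real limit depends
on the filesystem and kernel version):

    #include <stdio.h>

    int main(void)
    {
            /* Assumption: the size ceiling is a signed 32-bit block
             * count, so the byte limit scales with the block size
             * chosen at mkfs time -- hence 'stuck at 2T with 1K
             * blocks'. */
            unsigned long long max_blocks = 1ULL << 31;
            int sizes[] = { 1024, 2048, 4096 };
            int i;

            for (i = 0; i < 3; i++)
                    printf("%d-byte blocks -> %llu TB max\n",
                           sizes[i], (max_blocks * sizes[i]) >> 40);
            return 0;
    }

Under that assumption, 1K blocks top out at 2T while 4K blocks buy
you 8T -- which is the 'pushing the problem back' part.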
If our customers were doing this over a year ago (which they were),
it's not gonna be long before the practice spreads... growth is a
virus... it keeps spreading... :-)
-l