On Thu, Oct 28, 1999 at 03:34:53PM -0700, Matthew Dillon wrote:
> :OK, so I know now that I can have pretty large files in the Terabyte range.
> :Very nice. But I assume I cannot mmap anything like a 100 GB file ?
> :
> :Michael
>
> Intel CPUs only have a 4G address space. You are limited to around
> a 2G mmap()ing. You can mmap() any portion of a larger file but you
> cannot mmap() the whole file at once.
>
> The easiest thing to do is to simply create a number of fixed-sized files
> and tell CNFS to use them.
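For reference, mapping just a window of a big file the way Matthew
describes would look roughly like this; the path, offset, and window
size below are made up, and the offset only has to be a page-aligned
64-bit off_t:

/*
 * Rough sketch only: map a 1 GB window of a much larger file.  The
 * process address space on a 32-bit Intel box is 4 GB, so only a
 * window of roughly 1-2 GB can be mapped at any one time, but the
 * file offset is a 64-bit off_t, so the window can start anywhere
 * in a 100 GB file.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define WINDOW_SIZE     (1024UL * 1024UL * 1024UL)      /* 1 GB */

int
main(void)
{
        int fd;
        void *win;
        off_t offset = (off_t)50 * 1024 * 1024 * 1024;  /* 50 GB in */

        fd = open("/news/spool/cycbuf", O_RDWR);        /* made-up path */
        if (fd < 0) {
                perror("open");
                exit(1);
        }

        /* the offset handed to mmap() must be page aligned */
        win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, offset);
        if (win == MAP_FAILED) {
                perror("mmap");
                exit(1);
        }

        /* ... read and write through the window ... */

        munmap(win, WINDOW_SIZE);
        close(fd);
        return (0);
}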
Here is the problem:
When you want to have 500 GB of storage, you will need 250 files of 2 GB
each. In the current implementation of nnrpd, that means 250 file
descriptors per nnrpd, which limits the number of readers a system can
support, because an nnrpd is spawned for each reader. I was told that
nnrpd can be hacked to only consume file descriptors when they are really
needed, but that is supposed to come with a performance penalty.
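A rough sketch of that kind of on-demand descriptor handling (this is
not the actual nnrpd code; the buffer count, open limit, and function
names are made up) might look like:

/*
 * Rough sketch, not the actual nnrpd code: keep at most MAXOPEN of
 * the cyclic buffer files open, open the rest on demand, and close
 * the least recently used one when over the limit.  The extra open()
 * on a cache miss is the performance penalty mentioned above.
 */
#include <fcntl.h>
#include <unistd.h>

#define NBUFFERS        250     /* one per 2 GB buffer file */
#define MAXOPEN         16      /* descriptors held open at once */

static int      buffd[NBUFFERS];        /* -1 when not open */
static long     lastuse[NBUFFERS];      /* pseudo-clock for LRU */
static long     clockval;

void
buffer_init(void)
{
        int i;

        for (i = 0; i < NBUFFERS; i++)
                buffd[i] = -1;
}

int
buffer_fd(int n, const char *path)      /* path of buffer n */
{
        int i, nopen, victim;

        if (buffd[n] >= 0) {
                lastuse[n] = ++clockval;
                return (buffd[n]);
        }

        /* count open descriptors and find the least recently used */
        nopen = 0;
        victim = -1;
        for (i = 0; i < NBUFFERS; i++) {
                if (buffd[i] < 0)
                        continue;
                nopen++;
                if (victim < 0 || lastuse[i] < lastuse[victim])
                        victim = i;
        }
        if (nopen >= MAXOPEN && victim >= 0) {
                close(buffd[victim]);
                buffd[victim] = -1;
        }

        buffd[n] = open(path, O_RDONLY);
        lastuse[n] = ++clockval;
        return (buffd[n]);      /* -1 if the open failed */
}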
That's why I'm looking for a way of having large mmap'able files. Are you
saying that ALL Intel CPUs, including the PIII, can only address 4 GB? If
so, I probably need to look at other architectures or solve this fd
problem.
Michael