Does anyone remember when linux smbfs (or cifs) gained large file
(>2GB, >4GB) file support?
The Linux CIFS client implementation has always had large file support (cifs.ko
was first added to the kernel in 2.5.42), although of course some old servers do
not support large (>2GB) files.
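A minimal probe sketch (not from the original reply; the path
/mnt/share/bigfile.test is just an example) for checking whether a given
mount and server combination handles files past the 4GB mark: try to extend
a file beyond 4 GiB and see whether the kernel refuses.

/* Probe large file support on a mount by extending a file past 4 GiB.
 * Assumes an LFS-capable kernel and glibc; the path is an example only. */
#define _FILE_OFFSET_BITS 64        /* make off_t 64-bit on 32-bit glibc */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/share/bigfile.test";   /* example mount */
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Seek just past 4 GiB and write one byte (creates a sparse file). */
    if (lseek(fd, (off_t)4 * 1024 * 1024 * 1024, SEEK_SET) == (off_t)-1 ||
        write(fd, "x", 1) != 1)
        fprintf(stderr, "no large file support here: %s\n", strerror(errno));
    else
        printf("files >4GB look fine on this mount\n");
    close(fd);
    return 0;
}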
Does anyone remember when linux smbfs (or cifs) gained large file
(>2GB, >4GB) file support?
At least most 2.2.x didn't have it (were there 2.2 smbfs LFS patches?)
Was 2.4 the first kernel to support large files on smbfs?
Mike Houston wrote:
> On Thu, 08 Sep 2005 21:27:42 +0200
> Andreas Baer <[EMAIL PROTECTED]> wrote:
>
>
>
>>I think it's 2TB for the file size and 2e73 for the file system, but
>>I don't understand the second reference and the part about the
>>CONFIG_LBD. What exactly is the CONFIG_LBD option?
On Thu, 08 Sep 2005 21:27:42 +0200
Andreas Baer <[EMAIL PROTECTED]> wrote:
> I think it's 2TB for the file size and 2e73 for the file system, but
> I don't understand the second reference and the part about the
> CONFIG_LBD. What exactly is the CONFIG_LBD option?
This is "Support for Large B
I have a question about the Large File Support using Linux and glibc 2.3
on a 32-bit machine. What's the correct limit for the file size and the
file system using LFS (just for the kernel, not to mention filesystem
limits etc)?
I found two references:
"The 2.6 kernel imposes its own
Followup to: <[EMAIL PROTECTED]>
By author: Felix von Leitner <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> I can't copy a file larger than 2 gigs to my vfat partition.
> What gives? 2.4.4-ac5 kernel. My cp copies 2 gigs and then aborts.
>
> $ echo foo >> file_on_vfat_partition
>
I can't copy a file larger than 2 gigs to my vfat partition.
What gives? 2.4.4-ac5 kernel. My cp copies 2 gigs and then aborts.
$ echo foo >> file_on_vfat_partition
causes the shell to become unresponsive and consume lots of CPU time.
Felix
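A quick way to tell an application-side LFS limit from a filesystem limit
(a sketch, not from the thread; build it once normally and once with
-D_FILE_OFFSET_BITS=64): a binary without large file support is refused
with EFBIG as soon as the file would grow past 2^31 - 1 bytes on any
filesystem, while FAT's own 32-bit size field only bites near 4 GiB.

/* Try to grow a file past the 2 GiB mark and report what happens.
 * Without LFS (32-bit off_t, no O_LARGEFILE) the write fails with EFBIG;
 * with -D_FILE_OFFSET_BITS=64 it succeeds until a filesystem limit hits. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "bigfile.test";
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* 0x7FFFFFFF is the largest offset a 32-bit off_t can hold. */
    if (lseek(fd, (off_t)0x7FFFFFFF, SEEK_SET) == (off_t)-1) {
        perror("lseek");
        return 1;
    }
    /* One byte here pushes the file size past 2^31 - 1. */
    if (write(fd, "x", 1) != 1)
        fprintf(stderr, "write across 2GB failed: %s\n", strerror(errno));
    else
        puts("crossed 2GB; any failure after this is a filesystem limit");
    close(fd);
    return 0;
}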
On Thu, 16 Nov 2000, Andreas S. Kerber wrote:
> We need to handle files which are about 10GB large.
> Is there any way to do this with Linux? Some pointers would be nice.
Install a kernel / glibc that handles LFS. Search for LFS on Freshmeat and
you'll end up with the right patch.
You'll probably
> Andreas Jaeger writes:
> Andreas S Kerber writes:
>> We need to handle files which are about 10GB large.
>> Is there any way to do this with Linux? Some pointers would be nice.
> Yes, with recent 2.4 kernels or a patched 2.2 kernel - and a
> recompiled glibc. For details check:
Upp
> Andreas S Kerber writes:
> We need to handle files which are about 10GB large.
> Is there any way to do this with Linux? Some pointers would be nice.
Yes, with recent 2.4 kernels or a patched 2.2 kernel - and a
recompiled glibc. For details check:
http://www.suse.de/~aj/linux-lfs.html
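For the userspace side, the usual recipe (a sketch, assuming an LFS-capable
kernel and glibc as described on that page) is to build with
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64, which makes off_t 64-bit so
plain open/lseek/stat handle a ~10GB file without any special *64() calls:

/* Print the size of a possibly >4GB file.  Compile with:
 *   gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -o fsize fsize.c */
#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    /* st_size is 64-bit in an LFS build, so cast and print as long long. */
    printf("%s: %lld bytes (sizeof(off_t) = %lu)\n",
           argv[1], (long long)st.st_size, (unsigned long)sizeof(off_t));
    return 0;
}

Without those defines, stat() on a file larger than 2GB fails with EOVERFLOW
instead of silently returning a truncated size.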
We need to handle files which are about 10GB large.
Is there any way to do this with Linux? Some pointers would be nice.
Andreas
> Ugh... yes, but not with an 80386, i486, Pentium, Pentium-MMX,
> 5x86, Crusoe, WinChip, K6, K6-2, or 6x86. Also not with XT disks
> or anything off the EISA, VLB, and MCA busses.
Lots of people are building terabyte-sized arrays on K6-type boxes. A PII
or Athlon is just overkill for the job.
Al
Rik van Riel writes:
> On Mon, 4 Sep 2000, Stephen C. Tweedie wrote:
>> On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
>>> With all the talk about bugs and slowness on a 386/486/586
>>> -- does anyone think those platforms will have multi-T disks
>>> hooked up to them?
Note: no "68
On Mon, 4 Sep 2000, Stephen C. Tweedie wrote:
> On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
>
> > With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> > think those platforms will have multi-T disks hooked up to them?
>
> Yes. They are already doing it, a
Hi,
On Thu, Aug 31, 2000 at 05:59:09PM -0400, Richard B. Johnson wrote:
> Long long things, even if they work well, are not very nice on 32 bit
> machines. For the time being, I'd advise increasing cluster size rather
> than using 64 bit values.
Doesn't help, because we're talking about numbers
Hi,
On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Yes. They are already doing it, and the number of people trying is
growing rapidly. I
Hi,
On Fri, Sep 01, 2000 at 01:30:26PM +0300, Matti Aarnio wrote:
>
> Stephen, could you have a moment to look at the struct buffer_head {}
> alignment matters ? And possible configure time change to make the
> block number possibly a 'long long' variable ?
> Changing field order mi
Hi,
On Fri, Sep 01, 2000 at 12:09:23AM -0700, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
You might not find it just taki
From: Daniel Phillips <[EMAIL PROTECTED]>
Date: Fri, 01 Sep 2000 20:49:14 +0200
Curiously, this field is measured in 512 byte units, giving a 2TB Ext2
filesize limit. That's starting to look uncomfortably small - I can
easily imagine a single database file wanting to be big
> >
> > Tsk. Showing my age and ignorance, I guess. I was using the gcc -v trick back
> > at Auspex in '93. ...Guess the compiler driver has gotten smarter since.
> > You know how it goes- you do a trick once- you don't change it for years...
>
> According to the ChangeLog of the 2.7.2.3 compile
On Fri, Sep 01, 2000 at 12:01:39PM -0700, Matthew Jacob wrote:
> >
> > Or use --print-libgcc-file-name:
> >
> > `gcc <cflags> --print-libgcc-file-name`
> >
> > where <cflags> are the options normally used to compile code (ie, for example
> > on machines that optionally do not have a floating point unit, add
>
> Or use --print-libgcc-file-name:
>
> `gcc <cflags> --print-libgcc-file-name`
>
> where <cflags> are the options normally used to compile code (ie, for example
> on machines that optionally do not have a floating point unit, adding
> -msoft-float would select the libgcc.a that was compiled with -msoft-
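The point of that command (an illustrative userspace example, not kernel
source): on a 32-bit x86 target gcc cannot open-code a 64-by-64 division,
so it emits a call to __udivdi3, and that helper lives in the libgcc.a file
the command above prints. Anything doing such a division without linking
that archive ends up with an undefined symbol.

/* On i386, the division below compiles to a call to __udivdi3 from
 * libgcc.a (check with `gcc -S` on a 32-bit build). */
#include <stdio.h>

static unsigned long long div64(unsigned long long n, unsigned long long d)
{
    return n / d;               /* becomes __udivdi3() on 32-bit x86 */
}

int main(void)
{
    unsigned long long bytes = 10ULL * 1000 * 1000 * 1000;  /* ~10GB */

    printf("%llu bytes = %llu MB\n", bytes, div64(bytes, 1000000));
    return 0;
}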
On Fri, Sep 01, 2000 at 10:34:19AM -0700, Matthew Jacob wrote:
> > On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > > So what do you propose to use when a long long division is needed (after
> > > much thought and considering all alternatives etc.etc.) ?
> >
> > Link against libgc
Alexander Viro wrote:
>
> On Fri, 1 Sep 2000, Daniel Phillips wrote:
>
> > Linda Walsh wrote:
> > > It may not matter too too much, but blocks are being passed around as
> > > 'ints'. On the ia32 architecture, this implies a maximum of 512*2G->1T
> > > disk size. Probably don't need to worry a
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> The previous analysis was not quite right though (%cl is actually loaded,
> just %eax gets bogus input from the long long shift)
Perhaps, but it's sure not obvious:
bh->b_blocknr = (long)mp->pbm_bn +
(mp->pbm_
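The pattern being poked at looks roughly like this (a constructed example,
not the driver code above, which is cut off): the shift itself is done in
64 bits, but if the sum is stored back into a 32-bit block number the high
bits are silently lost.

/* Constructed example of the truncation hazard in 64-bit block number
 * arithmetic; fixed-width types so it behaves the same everywhere. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t base     = 0x100000;           /* 32-bit block base       */
    uint64_t byte_off = (uint64_t)1 << 42;  /* a 4 TiB byte offset     */

    uint32_t bad_blk  = base + (byte_off >> 9);  /* high bits dropped    */
    uint64_t good_blk = base + (byte_off >> 9);  /* full value preserved */

    printf("32-bit result: 0x%lx, 64-bit result: 0x%llx\n",
           (unsigned long)bad_blk, (unsigned long long)good_blk);
    return 0;
}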
> On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > So what do you propose to use when a long long division is needed (after
> > much thought and considering all alternatives etc.etc.) ?
>
> Link against libgcc, what else?
As also does anyone who does solaris drivers (x86 or sparc
> 41-bit filesize should be enough for the 32-bit machines.
>
> By the time people start using >41-bit files, don't you think
> they'll have an AMD-64, PPC or Merced CPU to handle the bigger
> file sizes?
Nope
Alan
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> So what do you propose to use when a long long division is needed (after
> much thought and considering all alternatives etc.etc.) ?
Link against libgcc, what else?
We should have been doing that since the beginning instead of
making
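For comparison, the other way out, and the convention in-tree kernel code
generally uses for 64-bit division, is the do_div() macro from
<asm/div64.h>: it divides a 64-bit value by a 32-bit divisor in place and
returns the remainder, so no libgcc call is needed. A plain userspace mimic
of that contract (illustration only, not the kernel macro itself):

/* Userspace mimic of the kernel's do_div() contract: *n becomes the
 * quotient, the 32-bit remainder is returned.  Illustration only. */
#include <stdint.h>
#include <stdio.h>

static uint32_t do_div_mimic(uint64_t *n, uint32_t base)
{
    uint32_t rem = (uint32_t)(*n % base);

    *n /= base;                 /* the real macro open-codes this on i386 */
    return rem;
}

int main(void)
{
    uint64_t bytes = 10ULL * 1000 * 1000 * 1000;    /* a "10GB" file */
    uint32_t rem   = do_div_mimic(&bytes, 512);

    printf("%llu sectors, %u bytes left over\n",
           (unsigned long long)bytes, rem);
    return 0;
}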
On Fri, Sep 01, 2000 at 12:09:23AM -0700, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
That's not the issue. The issue is t
On Fri, 1 Sep 2000, Daniel Phillips wrote:
> Linda Walsh wrote:
> > It may not matter too too much, but blocks are being passed around as
> > 'ints'. On the ia32 architecture, this implies a maximum of 512*2G->1T
> > disk size. Probably don't need to worry about this today, but in a few
> > y
On Fri, 1 Sep 2000, Alan Cox wrote:
> > With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> > think those platforms will have multi-T disks hooked up to them?
>
> Yes. The poor handling of 64-bit numbers hasn't gone away on
> PentiumII or Athlon as far as I can tell.
41-bit
On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Probably not.
...
> If you changed all the block number definitions to use 'block_nr_
> What I'd like to add is: while we're at it, how about losing the 512
> byte magic multiplier and go with the filesystem block size? That way
> Ext2 file size automatically goes up by a factor of 8 every time we
> manage to double the filesystem block size (blocksize*2 and triple
> indirect => 2
Linda Walsh wrote:
> It may not matter too too much, but blocks are being passed around as
> 'ints'. On the ia32 architecture, this implies a maximum of 512*2G->1T
> disk size. Probably don't need to worry about this today, but in a few
> years? Should we be changing the internal interfaces to
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Yes. The poor handling of 64-bit numbers hasn't gone away on PentiumII or Athlon
as far as I can tell.
With all the talk about bugs and slowness on a 386/486/586 -- does anyone
think those platforms will have multi-T disks hooked up to them?
Now bugs in the compiler are a problem, but at some point in the future, one
would hope we could move to a compiler that can handle division w/no
problems.
On Fri, 1 Sep 2000, Matti Aarnio wrote:
> On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > > ( Not 'unsigned long long' )
> >
> > The shift on pbm_offset operates on long long.
>
> Uh, somehow I thought the reference was about bh->b_blocknr;
> Ok, never mind.
>
>
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > ( Not 'unsigned long long' )
>
> The shift on pbm_offset operates on long long.
Uh, somehow I thought the reference was about bh->b_blocknr;
Ok, never mind.
> The previous analysis was not quite right though (
On Fri, Sep 01, 2000 at 04:01:43PM +0300, Matti Aarnio wrote:
> On Fri, Sep 01, 2000 at 02:44:04PM +0200, Andi Kleen wrote:
> > > To my knowledge it's only been speed related issues, not
> > > correctness issues, that have been the cause for the
> > > fear and loathing of long long.
> >
> > There
On Fri, Sep 01, 2000 at 02:44:04PM +0200, Andi Kleen wrote:
> > To my knowledge it's only been speed related issues, not
> > correctness issues, that have been the cause for the
> > fear and loathing of long long.
>
> There are several parts of XFS which do not compile correctly with gcc
> 2.95.2,
On Thu, Aug 31, 2000 at 09:50:35PM -0700, Richard Henderson wrote:
> On Fri, Sep 01, 2000 at 12:16:38AM +0300, Matti Aarnio wrote:
> > Also (I recall) because GCC's 'long long' related operations
> > and optimizations have been buggy in the past, and there is no
> > sufficient experience t
On Fri, 1 Sep 2000, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
True for DOS.
On Linux, most file operations are done in RAM
Stephen, could you have a moment to look at the struct buffer_head {}
alignment matters ? And possible configure time change to make the
block number possibly a 'long long' variable ?
Changing field order might be doable now, while I definitely think that
changing block number vari
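A sketch of what that configure-time switch could look like (the names
CONFIG_LARGE_BLOCKNR and block_nr_t are made up here for illustration;
this is not a proposed patch):

/* Sketch of a build-time selectable block number width.  The option name
 * CONFIG_LARGE_BLOCKNR is invented for this example. */
#include <stdio.h>

#ifdef CONFIG_LARGE_BLOCKNR
typedef unsigned long long block_nr_t;   /* 64-bit block numbers */
#else
typedef unsigned long block_nr_t;        /* 32 bits on ia32, as today */
#endif

struct buffer_head_sketch {
    block_nr_t     b_blocknr;   /* block number, width chosen at build time */
    unsigned short b_size;      /* block size */
    /* ... the rest of the real struct is omitted ... */
};

int main(void)
{
    /* 4-byte block numbers in 512-byte units stop at 2^32 * 512 = 2 TiB;
     * 8-byte ones push the limit far beyond current hardware. */
    printf("sizeof(block_nr_t) = %lu\n", (unsigned long)sizeof(block_nr_t));
    return 0;
}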
On Fri, Sep 01, 2000 at 12:16:38AM +0300, Matti Aarnio wrote:
> Also (I recall) because GCC's 'long long' related operations
> and optimizations have been buggy in the past, and there is no
> sufficient experience to convince him that they work now better
> with the recommended
> And the below is what percentage of time doing disk i/o?
But most file operations don't do physical IO.
> > it again! It doesn't scale well. The long long code is nearly 10 times
> > slower! You can do `gcc -S -o xxx name.c` and see why.
It's silly to talk about unoptimized code. And to spur
And the below is what percentage of time doing disk i/o?
> Just put this in a loop and time it. Change SIZE to long long, and do
> it again! It doesn't scale well. The long long code is nearly 10 times
> slower! You can do `gcc -S -o xxx name.c` and see why.
>
>
> #define SIZE long
>
> SIZE
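The quoted test program is cut off above; a reconstruction of that kind of
micro-benchmark (mine, not the original) looks like this. Build it as-is
and again with -DSIZE='long long'; on a 32-bit x86 build the second version
calls __divdi3 for every division, and `gcc -S` shows the difference the
poster is referring to.

/* Micro-benchmark: time repeated division for SIZE = long vs long long. */
#include <stdio.h>
#include <time.h>

#ifndef SIZE
#define SIZE long
#endif

int main(void)
{
    volatile SIZE n = 1000000007;    /* volatile keeps the loop honest */
    volatile SIZE d = 37;
    unsigned SIZE acc = 0;
    long i;
    clock_t t0 = clock();

    for (i = 0; i < 50 * 1000 * 1000; i++)
        acc += n / d;                /* the division being measured */

    printf("%.2f seconds (checksum %lu)\n",
           (double)(clock() - t0) / CLOCKS_PER_SEC, (unsigned long)acc);
    return 0;
}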
On Thu, 31 Aug 2000, Linda Walsh wrote:
> > It is probably from reasoning of:
> >
> > "there is really no point in it, as at 32bit systems
> > int and long are same size, thus same limit comes
> > with both types."
> >
> > At 64-bit machines ther
On Thu, Aug 31, 2000 at 01:46:36PM -0700, Linda Walsh wrote:
> > It is probably from reasoning of:
> >
> > "there is really no point in it, as at 32bit systems
> > int and long are same size, thus same limit comes
> > with both types."
> >
> > At
> It is probably from reasoning of:
>
> "there is really no point in it, as at 32bit systems
>int and long are same size, thus same limit comes
>with both types."
>
> On 64-bit machines there is, of course, a definite difference.
---
> Some underlying block device subsystems can address that
> currently, some others have inherent 512 byte "page_size"
> with signed indexes... I think SCSI is in the first camp,
> while IDE is in second. (And Ingo has assured us that RAID
> code should handle thi
On Thu, Aug 31, 2000 at 01:13:09PM -0700, Linda Walsh wrote:
> Ooops, the time frame is closer to today on part of this.
> While it may be a while before we hit the 1T limit on a single device,
> things like readpage do so based off the inode -- which on a metadisk
> could have a filesize much l
Ooops, the time frame is closer to today on part of this. While it may
be a while before we hit the 1T limit on a single device, things like
readpage do so based off the inode -- which on a metadisk could have a
filesize much larger than current physical device limits. So it seems
that at leas
It may not matter too too much, but blocks are being passed around as
'ints'. On the ia32 architecture, this implies a maximum of 512*2G->1T
disk size. Probably don't need to worry about this today, but in a few
years? Should we be changing the internal interfaces to use a long (or
a long unsig
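For the record, the arithmetic behind that "512*2G -> 1T" figure (a quick
check, assuming signed 32-bit ints and 512-byte units):

/* 2^31 block numbers (a signed 32-bit int) times 512-byte units = 1 TiB. */
#include <stdio.h>

int main(void)
{
    unsigned long long blocks = 1ULL << 31;        /* 2G block numbers */
    unsigned long long bytes  = blocks * 512;      /* = 2^40 = 1 TiB   */

    printf("%llu bytes = %llu GiB\n", bytes, bytes >> 30);
    return 0;
}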