On 10/5/11 12:54 PM, Richard W.M. Jones wrote:
> On Wed, Oct 05, 2011 at 09:58:59AM -0500, Eric Sandeen wrote:
>> right; for large ext4 fs use (or testing), try
>>
>> # mkfs.ext4 -E lazy_itable_init=1 /dev/blah
>>
>> this will cause it to skip inode table initialization, and speed up
>> mkfs a LOT
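A minimal sketch expanding on the tip above (/dev/blah is the placeholder
device from the quoted mail; lazy_journal_init is an assumption about newer
mke2fs versions and may not be available everywhere):

  # mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/blah

With lazy init, mke2fs skips zeroing the inode tables and the kernel's
ext4lazyinit thread finishes that work in the background after the first
mount.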
On 10/05/2011 05:42 PM, Eric Sandeen wrote:
>> right; for large ext4 fs use (or testing), try
>>
>> # mkfs.ext4 -E lazy_itable_init=1 /dev/blah
>>
>> this will cause it to skip inode table initialization, and speed up mkfs a
>> LOT.
>> It'll also keep sparse test images smaller.
>>
>> IMHO this s
On Wed, Oct 05, 2011 at 10:42:37AM -0500, Eric Sandeen wrote:
> On 10/5/11 9:58 AM, Eric Sandeen wrote:
> > On 10/4/11 6:53 PM, Ric Wheeler wrote:
>
> ...
>
> >> Note that ext4 has a new feature that allows inodes to be initialized in the
> >> background, so you will see much quicker mkfs.
On Wed, Oct 05, 2011 at 09:58:59AM -0500, Eric Sandeen wrote:
> right; for large ext4 fs use (or testing), try
>
> # mkfs.ext4 -E lazy_itable_init=1 /dev/blah
>
> this will cause it to skip inode table initialization, and speed up
> mkfs a LOT. It'll also keep sparse test images smaller.
>
> IMH
On 10/5/11 9:58 AM, Eric Sandeen wrote:
> On 10/4/11 6:53 PM, Ric Wheeler wrote:
...
>> Note that ext4 has a new feature that allows inodes to be initialized in the
>> background, so you will see much quicker mkfs.ext4 times as well :)
>
> right; for large ext4 fs use (or testing), try
>
> #
On 10/4/11 6:53 PM, Ric Wheeler wrote:
> On 10/04/2011 07:19 PM, Przemek Klosowski wrote:
>> On 10/03/2011 06:33 PM, Eric Sandeen wrote:
>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
> testing something more real-world (20T
On 10/05/2011 04:01 AM, Farkas Levente wrote:
> On 10/05/2011 12:47 AM, Richard W.M. Jones wrote:
>> On Tue, Oct 04, 2011 at 11:38:18PM +0200, Farkas Levente wrote:
>>> On 10/04/2011 05:30 PM, Eric Sandeen wrote:
>> XFS has been proven at this scale on Linux for a very long time, is all.
>
On 10/05/2011 01:19 AM, Przemek Klosowski wrote:
> On 10/03/2011 06:33 PM, Eric Sandeen wrote:
>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
testing something more real-world (20T ... 500T?) might still be
interesting.
On 10/05/2011 12:47 AM, Richard W.M. Jones wrote:
> On Tue, Oct 04, 2011 at 11:38:18PM +0200, Farkas Levente wrote:
>> On 10/04/2011 05:30 PM, Eric Sandeen wrote:
> XFS has been proven at this scale on Linux for a very long time, is all.
then why does RH NOT support it on 32-bit? there'r
On 10/04/2011 07:19 PM, Przemek Klosowski wrote:
> On 10/03/2011 06:33 PM, Eric Sandeen wrote:
>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
testing something more real-world (20T ... 500T?) might still be
interesting.
On 10/03/2011 06:33 PM, Eric Sandeen wrote:
> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>> testing something more real-world (20T ... 500T?) might still be
>>> interesting.
>>
>> Here's my test script:
>>
>> qemu-img create -
On Tue, Oct 04, 2011 at 11:38:18PM +0200, Farkas Levente wrote:
> On 10/04/2011 05:30 PM, Eric Sandeen wrote:
> >>> XFS has been proven at this scale on Linux for a very long time, is all.
> >>
> >> then why does RH NOT support it on 32-bit? there are still systems that
> >> have to run on 32-bit
On 10/04/2011 05:30 PM, Eric Sandeen wrote:
>>> XFS has been proven at this scale on Linux for a very long time, is all.
>>
>> then why does RH NOT support it on 32-bit? there are still systems that
>> have to run on 32-bit :-(
>
> 32-bit machines have a 32-bit index into the page cache; on x86,
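The quoted explanation is cut off above; as a rough sketch of the usual
arithmetic, assuming 4 KiB pages and a 32-bit page-cache index (not taken
from the thread):

  # echo "$(( (1 << 32) * 4096 / 2**40 )) TiB"
  16 TiB

i.e. 2^32 pages * 4 KiB per page = 16 TiB is the most a 32-bit kernel can
address through the page cache, which is why very large filesystems are a
problem on 32-bit.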
On 10/4/11 2:09 AM, Farkas Levente wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> On 10/3/11 5:53 PM, Farkas Levente wrote:
>>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
On 10/04/2011 03:12 AM, Farkas Levente wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> Large filesystem support for ext4 has languished upstream for a very
>> long time, and few in the community seemed terribly interested to test it,
>> either.
> why? that's what I simply do not understand!
On Tue, Oct 4, 2011 at 3:09 AM, Farkas Levente wrote:
> On 10/04/2011 01:03 AM, Eric Sandeen wrote:
>> On 10/3/11 5:53 PM, Farkas Levente wrote:
>>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric S
On 10/04/2011 01:03 AM, Eric Sandeen wrote:
> Large filesystem support for ext4 has languished upstream for a very
> long time, and few in the community seemed terribly interested to test it,
> either.
why? that's what I simply do not understand!?...
--
Levente
On 10/04/2011 01:03 AM, Eric Sandeen wrote:
> On 10/3/11 5:53 PM, Farkas Levente wrote:
>> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
I wasn't able to give the VM enough memory
100T seems to work for light use.
I can create the filesystem, mount it, write files and directories and
read them back, and fsck doesn't report any problems.
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        99T  129M   94T   1% /sysroot
Linux (none) 3.1.0-0.rc6.git0.3.fc16.x86_
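A rough sketch of that kind of smoke test (/dev/vda1 and /sysroot come from
the quoted df output; the test data is illustrative):

  mount /dev/vda1 /sysroot
  cp -a /etc /sysroot/etc-copy && diff -r /etc /sysroot/etc-copy
  umount /sysroot
  e2fsck -fn /dev/vda1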
On Mon, Oct 03, 2011 at 05:33:47PM -0500, Eric Sandeen wrote:
> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> > At 100T it doesn't run out of memory, but the man behind the curtain
> > starts to show. The underlying qcow2 file grows to several gigs and I
> > had to kill it. I need to play with
On 10/3/11 5:53 PM, Farkas Levente wrote:
> On 10/04/2011 12:33 AM, Eric Sandeen wrote:
>> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>>> I wasn't able to give the VM enough memory to make this succeed. I've
>>> only got 8G on th
On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
> testing something more real-world (20T ... 500T?) might still be interesting.
Here's my test script:
qemu-img create -f qcow2 test1.img 500T && \
guestfish -a test1.img \
memsize 4096 : run : \
part-disk /dev/vda gp
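The quoted script is cut off above; a minimal sketch of a complete run along
the same lines, assuming a GPT label and an ext4 target (the exact commands
Richard used are truncated in the listing):

  qemu-img create -f qcow2 test1.img 500T && \
  guestfish -a test1.img \
    memsize 4096 : run : \
    part-disk /dev/vda gpt : \
    mkfs ext4 /dev/vda1 : \
    mount /dev/vda1 / : \
    df-h

The qcow2 image stays sparse, so the host only needs room for the metadata
that mkfs actually writes.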
On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>> testing something more real-world (20T ... 500T?) might still be interesting.
>
> Here's my test script:
>
> qemu-img create -f qcow2 test1.img 500T && \
> guestfish -a test1.img
On 10/04/2011 12:33 AM, Eric Sandeen wrote:
> On 10/3/11 5:13 PM, Richard W.M. Jones wrote:
>> On Mon, Oct 03, 2011 at 04:11:28PM -0500, Eric Sandeen wrote:
>> I wasn't able to give the VM enough memory to make this succeed. I've
>> only got 8G on this laptop. Should I need large amounts of memor
On 10/3/11 4:05 PM, Richard W.M. Jones wrote:
> On Mon, Oct 03, 2011 at 03:10:43PM -0500, Eric Sandeen wrote:
>> On 10/3/11 1:13 PM, Richard W.M. Jones wrote:
>>> On Mon, Sep 26, 2011 at 02:51:33PM -0500, Eric Sandeen wrote:
Another little heads up - a newer snapshot is built in rawhide now.
>
On Mon, Oct 03, 2011 at 03:10:43PM -0500, Eric Sandeen wrote:
> On 10/3/11 1:13 PM, Richard W.M. Jones wrote:
> > On Mon, Sep 26, 2011 at 02:51:33PM -0500, Eric Sandeen wrote:
> >> Another little heads up - a newer snapshot is built in rawhide now.
> >>
> >> Anyone who wants to fiddle with large ex
On 10/3/11 1:13 PM, Richard W.M. Jones wrote:
> On Mon, Sep 26, 2011 at 02:51:33PM -0500, Eric Sandeen wrote:
>> Another little heads up - a newer snapshot is built in rawhide now.
>>
>> Anyone who wants to fiddle with large ext4 filesystems, have at
>> it please!
>
> Is there any background infor
On Mon, Sep 26, 2011 at 02:51:33PM -0500, Eric Sandeen wrote:
> Another little heads up - a newer snapshot is built in rawhide now.
>
> Anyone who wants to fiddle with large ext4 filesystems, have at
> it please!
Is there any background information about this change that I can read?
I created a 2**
On 8/9/11 8:15 AM, Eric Sandeen wrote:
> ... now, finally, with more 64-bit-ness!
>
> From Ted:
>
>> I've made the first WIP release of e2fsprogs 1.42. The primary purpose
>> is for people to test the 64-bit functionality and be confident that we
>> didn't introduce any 32-bit regressions.
>
>
On 08/10/2011 07:59 AM, Rahul Sundaram wrote:
> On 08/09/2011 06:45 PM, Eric Sandeen wrote:
>>> I've made the first WIP release of e2fsprogs 1.42. The primary purpose
>>> is for people to test the 64-bit functionality and be confident that we
>>> didn't introduce any 32-bit regressions.
>> So in t
On 08/09/2011 06:45 PM, Eric Sandeen wrote:
>> I've made the first WIP release of e2fsprogs 1.42. The primary purpose
>> is for people to test the 64-bit functionality and be confident that we
>> didn't introduce any 32-bit regressions.
> So in theory you can at least mkfs & mount a 16T fs and bey
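A minimal sketch of trying that out with a 1.42-era mke2fs (the file name and
size are illustrative; -O 64bit assumes the new feature-flag name, and the
host filesystem must allow a sparse file this large):

  truncate -s 20T big.img
  mkfs.ext4 -F -O 64bit -E lazy_itable_init=1 big.img
  mount -o loop big.img /mnt && df -h /mnt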