Hi,
On Wed, Nov 22, 2000 at 11:28:12PM +0100, Michael Marxmeier wrote:
>
> If the files get somewhat bigger (e.g. > 1G), having a bigger block
> size also greatly reduces the ext2 overhead. Especially fsync()
> used to be really bad on big files, but choosing a bigger block
> size changed a lot.
Alan Cox wrote:
> I see higher performance with 4K block sizes. I should see higher
> latency too but have never been able to measure it. Maybe it depends
> on the file system.
> It certainly depends on the nature of the requests.
On Tue, Nov 21, 2000 at 05:06:20PM -0700, Jeff V. Merkey wrote:
> Alan Cox wrote:
> > > Sirs,
> > > performing extensive tests on linux platform performance, optimized as
> > > database server, I got IMHO confusing results:
> > > in particular e2fs initialized to use 1024 block/fragment
>
> It's as though the disk drivers are optimized for this case (1024). I

The disk drivers are not, and they normally see merged runs of blocks so they
will see big chunks rather than 1K then 1K then 1K etc.

> behavior, but there is clearly some optimization relative to this size
> inherent in th
Hi,
I think I have a possible explanation for your observations:
1) 1024B Block size:
> User time (seconds): 69.32
> System time (seconds): 25.15
> Percent of CPU this job got: 54%
> Elapsed (wall clock) time (h:mm:ss or m:ss): 2:54.14
> Major (requiring I/O) page faults:
Sirs,
performing extensive tests on linux platform performance, optimized as
database server, I got IMHO confusing results:
in particular e2fs initialized to use 1024 block/fragment size showed
significant I/O gains over 4096 block/fragment size, while I expected the
opposite. I would appreciate s