> Pawel Jakub Dawidek wrote:
> > This is what I see on Solaris (hole is 4GB):
> > 
> >     # /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
> >     real       23.7
> >     # /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
> >     real       21.2
> > 
> >     # /usr/bin/time dd if=/ufs/hole of=/dev/null bs=4k
> >     real       31.4
> >     # /usr/bin/time dd if=/zfs/hole of=/dev/null bs=4k
> >     real     7:32.2
> 
> This is probably because the time to execute this on ZFS is dominated by
> per-system-call costs, rather than per-byte costs.  You are doing 32x more
> system calls with the 4k block size, and it is taking 20x longer.
> 
> That said, I could be wrong, and yowtch, that's much slower than I'd like!
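
(To put numbers on that: over a 4GB hole, bs=128k means
4GiB / 128KiB = 32,768 reads, while bs=4k means 4GiB / 4KiB =
1,048,576 reads, i.e. 32 times as many calls, and the ZFS elapsed
time goes from 21.2s to about 452s, roughly 21 times longer.)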

You missed my earlier post where I showed that accessing a file
with a hole takes much longer than accessing a regular data file
for block sizes of 4k and below.  I will repeat the most dramatic
difference:

                      ZFS            UFS2
                Elapsed System  Elapsed System   (seconds)
md5 SPACY       210.01   77.46  337.51   25.54
md5 HOLEY       856.39  801.21   82.11   28.31

I used md5 because all but a couple of its syscalls are reads
of the file (with a 1K buffer); dd would make an equal number
of additional syscalls for writing.
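
To make that concrete, here is a minimal sketch of what the md5
run looks like to the kernel: nothing but sequential 1K read(2)
calls until EOF.  This is my own illustration, not the md5
source, and the digest arithmetic is left out since it is pure
user time.  A 10GB file read this way takes about 10.5 million
read() calls on either file system; the difference between SPACY
and HOLEY is in what each call has to do.

    /* Sketch: the read pattern of the md5 run, i.e. sequential 1K
     * reads until EOF, counting the read(2) calls.  Digest math
     * is omitted. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        char buf[1024];
        unsigned long long calls = 0, bytes = 0;
        ssize_t n;
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        if ((fd = open(argv[1], O_RDONLY)) == -1) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            calls++;
            bytes += (unsigned long long)n;
        }
        if (n == -1)
            perror("read");
        close(fd);
        printf("%llu read() calls, %llu bytes\n", calls, bytes);
        return 0;
    }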

For both file systems and both cases the file size is the same,
but SPACY has the full 10GB actually allocated while HOLEY was
created with "truncate -s 10G HOLEY", so it is one big hole.
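
In case anyone wants to repeat the test, the two files can be
set up roughly as below.  This is only a sketch: HOLEY matches
the truncate command above, but how SPACY was originally written
is my assumption; any method that writes real data for every
byte of the 10GB will do.

    /* Sketch: create a fully allocated SPACY and a sparse HOLEY of
     * the same 10GB size.  The SPACY loop is an assumption; HOLEY
     * is the equivalent of "truncate -s 10G HOLEY".  Needs a
     * 64-bit off_t (large file support). */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define FILESIZE (10ULL * 1024 * 1024 * 1024)   /* 10GB */

    int main(void)
    {
        char block[128 * 1024];
        unsigned long long off;
        int fd;

        /* SPACY: write non-zero data so every block is allocated. */
        memset(block, 0xa5, sizeof(block));
        if ((fd = open("SPACY", O_WRONLY | O_CREAT | O_TRUNC, 0644)) == -1) {
            perror("open SPACY");
            return 1;
        }
        for (off = 0; off < FILESIZE; off += sizeof(block)) {
            if (write(fd, block, sizeof(block)) != (ssize_t)sizeof(block)) {
                perror("write SPACY");
                return 1;
            }
        }
        close(fd);

        /* HOLEY: same size, nothing written, so no blocks allocated. */
        if ((fd = open("HOLEY", O_WRONLY | O_CREAT | O_TRUNC, 0644)) == -1) {
            perror("open HOLEY");
            return 1;
        }
        if (ftruncate(fd, (off_t)FILESIZE) == -1) {
            perror("ftruncate HOLEY");
            return 1;
        }
        close(fd);
        return 0;
    }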

Look at the system times.  On UFS2 the system time is a little
higher for the HOLEY case, because the kernel has to clear each
block it returns.  On ZFS it is over 10 times higher!  Something
is very wrong.
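
If anyone wants to confirm where that CPU time goes without
relying on /usr/bin/time, something like the sketch below wraps
the same 1K read loop in getrusage(2) calls and prints the user
and system components separately.  This is my own harness, not
part of the original runs.

    /* Sketch: user vs. system CPU time for a sequential 1K read
     * pass, reported via getrusage(2). */
    #include <sys/resource.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static double seconds(struct timeval tv)
    {
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(int argc, char **argv)
    {
        char buf[1024];
        struct rusage before, after;
        ssize_t n;
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        if ((fd = open(argv[1], O_RDONLY)) == -1) {
            perror("open");
            return 1;
        }
        getrusage(RUSAGE_SELF, &before);
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            ;                           /* just drain the file */
        getrusage(RUSAGE_SELF, &after);
        close(fd);

        printf("user   %.2f s\n",
            seconds(after.ru_utime) - seconds(before.ru_utime));
        printf("system %.2f s\n",
            seconds(after.ru_stime) - seconds(before.ru_stime));
        return 0;
    }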
