Mike Gerdts <mger...@gmail.com> wrote:

> Using cpio's -C option seems to not change the behavior for this bug,
> but I did see a performance difference with the case where I hadn't
> modified the zfs caching behavior. That is, the performance of the
> tmpfs backed vdisk more than doubled with "cpio -o -C $((1024 * 1024))
> >/dev/null". At this point cpio was spending roughly 13% usr and 87%
> sys.
As mentioned before, much of cpio's user CPU time is spent creating cpio
archive headers, or is caused by the fact that the cpio archive format
copies file content to unaligned archive offsets, while the "tar" archive
format starts each new file at an offset that is a multiple of 512 bytes.
This forces a lot of unneeded copying of file data. You can of course
slightly modify the parameters even with cpio.

I am not sure what you mean by "13% usr and 87% sys": star typically
spends about 6% of the wall-clock time in user+sys CPU, where the user
CPU time is typically only 1.5% of the system CPU time.

In the "cached" case, it is obviously ZFS that is responsible for the
slowdown, regardless of what cpio did in the other case.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de   (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de                (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
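P.S.: To make the alignment point concrete: tar pads every archive member
to the next 512-byte boundary, so file data can always be copied in whole,
aligned blocks, while a cpio member's data may start at any byte offset and
has to be shifted. A minimal sketch of the padding computation, in plain sh
(the size value here is just an illustrative assumption):

    size=1234                            # payload size of one archive member
    pad=$(( (512 - size % 512) % 512 ))  # zero bytes tar appends as padding
    echo $pad                            # prints 302: 1234 + 302 = 3 * 512

Note that cpio's -C option only changes the I/O block size of the archive
stream as a whole; it does not introduce any per-member alignment.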