On Tue, Oct 09, 2012 at 05:59:06PM -0700, Chuck Silvers wrote:
> > [...]
> > with a 'cat big_file > /dev/null'
> > writes are still limited to 64k ...
> 
> I would hope that cat'ing a file to /dev/null wouldn't result in any writes.
> :-)
> I assume you meant 'cat big_file > other_file' ?
I use:
dd if=/dev/zero of=bigfile bs=1g count=7

> if so, then the reason for the 64k writes would be this block of code in
> ffs_write():
> 
> 	if (!async && oldoff >> 16 != uio->uio_offset >> 16) {
> 		mutex_enter(vp->v_interlock);
> 		error = VOP_PUTPAGES(vp, (oldoff >> 16) << 16,
> 		    (uio->uio_offset >> 16) << 16,
> 		    PGO_CLEANIT | PGO_JOURNALLOCKED | PGO_LAZY);
> 		if (error)
> 			break;
> 	}

that's it. I did s/16/32/g in the code above and now I get 128k writes.

> there's a similar block in many file systems at this point.  when I wrote
> that I intended to replace it with something better before very long, but
> I just never got back to it, alas.

I'm not sure what the best way to handle this would be.
If we assume that maxphys is a power of 2, we could use a maxphys-derived
mask here. Otherwise, maybe we should compute and cache the largest
power-of-2 value below maxphys in v_mount, as is done in
vfs_vnops.c:vn_ra_allocctx() (actually, this would remove ra_iochunk as we
could use the mount point's value)

-- 
Manuel Bouyer <bou...@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--
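
For illustration, a rough standalone sketch of the mask-based approach
suggested above (assumptions: an "iochunk" value precomputed as the largest
power of 2 not exceeding maxphys, much like vfs_vnops.c:vn_ra_allocctx()
does for ra_iochunk; the rounddown_pow2/should_flush/iochunk names are made
up for this example, and the !async test and journal-lock flags from the
real ffs_write() block are left out):

/*
 * Standalone sketch (not NetBSD kernel code): replace the hard-coded
 * 64k (1 << 16) flush boundary from ffs_write() with a boundary derived
 * from maxphys, rounded down to a power of 2.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Largest power of 2 that is <= n (n must be > 0). */
static uint64_t
rounddown_pow2(uint64_t n)
{
	uint64_t p = 1;

	while (p <= n / 2)
		p <<= 1;
	return p;
}

/*
 * Hypothetical replacement for the quoted test: flush when the write
 * crosses an iochunk boundary instead of a 64k boundary.  Returns true
 * when a VOP_PUTPAGES-style flush of [start, end) would be issued.
 */
static bool
should_flush(uint64_t oldoff, uint64_t newoff, uint64_t iochunk,
    uint64_t *start, uint64_t *end)
{
	uint64_t mask = iochunk - 1;	/* iochunk is a power of 2 */

	if ((oldoff & ~mask) == (newoff & ~mask))
		return false;
	*start = oldoff & ~mask;
	*end = newoff & ~mask;
	return true;
}

int
main(void)
{
	uint64_t maxphys = 192 * 1024;	/* example value, not a power of 2 */
	uint64_t iochunk = rounddown_pow2(maxphys);
	uint64_t off = 0, start, end;
	int i;

	printf("maxphys %llu -> iochunk %llu\n",
	    (unsigned long long)maxphys, (unsigned long long)iochunk);

	/* Simulate a run of sequential 32k writes and show the flushes. */
	for (i = 0; i < 16; i++) {
		uint64_t oldoff = off;

		off += 32 * 1024;
		if (should_flush(oldoff, off, iochunk, &start, &end))
			printf("write ending at %llu: flush [%llu, %llu)\n",
			    (unsigned long long)off,
			    (unsigned long long)start,
			    (unsigned long long)end);
	}
	return 0;
}

In the kernel the rounddown would presumably be computed once (at mount
time, or wherever the per-mount maxphys value ends up living), so that
ffs_write() and the similar blocks in other file systems would only need
the mask comparison and the adjusted VOP_PUTPAGES range.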