2012-08-03 17:18, Justin Stringfellow wrote:
While this isn't causing me any problems, I'm curious as to why this is
happening:
$ dd if=/dev/random of=ob bs=128k count=1 && while true
do
ls -s ob
sleep 1
done
0+1 records in
0+1 records out
1 ob
...
4 ob
...
I was expecting the '1', since this is a ZFS filesystem with recordsize=128k. I'm not sure
I understand the '4', or why it shows up ~30s later. Can anyone distribute clue
in my direction?
s10u10, running 144488-06 KU. zfs is v4, pool is v22.
I think that, for the cleanliness of the experiment, you should also include
a "sync" after the dd, to actually commit your file to the pool.
What is the pool's redundancy setting?
I am not sure exactly what "ls -s" accounts for in a file's FS-block
usage, but I wonder if it might include metadata (the pieces of the
block pointer tree specific to the file). Also, check whether the
disk usage reported by "du -k ob" varies similarly, for the fun of it?
//Jim