Roch wrote:
Philip Brown writes:
> but there may not be filesystem space for double the data.
> Sounds like there is a need for a zfs-defragement-file utility perhaps?
>
> Or if you want to be politically cagey about naming choice, perhaps,
>
> zfs-seq-read-optimize-file ? :-)
>
Possibly, or maybe using fcntl()?
Now the goal is to take a file with scattered blocks and reorder
them into contiguous chunks. So this is contingent on the
existence of regions of contiguous free disk space. This
will get more difficult as the storage gets close to full.
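
For illustration, a minimal sketch of how such a utility could work
from userland on a copy-on-write filesystem like ZFS: rewrite the file
sequentially into a temporary file and rename it over the original, so
the allocator can place the fresh blocks contiguously when free space
allows. The function name, temp-file suffix, and chunk size below are
all assumptions for the sketch, not an existing ZFS or fcntl()
interface.

/* Hypothetical copy-based "defragment" sketch -- not an existing
 * ZFS utility or fcntl() command.  Ownership/mode preservation and
 * other production details are omitted. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int defrag_by_copy(const char *path)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.defrag", path);

    int in = open(path, O_RDONLY);
    if (in < 0) return -1;
    /* Create the copy on the same filesystem so rename() stays atomic. */
    int out = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (out < 0) { close(in); return -1; }

    char buf[128 * 1024];   /* matches ZFS's default 128K recordsize */
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) { n = -1; break; }
    }
    close(in);
    if (n < 0 || fsync(out) != 0) { close(out); unlink(tmp); return -1; }
    close(out);

    /* Atomically replace the fragmented original with the new copy. */
    return rename(tmp, path);
}

Since ZFS never overwrites blocks in place, a sequential rewrite along
these lines is roughly what any defragmenter would have to do under
the covers anyway.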
Quite so. It would be reasonable to require some minimum level of free
space on the filesystem before the operation can proceed.
But even with a relatively 'small' amount of free space, it should
still be possible; it will just take significantly longer.
Cf. any "defrag your hard drive" algorithm: the same algorithm, just
applied to a single file instead of a whole drive.
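
And a sketch of the minimum-free-space gate mentioned above, using
statvfs(); the 10% threshold is an assumed example, not a ZFS-defined
limit.

/* Refuse to start unless the filesystem has some fraction of its
 * capacity free.  The threshold is an assumption for illustration. */
#include <stdio.h>
#include <sys/statvfs.h>

int enough_free_space(const char *path, double min_free_fraction)
{
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0)
        return 0;
    /* f_bavail and f_blocks are both counted in f_frsize units,
     * so their ratio is the free fraction directly. */
    double free_frac = (double)vfs.f_bavail / (double)vfs.f_blocks;
    return free_frac >= min_free_fraction;
}

int main(int argc, char **argv)
{
    if (argc > 1 && !enough_free_space(argv[1], 0.10)) {
        fprintf(stderr, "refusing to defragment: less than 10%% free\n");
        return 1;
    }
    return 0;
}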