On 2016/10/21 13:47, Pete French wrote:
>> In the bad case, the metadata of every file will be placed at a random
>> place on the disk. ls needs to access the metadata of every file before
>> it can start output of the listing.
>
> Umm, are we not talking about an issue where the directory no longer
> contains any files? It used to have lots, now it has none.
>
>> I.e. in the bad case you will need tens of thousands of seeks over a
>> disk capable of only 72 seeks per second.
>
> Why does it need to seek all over the disc if there are no files (and
> hence no metadata, surely)?
>
> I am not bothered if a huge directory takes a while to list;
> that's something I am happy to deal with. What I don't like is
> that when it is back down to zero it still takes a long time
> to list. That doesn't make much sense.
Interesting. Is this somehow related to the old Unixy thing with directories, where the directory node would grow in size as you created more and more files or sub-directories (as you might expect), but it wouldn't shrink immediately if you simply deleted many files -- it would only shrink later, when you next created a new file in that directory?

This was a performance feature, IIRC: it avoided shrinking and re-growing directory nodes in quick succession for what was apparently a fairly common usage pattern of clearing out a directory and then refilling it.

I can't see how that would apply to ZFS though, as the CoW nature means there should be no benefit to not immediately adjusting the size of the directory node to fit the amount of contents.

Cheers,
Matthew