Hey

Is anyone here knowledgeable about disk (free space) fragmentation (and not
just opinionated ;))?

HFS+ is supposed to contain algorithms that limit file fragmentation, but
without a background process that moves files (or file blocks) around, it
cannot prevent free space fragmentation, only limit it. On a spinning disk
that can become a performance bottleneck (I presume the same theoretically
applies to SSDs too), and any process that requires contiguous files will
ultimately fail if those cannot be obtained and it doesn't take that aspect
into account, regardless of the underlying medium.

One way free disk space can become fragmented is by installing files in the
presence of a significant amount of temporary files, like a ${build.dir}.
Example: even without the QtWebEngine component, Qt5's build directory takes
up about 6 GB when built with LTO (an option that actually *decreases* the
installed footprint by a few percent). However, that same build directory
shrinks by about 70% after running afsctool on it (and it would shrink even
further if it weren't for a single static library that's over 3 GB...)
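
For anyone who wants to check this on their own tree, a minimal sketch
(assuming afsctool is on the PATH, with ${build.dir} standing in for the
actual path; -c compresses, -v reports the savings):

    du -sh ${build.dir}           # allocated size before: ~6 GB in the Qt5 case
    afsctool -c -v ${build.dir}   # compress the tree in place
    du -sh ${build.dir}           # du counts allocated blocks, so the ~70%
                                  # saving shows up here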

In some of my ports I've added a post-build block that runs afsctool (a
parallelised version of the tool that I developed) on ${build.dir}.
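
For illustration, a minimal sketch of what such a block can look like (not
the exact code from my ports, and without the parallelism flags of my
afsctool version):

    post-build {
        # compress the build tree in place once the build has finished;
        # -c compresses, -v prints per-file savings
        system "afsctool -c -v ${build.dir}"
    }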

The question I'd like to raise is what the effect of that operation would be
when done systematically. The idea is of course to reduce port disk space
usage before creating the destroot directory. However, afsctool compresses
copies of files, for safety, so it could actually be adding to fragmentation
(especially if run on multiple files in parallel).
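
To make that concern concrete: the compress-via-copy pattern boils down to
something like the following shell sketch (illustrative only, not afsctool's
actual implementation; compress_to is a hypothetical helper):

    tmp=$(mktemp "${f}.XXXXXX")   # the copy gets fresh blocks elsewhere on disk
    compress_to "$f" "$tmp"       # hypothetical helper doing the actual work
    mv "$tmp" "$f"                # swap; the original's blocks are freed,
                                  # leaving a hole behind

Run over many files in parallel, those fresh allocations interleave, and the
holes left behind end up scattered accordingly.
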
Any thoughts on this, regardless of whether free disk space fragmentation is a 
real-world issue or not?

Thanks,
René