We are preparing to run a test restore of a client file system with
nine million files using no-query restore. We are somewhat concerned
about the amount of server memory needed to list all the files and
sort them by location. How well or badly does this process behave if
the server starts swapping? The traditional in-memory sort algorithms
tend to have very large working set sizes, while some sort algorithms
designed for virtual memory environments scan large blocks of data
sequentially, resulting in much smaller working set sizes.
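
For illustration only (this is not how the server is known to do it),
here is a minimal external merge sort sketch in Python; the function
names and the chunk_size parameter are hypothetical. It shows why the
working set stays small: each run file is read strictly front-to-back
during the merge.

    import heapq
    import os
    import tempfile

    def _spill(sorted_chunk):
        """Write one sorted chunk to a temporary run file."""
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.writelines(rec + "\n" for rec in sorted_chunk)
        return path

    def external_sort(records, chunk_size=1_000_000):
        """Sort records too numerous to hold in RAM at once.

        Phase 1: read fixed-size chunks, sort each in memory, spill
        each to disk. Working set = one chunk.
        Phase 2: k-way merge of the run files; every run is consumed
        sequentially, so only one buffered page per run is resident.
        """
        runs, chunk = [], []
        for rec in records:
            chunk.append(rec)
            if len(chunk) >= chunk_size:
                runs.append(_spill(sorted(chunk)))
                chunk = []
        if chunk:
            runs.append(_spill(sorted(chunk)))

        files = [open(path) for path in runs]
        try:
            # heapq.merge reads each sorted run front-to-back,
            # so the merge phase generates only sequential I/O.
            for rec in heapq.merge(*files):
                yield rec.rstrip("\n")
        finally:
            for f in files:
                f.close()
            for path in runs:
                os.remove(path)

With chunk_size set to fit comfortably in real memory, the peak
resident set stays near one chunk no matter how many files are being
restored, whereas an in-memory sort of all nine million entries would
touch the entire array repeatedly and thrash badly once it exceeded
physical memory.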
