On 07/17/2013 06:13 PM, Calvin Morrison wrote:
I rely heavily on unioned directories (go plan9!) so mine tend to get
large when working with many large datasets.

Does anyone use large directories often?
Unices tend to avoid large directories because of scalability problems like the one you're trying to solve. Many programs and scripts are rarely tested on large directories, partly because such directories are rare: even the GNU coreutils test suite does not test against huge directories unless it is explicitly allowed to use tens of thousands of inodes, which it is at least once for every release candidate on every supported system.

[OT]
IIRC, Ext4 deliberately does not optimize for ridiculously huge directories, because the common library functions and syscalls get surprisingly slow on them anyway. But I haven't tested anything myself.
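If you want to see that slowdown for yourself, here is a minimal sketch (my own, not from this thread or from any coreutils test) that builds a directory with many entries and times a full listing; the function name and file count are arbitrary:

```python
# Rough benchmark sketch: populate a throwaway directory with many
# empty files, then time how long a full os.listdir() takes.
import os
import tempfile
import time

def time_listing(n_files):
    """Create n_files empty files in a temp directory and time listing them."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_files):
            # Empty files suffice; only the directory entries matter here.
            open(os.path.join(d, "f%06d" % i), "w").close()
        start = time.perf_counter()
        names = os.listdir(d)
        elapsed = time.perf_counter() - start
        return len(names), elapsed

count, secs = time_listing(5000)
print("%d entries listed in %.4fs" % (count, secs))
```

Rerunning with increasing file counts shows how listing time grows with directory size on whatever filesystem backs your temp directory.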

If you can design the interface so that performance improvements can be implemented under the hood, that would be terrific. But a dedicated utility for a function so specific that it will hardly ever be used in scripts has no place in moreutils. It would be nice to keep it around somewhere, though; package it with other file-tree-related utilities.
