Supporting large directories is fine. Adding new tools that most people will never need is not, and adding a flag to ls every time we think of a new use case is how GNU ended up with their mess. An optimized "ls -U" is supporting large directories. A tool to count their entries is a special-purpose requirement.
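
For reference, the fast path is tiny. Something like this untested sketch (not any existing tool's source) is all the unsorted case needs to be, with no stat() and no sorting:

	/* Sketch of an unsorted listing ("ls -U" style): print entries
	 * in readdir() order, no stat(), no sort. */
	#include <dirent.h>
	#include <stdio.h>

	int
	main(int argc, char *argv[])
	{
		const char *path = argc > 1 ? argv[1] : ".";
		DIR *d = opendir(path);
		struct dirent *e;

		if (!d) {
			perror(path);
			return 1;
		}
		while ((e = readdir(d)))
			puts(e->d_name);
		closedir(d);
		return 0;
	}

Nothing per-entry gets stat()ed or sorted, so it stays fast even on directories with hundreds of thousands of entries. That path already belongs in ls; there is no need for a separate tool on top of it.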
On Mon, Jul 22, 2013 at 2:44 PM, Calvin Morrison <mutanttur...@gmail.com> wrote:
> On 22 July 2013 17:41, Chris Down <ch...@regentmarkets.com> wrote:
>> On 22 July 2013 23:27, Calvin Morrison <mutanttur...@gmail.com> wrote:
>>> This set command is simple, but still takes a long time, because the shell
>>> spends a long time doing the globbing of the *
>>
>> In any case that it matters, you are doing filesystem structuring wrong.
>>
>
> Why? Why is it ridiculous to want to be able to support medium sized
> file directories, for example thousands of frames of a video, DNA
> sequencing files and others I often have are in large sets of files,
> and don't have any sub division that is logical other than numerically
> creating subdirectories.
>
> I think your thinking is wrong. In 2013, why can't we support a
> directory that responds reasonably fast with a large amount of
> directories?
>