On Thu, Jan 21, 2016 at 10:44 AM, Achim Gratz <strom...@nexgo.de> wrote:
> I am finding a large performance gap between plain "ls" and "ls -F" in a
> directory with many files on a network share (a NetApp disguised as NTFS, if
> that matters).  This has been there for quite a while; I've only just now
> realized what the reason was (I have "ls -F" as an alias for "ls" in my
> interactive shells).  In a directory with 1300 files, a plain "ls" completes
> in 0.3s, while "ls -F" requires about 95s.  Determining the file class seems
> to take around 70...90ms per file, which I can also confirm for
> directories with far fewer files.  What is involved in that determination
> that makes it take so long?

The overhead appears to be in checking for executable files: using
--file-type instead of -F, which behaves identically except that it does
not append the '*' marker to executables, reduces the time for ls in one
of my (local) large directories from over one second to 0.04 seconds.
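
The '*' marker means ls has to work out, for every single entry, whether
it is executable, and on Cygwin that check can involve reading the file's
ACL and, depending on the mount options, even the first bytes of the file
itself, so on a network share it plausibly costs a round trip (or more)
per file.  A rough way to compare the three variants yourself (the
directory path below is only a placeholder):

    # plain listing: one directory read, no per-file classification
    time ls /cygdrive/x/bigdir > /dev/null

    # -F classifies every entry, including the executable ('*') check
    time ls -F /cygdrive/x/bigdir > /dev/null

    # --file-type classifies too, but never appends '*', so it can
    # skip the per-file executable check
    time ls --file-type /cygdrive/x/bigdir > /dev/null

If you want to keep the classification in your interactive shells,
something like "alias ls='ls --file-type'" should get you most of what
-F gives at a fraction of the cost on network shares.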

-- 
William M. (Mike) Miller | Edison Design Group
william.m.mil...@gmail.com

--
Problem reports:       http://cygwin.com/problems.html
FAQ:                   http://cygwin.com/faq/
Documentation:         http://cygwin.com/docs.html
Unsubscribe info:      http://cygwin.com/ml/#unsubscribe-simple
