On Wed, 24 Apr 2013 19:07:05 +0100, Stroller wrote:

> > That only works on small systems. I have systems here where a 'du' on
> > /home would take hours and produce massive IO wait, because there's so
> > much data in there.
>
> Of course. Excuse me.
>
> My original idea was in respect of the previous respondent's desire to
> offer hard limits of a gigabyte - allocating each user a partition and
> running `du`, which returns immediately, on it.
I said "by the gigabyte", not "of a gigabyte" - a user could have
hundreds of them.

> I don't understand how a hard limit could be enforced if it's
> impractical to assess the size of used data.

Because the filesystem keeps track of the usage, just as it does for the
filesystem as a whole, which is why "df ." is so much faster than
"du .". ZFS does this too; it just doesn't have a concept of a soft
limit.

-- 
Neil Bothwick

Please rotate your phone 90 degrees and try again.
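To make the df/du distinction concrete: `df` reads usage counters the
filesystem maintains anyway (a single statfs call on the mount), while
`du` has to walk the tree and stat every file, which is what takes hours
on a big /home. A minimal sketch - the ZFS dataset name and the user in
the setquota line are hypothetical, and those two commands are left
commented out since they need root and the right filesystems:

```shell
#!/bin/sh
# Fast: one statfs() on the mount backing the current directory;
# the filesystem already keeps these totals up to date.
df -h .

# Slow on large trees: recurses into every directory and stats
# every file to add the sizes up.
du -sh .

# Hard limits are enforced from the same kind of per-user counters,
# not by scanning. E.g. on ZFS (dataset name is hypothetical):
#   zfs set quota=1G tank/home/alice
# Or with classic Linux quotas (user and limits are hypothetical;
# block soft/hard then inode soft/hard, in that order):
#   setquota -u alice 900000 1000000 0 0 /home
```

The point of the commented lines is the design choice under discussion:
because the filesystem maintains the counters itself, the hard limit can
be checked on every write without ever re-measuring the data.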