On Tuesday 03 March 2015 20:29:53 Richard Hector wrote:
> Hi all,
> 
> I have an issue with a (client's) large (13T) filesystem, that fills
> up every now and then and nobody's quite sure what's doing it.
> 
> I can run du, but that takes ages, and has a performance impact. df
> only gives the total for the filesystem, of course.
> 
> Currently I'm running find occasionally, with fprintf to record
> filename, mtime and size, then analysing it (by importing it into
> postgres, fwiw) for new large files - but ideally I'd like to zero in
> by frequently checking sizes of whole directories. Is there any way to
> do that, perhaps by triggering off write calls, cheaply?
> 
> I know that inotify/incron have their limitations when working with
> deep directory structures; I'd be interested to know of anything that
> can trigger on any writes to a particular filesystem.
> 
> If I could start again, I'd put LVM on the array and use multiple LVs
> to allow du to work at lower levels, but that's not really practical
> at this stage.
> 
> Any tips?
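[The snapshot approach described in the quoted message can be sketched roughly as follows; the directory and output paths here are placeholders, not the poster's actual setup:]

```shell
# Create a small sample tree so the sketch is self-contained; in practice
# this would be the real 13T mount point.
mkdir -p /tmp/du-demo
dd if=/dev/zero of=/tmp/du-demo/big.bin bs=1024 count=4 2>/dev/null

# find's -fprintf writes one record per file straight to a snapshot file:
# path, mtime (epoch seconds) and size in bytes, tab-separated.
find /tmp/du-demo -type f -fprintf /tmp/usage-snapshot.txt '%p\t%T@\t%s\n'

# The snapshot can then be diffed against a later one, or bulk-loaded into
# postgres; e.g. list the largest recorded files first:
sort -k3,3 -rn /tmp/usage-snapshot.txt | head
```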

Have a look at agedu:

http://www.chiark.greenend.org.uk/~sgtatham/agedu/

It computes disk usage like du, and the HTML report it produces can be browsed interactively, much like ncdu.

But, in addition, you can view the report from another machine: either serve it over HTTP with the built-in agedu web server, or copy the agedu.dat file to the other machine and start the web server there.
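
[A typical agedu workflow looks something like the following; the scanned path is a placeholder:]

```shell
# Scan the filesystem once; this writes an agedu.dat index in the
# current directory (can take a while on a large tree, like du).
agedu -s /srv/data

# Serve the interactive report over HTTP from the index; agedu prints
# the URL (and a password) to visit in a browser.
agedu -w

# Alternatively, copy agedu.dat to another machine and run "agedu -w"
# there, or emit a static HTML report:
agedu -H /srv/data > report.html
```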

As the report distinguishes new files from old ones, you can spot where the most recently written big files are.

Frederic


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/2983027.ioe2yqq...@fmarchal.edpnet.be