I found it: it was pflogd that was filling the root. The strange thing is
that /var is on a separate partition from /, and according to the man page:

FILES
     /var/run/pflogd.pid  Process ID of the currently running pflogd.
     /var/log/pflog       Default log file.

Can I chroot pflogd to /var, and is that a good idea? Is this an incident
due to a problem with my pf.conf, which is:

set skip on lo0
block in all
block out all
block in quick log on nfe0 proto tcp flags FUP/WEUAPRSF
block in quick log on nfe0 proto tcp flags WEUAPRSF/WEUAPRSF
block in quick log on nfe0 proto tcp flags SRAFU/WEUAPRSF
block in quick log on nfe0 proto tcp flags /WEUAPRSF
block in quick log on nfe0 proto tcp flags SR/SR
block in quick log on nfe0 proto tcp flags SF/SF
antispoof log quick for lo0
antispoof log quick for nfe0
antispoof log quick for fxp0
set loginterface nfe0
set loginterface fxp0

# pass out
pass out quick log on nfe0 proto { icmp, tcp, udp } from IP to any
pass out quick log on fxp0 proto { icmp, tcp, udp } from 10.168.2.3 to 10.168.2.4
pass out quick log on fxp0 proto { icmp, tcp, udp } from 10.168.2.3 to 10.168.2.5

# httpd on external net
pass in on nfe0 proto tcp from any to nfe0 port 80 flags S/SA synproxy state (source-track rule, max-src-conn-rate 150/10, max-src-states 500, max-src-nodes 4000000)

# SSH on internal net
pass in log on fxp0 proto tcp from 10.168.2.4 to fxp0 port someport
pass in log on fxp0 proto tcp from 10.168.2.5 to fxp0 port someport

# Samba on local net
pass in log on fxp0 proto tcp from 10.168.2.4 to fxp0 port 139
pass in log on fxp0 proto tcp from 10.168.2.5 to fxp0 port 139
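As an aside, the ruleset above can be syntax-checked without loading it. This is a quick sketch using standard pfctl flags, assuming the config is saved as /etc/pf.conf:

```
# Parse the ruleset but do not load it (-n = dry run)
pfctl -n -f /etc/pf.conf

# Show the currently loaded rules with per-rule counters
pfctl -s rules -v
```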

I didn't do any configuration of pflogd myself.
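If it turns out that the pflog binary log itself is what keeps growing, a rotation entry in /etc/newsyslog.conf can cap it; pflogd(8) reopens its log file on SIGHUP. A hedged sketch, loosely modelled on OpenBSD's stock entry -- the count/size values and the exact pkill invocation here are illustrative, so check newsyslog.conf(5) on your system:

```
# logfilename     mode  count  size  when  flags  "signal command"
/var/log/pflog     600    3    250    *     ZB    "pkill -HUP -u root -U root -x pflogd"
```

The Z flag compresses rotated logs and B marks the file as binary, so newsyslog does not append a plain-text rotation banner to it.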

2009/12/19 Bret S. Lambert <bret.lamb...@gmail.com>

> On Sat, Dec 19, 2009 at 12:33:00PM +0200, Daniel Zhelev wrote:
> > Well, that was a good idea, thanks, but no luck. I've killed and
> > restarted every process listed in fstat, but the amount of
> > used space has not dropped. I forgot to mention that I use 4.6-stable
> > with a GENERIC kernel.
>
> Then start snapshotting via 'du -sk <whatever is on the partition>'.
>
> Rinse, lather, and repeat until you find out which file actually
> keeps growing.
>
> Had to do this myself earlier this week, as someone had
> a) decided that logging to /root/somefile was a good idea
> b) decided that logging on verbose was a good idea
>
> >
> > /dev/wd0a     1005M    274M    680M    29%    /
> >
> >
> >
> > 2009/12/18 Joachim Schipper <joac...@joachimschipper.nl>
> >
> > > On Fri, Dec 18, 2009 at 10:36:46AM +0200, Daniel Zhelev wrote:
> > > > Hello list.
> > > > I've set up a little bash script to tell me when some file system is
> > > > over 95% full, and after a month I got a mail about my root file
> > > > system ( / ). After logging in I saw that the root file system was
> > > > over 100%. I tried to search for big and nasty files and so on, but
> > > > after an hour the file system was magically back at 20%. That got me
> > > > very worried about a security issue, but nothing was missing. The
> > > > issue is that the file system keeps growing by about 2 percent
> > > > a day, which is strange.
> > >
> > > > Here is some output:
> > > >
> > > > Filesystem     Size    Used   Avail Capacity  Mounted on
> > > > /dev/wd0a     1005M    251M    704M    26%    /
> > > > /dev/wd0k     46.7G   26.0K   44.4G     0%    /home
> > > > /dev/wd0d      3.9G    8.0K    3.7G     0%    /tmp
> > > > /dev/wd0f      2.0G    615M    1.3G    32%    /usr
> > > > /dev/wd0g     1005M    145M    809M    15%    /usr/X11R6
> > > > /dev/wd0h      5.4G    206M    5.0G     4%    /usr/local
> > > > /dev/wd0i      2.0G    619M    1.3G    32%    /usr/src
> > > > /dev/wd0e      8.9G    585M    7.8G     7%    /var
> > > > /dev/wd0j      2.0G    961M    951M    50%    /usr/obj
> > > > /dev/wd1a      295G    562M    280G     0%    /storage/storages
> > > > /dev/wd1b      110G   20.7G   83.7G    20%    /storage/windows
> > > >
> > > > r...@sgate:/root# find / -xdev -size +1000 -type f | xargs ls -laSh
> > > > -rwxr-xr-x  1 root  wheel   6.9M Nov 25 16:39 /bsd
> > > > -rw-r--r--  1 root  wheel   6.9M Nov 25 14:16 /obsd
> > > > -rw-r--r--  1 root  wheel   5.8M Nov 25 14:16 /bsd.rd
> > > > -r-xr-xr-x  1 root  bin     1.2M Dec  7 15:05 /sbin/isakmpd
> > > > -r--r--r--  1 root  bin     526K Dec  7 15:06 /etc/magic
> > > >
> > > > r...@sgate:/root# find / -xdev -mtime -1 -type f | xargs ls -laSh
> > > > -rw-------  1 root  wheel   2.0K Dec 18 03:09 /etc/pf.conf
> > > > -rw-r--r--  1 root  wheel   507B Dec 18 03:08 /etc/hosts
> > > > -rw-r--r--  1 root  wheel     0B Dec 18 02:49 /etc/resolv.conf
> > >
> > > > The other strange thing is that I've set up the /etc/daily root
> > > > backup, and here is a comparison between the two disks:
> > > >
> > > > /dev/wd1d     1005M   42.2M    912M     4%    /altroot
> > > > /dev/wd0a     1005M    251M    704M    26%    /
> > > >
> > > > Since /altroot is an exact dd copy of /, shouldn't they be the same size?
> > >
> > > It's quite possible that some process is holding open a file descriptor
> > > to a file which has no links from the filesystem. To see this, run 'vi
> > > bigfile', suspend, and run 'rm bigfile'. The space is still used. Then
> > > quit vi, and optionally run 'sync', and you'll see the space has been
> > > reclaimed.
> > >
> > > To see which process is the culprit, try fstat.
> > >
> > > (Note that this is only one possibility!)
> > >
> > >                Joachim
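For reference, the open-but-unlinked behaviour Joachim describes can be reproduced non-interactively in plain sh; a file descriptor held on fd 3 stands in for the suspended vi:

```shell
# Space held by an unlinked file is only released once the last
# descriptor on it is closed.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1024 count=64 2>/dev/null
exec 3<"$tmp"        # hold a descriptor open (like the suspended vi)
rm "$tmp"            # no links left, but df still counts the blocks
[ ! -e "$tmp" ] && echo "unlinked, but still held open on fd 3"
exec 3<&-            # close the descriptor; now the space is reclaimed
```

Running fstat while the descriptor is open shows the process holding the unlinked file, which is exactly how pflogd was caught here.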
