Yes, postmark operates on the same file set. I used the following
postmark settings:
set number 30000
set transactions 4000000
set size 1500 200000
which uses a set of 30,000 files, performs 4,000,000 transactions on
them (a random mix of operations), with file sizes between 1,500 and 200,000
bytes. BTW, I hacked my version of postmark to use unsigned ints in
various places.
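In case anyone wants to reproduce this: one way to drive postmark
non-interactively is to put the commands in a file and hand it to
postmark, roughly like this (the location path here is just an example,
not my actual setup):

    set location /bigfs/pmdir
    set number 30000
    set transactions 4000000
    set size 1500 200000
    run
    quit

Then "postmark pmconfig" (or pasting the same commands at its
interactive prompt) runs one instance.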
I guess that with a very large (80GB) and mostly empty filesystem, the
softupdates code is able to queue an enormous number of metadata
updates over time.
I tried forcing max_softdeps down to 50,000, and within a couple of
hours all processes accessing that filesystem hung!
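(For reference, lowering the knob is just "sysctl -w
debug.max_softdeps=50000" as root; doing the same from C, if you want
to script it, would look roughly like this untested sketch:)

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int new = 50000, old;
            size_t oldlen = sizeof(old);

            /* Read the old value and install the new one in one call. */
            if (sysctlbyname("debug.max_softdeps", &old, &oldlen,
                &new, sizeof(new)) != 0) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("max_softdeps: %d -> %d\n", old, new);
            return (0);
    }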
Also, postmark is a filesystem benchmarking and stress-testing utility.
Adding fsync() would defeat the purpose a bit!
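(The small C program Matt suggests below would look something like this
untested sketch - open and fsync() every regular file in the postmark
directory, where the directory argument is whatever "set location"
points at - but calling it periodically changes what postmark is
measuring:)

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
            const char *dir = (argc > 1) ? argv[1] : ".";
            char path[1024];
            struct dirent *de;
            struct stat sb;
            DIR *dp;
            int fd;

            if ((dp = opendir(dir)) == NULL) {
                    perror("opendir");
                    return (1);
            }
            while ((de = readdir(dp)) != NULL) {
                    snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
                    /* Skip ".", "..", and anything not a regular file. */
                    if (stat(path, &sb) != 0 || !S_ISREG(sb.st_mode))
                            continue;
                    /* Files come and go under postmark; open() may fail. */
                    if ((fd = open(path, O_RDONLY)) < 0)
                            continue;
                    (void)fsync(fd);        /* push the dependencies out */
                    (void)close(fd);
            }
            closedir(dp);
            return (0);
    }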
So in summary: if max_softdeps is left at the default, the system
silently reboots within 24 to 36 hours; if max_softdeps is lowered,
filesystem access hangs within about 12 hours.
On Thu, 30 Dec 1999, Matthew Dillon wrote:
> Well, in general I would not mess with max_softdeps - softupdates gets
> very inefficient if it hits its limits. I think you may have found a
> flaw in the code, though. Softupdates reschedules its vnode sync whenever
> it does something to the vnode. Postmark must be operating on the same
> set of files for very long periods of time, including truncating and
> extending them, for softupdates to get that far behind! Kirk may have
> to modify the vnode scheduling to not reschedule the vnode beyond a
> certain aggregate delay in order to ensure that things get synchronized
> in a reasonable period of time.
>
> Softupdates' biggest problem is overly-long delays in block
> reclamation - several people have commented on it. I think what you
> are seeing is a special case of this problem that causes it to be much
> worse than normal.
>
> In the mean time you have a couple of choices. You can try running
> 'sync' every so often, or you can write a small C program to fsync()
> the files postmark messes with every so often.
>
> -Matt
> Matthew Dillon
> <[EMAIL PROTECTED]>
>
> : I'm trying to find some information on reasonable settings for
> :debug.max_softdeps on a recent FreeBSD-stable system.
> :
> : It seems that if you have a machine that is able to generate disk IO
> :much faster than can be handled, has a large amount of RAM (and therefore
> :debug.max_softdeps is large), and the filesystem is very large (about
> :80GB), filesystem metadata updates can get _very_ far behind.
> :
> : For instance, on a test system running 4 instances of postmark
> :continuously for 24 hours, "df" reports that 40 GB of disk space is being
> :used, even though only about 5 GB is actually used. If I kill the
> :postmark processes, the metadata is eventually dribbled out and "df"
> :reports 5GB in use. It takes about 20 minutes for the metadata to be
> :updated on a completely idle system.
> :
> : On this particular system, it doesn't seem to stabilize either. If the
> :4 postmark instances are allowed to run, disk usage seems to climb
> :indefinitely (at 40GB it was still climbing), until eventually the machine
> :silently reboots.
> :
> : debug.max_softdeps is by default set to 523,712 (1 GB of RAM). Is that
> :a reasonable value? I see some tests in the docs with max_softdeps set to
> :4000 or so.
> :
> :
> :Tom
>
>
>
Tom