Oliver Fromme wrote:
> Guy Helmer wrote:
> > I think we've finally found the cause of the problem - it wasn't just
> > occurring after heavy use, but was visible right after filesystem
> > creation!  We regularly built new filesystems with "newfs -U -O 1 -b
> > 65536 -f 8192"
>
> Why are you using those blocksize and fragsize settings?
> (If you store large files, then you should at least also
> decrease the inode density, using the -i option.)
These settings were chosen to optimize I/O throughput for PostgreSQL, on the theory that a 64KB block size would maximize disk throughput in the general case (especially on a RAID 10 array) and that an 8KB fragment size would match PostgreSQL's page size.
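
For the record, if we were to keep the large block size, my reading of Oliver's suggestion is something along these lines (untested; the device name and the -i density below are only placeholders - the right density would depend on the expected file sizes):

   # hypothetical example: 64KB blocks, 8KB frags, and roughly one
   # inode per 64KB of data instead of the default inode density
   newfs -U -O 1 -b 65536 -f 8192 -i 65536 /dev/da0s1d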

I wasn't aware of any known regressions in 6.x regarding large filesystem block sizes...
> Some time ago, Joe Greco wrote:
> > > the one unusual thing about the configuration is that the filesystem
> > > we are attempting to build on is a 136GB ccd across 4 scsi disks with
> > > the fsize=8192 and the bsize=65536 (it is mainly to be used for large
> > > data log files):
> >
> > FreeBSD doesn't support fsize/bsize so large.  There are ongoing issues
> > within the filesystem code and VM code that will cause such filesystems
> > to break under heavy load.  Matt Dillon also talked about this being
> > less-than-optimal for the VM system from some technical points of view.

> It has been a while, and I'm not sure if there are still
> problems with those non-standard fsize/bsize settings, but
> I would definitely try to avoid them for production use.
>
> Best regards
>    Oliver
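
In case it's useful to anyone else chasing this, the parameters of an existing filesystem can be double-checked with dumpfs; as I understand it, -m prints the newfs command line that would recreate the filesystem (the device name below is just an example):

   # show the newfs invocation matching the existing filesystem
   dumpfs -m /dev/da0s1d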

Thanks,
Guy
