Ivan Voras wrote:
* Are the issues on the list still there?
* Are there any new issues?
* Is somebody running ZFS in production (non-trivial loads) with
success? What architecture / RAM / load / applications used?
* How is your memory load? (does it leave enough memory for other services)
also: what configuration (RAIDZ, mirror, etc.?)
I have two production servers with ZFS.
The first is an HP ProLiant ML110 G5 with 4x 1TB SATA Samsung drives in
RAIDZ and 5GB of RAM, running
FreeBSD 7.1-RELEASE amd64 (GENERIC)
Root is on a USB flash drive with UFS; the main storage is on ZFS.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 1.34T 1.33T 29.9K /tank
tank/system 1.04G 1.33T 28.4K /tank/system
tank/system/tmp 52.4K 1.33T 52.4K /tmp
tank/system/usr 611M 1.33T 95.0K /tank/system/usr
tank/system/usr/obj 26.9K 1.33T 26.9K /usr/obj
tank/system/usr/ports 443M 1.33T 214M /usr/ports
tank/system/usr/ports/distfiles 105M 1.33T 105M /usr/ports/distfiles
tank/system/usr/ports/packages 123M 1.33T 123M /usr/ports/packages
tank/system/usr/src 168M 1.33T 168M /usr/src
tank/system/var 459M 1.33T 213K /var
tank/system/var/db 421M 1.33T 420M /var/db
tank/system/var/db/pkg 387K 1.33T 387K /var/db/pkg
tank/system/var/log 37.8M 1.33T 37.8M /var/log
tank/system/var/run 60.6K 1.33T 60.6K /var/run
tank/vol0 1.34T 1.33T 1.34T /vol0
This server stores backups made every night by rsync from 10
FreeBSD machines.
The nightly rsync backup of all machines takes about 1 hour.
Rsync is used for "snapshots" with --link-dest= (each day has its own
directory, all unchanged files are hardlinked to the previous day's
copy, and I keep a history going two months back).
Backups are stored on /vol0 with compression enabled (compression is
enabled on /usr/ports, /usr/src, /var/db/pkg and /vol0).
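For reference, compression on those datasets is enabled per dataset in the usual way (dataset names match the zfs list output above; note that only newly written blocks get compressed, existing data is not rewritten):

```shell
zfs set compression=on tank/vol0
zfs set compression=on tank/system/usr/ports
zfs set compression=on tank/system/usr/src
zfs set compression=on tank/system/var/db/pkg

# Check the achieved ratio afterwards:
zfs get compressratio tank/vol0
```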
# df -hi /vol0/
Filesystem Size Used Avail Capacity iused ifree %iused
Mounted on
tank/vol0 2.7T 1.3T 1.3T 50% 17939375 11172232 62% /vol0
This backup server has been in service since October 2008 with just one
panic (kmem related). After proper loader.conf tuning it has been
working well.
# cat /boot/loader.conf
## ZFS tuning
vm.kmem_size="1280M"
vm.kmem_size_max="1280M"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="128M"
up 80+04:52:10 23:33:40
28 processes: 1 running, 27 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 17M Active, 26M Inact, 1541M Wired, 328K Cache, 204M Buf, 3234M Free
The second server with ZFS is used for jail hosting. It is built on a
Sun Fire X2100 M2 with 2x 500GB SATA drives and 4GB of RAM, running
FreeBSD 7.1-STABLE amd64
from Wed Feb 11 09:56:08 CET 2009 (GENERIC kernel)
The hard drives are split into two slices: the first slice is used
for the gmirrored system (about 20GB) and the rest is used for a ZFS
mirror.
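That layout would be created roughly like this (a sketch from memory, not the exact commands used; the ad0/ad1 device names and the gm0 label are illustrative):

```shell
# Slice 1 on each disk -> gmirror for the UFS system:
gmirror label -v -b round-robin gm0 /dev/ad0s1 /dev/ad1s1

# Slice 2 on each disk -> ZFS mirror for the jails and data:
zpool create tank mirror /dev/ad0s2 /dev/ad1s2
```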
There are 5 jails running. One runs Postfix as a backup MX and BIND as
a slave DNS for a small webhosting company. A few jails are for web
developers (not heavily loaded), but the last jail holds 240GB of audio
files streamed through Lighttpd at about 30Mbps.
There is an issue with Lighttpd in conjunction with ZFS: after about
30-60 minutes, Lighttpd's throughput drops to less than 7Mbps until it
is restarted; then everything works well again. The server has an
uptime of 56 days without any panic or other stability issues.
Another box with Lighttpd + UFS (not in a jail) serves the same files
at 65Mbps without problems.
(The Lighttpd problem may be related to the jail rather than ZFS - I
have not tested it yet.)
All jails are on compressed filesystems and there are some snapshots of
each jail.
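The per-jail snapshots are ordinary ZFS snapshots, along these lines (the tank/jails/www dataset name is hypothetical; mine follow the jail names):

```shell
# Take a dated snapshot of one jail's dataset:
zfs snapshot tank/jails/www@2009-03-01

# List existing snapshots, or roll back if an upgrade goes wrong:
zfs list -t snapshot
zfs rollback tank/jails/www@2009-03-01
```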
# cat /boot/loader.conf
## gmirror RAID1
geom_mirror_load="YES"
## ZFS tuning
vm.kmem_size="1280M"
vm.kmem_size_max="1280M"
kern.maxvnodes="400000"
vfs.zfs.prefetch_disable="1"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="128M"
Miroslav Lachman
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"