Hello Andrew,

Sunday, February 8, 2009, 8:46:24 PM, you wrote:
AG> Neil Perrin wrote:
>> On 02/08/09 11:50, Vincent Fox wrote:
>>> So I have read in the ZFS Wiki:
>>>
>>> # The minimum size of a log device is the same as the minimum size of
>>> each device in a pool, which is 64 Mbytes. The amount of in-play data
>>> that might be stored on a log device is relatively small. Log blocks
>>> are freed when the log transaction (system call) is committed.
>>> # The maximum size of a log device should be approximately 1/2 the
>>> size of physical memory, because that is the maximum amount of
>>> potential in-play data that can be stored. For example, if a system
>>> has 16 Gbytes of physical memory, consider a maximum log device size
>>> of 8 Gbytes.
>>>
>>> What is the downside of an over-large log device?
>>
>> - Wasted disk space.
>>
>>> Let's say I have a 3310 with 10 older 72-gig 10K RPM drives and
>>> RAIDZ2 them. Then I throw an entire 72-gig 15K RPM drive in as slog.
>>>
>>> What is behind this maximum size recommendation?
>>
>> - Just guidance on what might be used in the most stressed
>> environment. Personally I've never seen anything like the maximum
>> used, but it's theoretically possible.

AG> Just thinking out loud here, but given such a disk (i.e. one which is
AG> bigger than required), I might be inclined to slice it up, creating a
AG> slice for the log at the outer edge of the disk. The outer edge of the
AG> disk has the highest data rate, and by effectively constraining the
AG> head movement to only a portion of the whole disk, average seek times
AG> should be significantly improved (not to mention fewer seeks due to
AG> more data/cylinder at the outer edge). The log can't be using the
AG> write cache, so the normal penalty for not using the write cache when
AG> not giving the whole disk to ZFS is irrelevant in this case. By
AG> allocating, say, a 32GB slice from the outer edge of a 72GB disk, you
AG> should get really good performance. If you turn out not to need
AG> anything like 32GB, then making it smaller will make it even faster
AG> (depending how ZFS allocates space on a log device, which I don't
AG> know). Obviously, don't use the rest of the disk, in order to achieve
AG> this performance.

1. ZFS by default will end up utilizing the outer regions of a disk
drive anyway, so there is no point in slicing the LUN in this case.

2. The log definitely can use the write cache if it is a non-volatile
(NV) one. Of course, in such a case there is a good question whether one
15K disk behind a 3510, serving as a slog for several 10K disks, makes
sense at all.

btw: IIRC, on the 3510 you need to disable cache flushes in ZFS and make
sure that the disk array will switch to write-through (WT) mode if one
of its controllers or batteries fails.

-- 
Best regards,
Robert                          mailto:mi...@task.gda.pl
                                http://milek.blogspot.com
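To make the above concrete, here is a minimal sketch of the commands
involved, assuming an OpenSolaris/Solaris host, a pool named "tank" and
a slice c2t0d0s0 (both names are hypothetical) carved from the outer
edge of the 15K drive with format(1M):

    # Add the slice (or the whole disk) as a separate intent log device:
    zpool add tank log c2t0d0s0

    # Verify that the log vdev shows up and is ONLINE:
    zpool status tank

    # If the slog sits behind an array with working non-volatile cache
    # (e.g. a 3510 with healthy batteries), cache flushes can be
    # disabled by adding this line to /etc/system and rebooting:
    #   set zfs:zfs_nocacheflush = 1

Note that zfs_nocacheflush is a system-wide setting, so it should only
be used when every device in every pool on the host is backed by
non-volatile cache.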