Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-21 Thread Nicholas Lee
The i7 and Xeon 3300 m/b that say they have ECC support have exactly this problem as well. On Wed, Jul 22, 2009 at 4:53 PM, Nicholas Lee wrote: > > > On Tue, Jul 21, 2009 at 4:20 PM, chris wrote: > >> Thanks for your reply. >> What if I wrap the ram in a sheet of lead?

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-21 Thread Nicholas Lee
On Tue, Jul 21, 2009 at 4:20 PM, chris wrote: > Thanks for your reply. > What if I wrap the ram in a sheet of lead?;-) > (hopefully the lead itself won't be radioactive) > > I found these 4 AM3 motherboard with "optional" ECC memory support. I don't > know whether this means ECC works, or ECC

Re: [zfs-discuss] NFS, ZFS & ESX

2009-07-07 Thread Nicholas Lee
What is your NFS window size? 32kb * 120 * 7 should get you 25MB/s. Have you considered getting an Intel X25-E? Going from 840 sync NFS iops to 3-5k+ iops is not overkill for an SSD slog device. In fact it is probably cheaper to have one or two fewer vdevs and a single slog device. Nicholas On Tue, Jul 7,
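The arithmetic in the message above can be sketched as follows; the 32 KB write size, 120 iops per vdev, and 7 vdevs are the figures quoted in the thread, not measured values:

```shell
# Back-of-envelope sync NFS throughput: per-op size x iops per vdev x
# number of vdevs. All three inputs are the figures from the thread.
write_kb=32        # NFS write size in KB
iops_per_vdev=120  # sync iops a single rotating vdev can absorb
vdevs=7
total_kb_s=$((write_kb * iops_per_vdev * vdevs))
echo "$((total_kb_s / 1024)) MB/s"   # about 26 MB/s, the ~25MB/s above
```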

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-25 Thread Nicholas Lee
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama wrote: > True. In $ per sequential GB/s, rotating rust still wins by far. > However, your comment about all flash being slower than rotating at > sequential writes was mistaken. Even at 10x the price, if you're > working with a dataset that needs r

Re: [zfs-discuss] Speeding up resilver on x4500

2009-06-21 Thread Nicholas Lee
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson wrote: > > However, it is a bit disconcerting to have to run with reduced data > protection for an entire week. While I am certainly not going back to > UFS, it seems like it should be at least theoretically possible to do this > several orders of m

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Nicholas Lee
With XenServer 4 and NFS you had to "grow" the disks (modified manually from thin to fat) in order to get decent performance. On Fri, Jun 19, 2009 at 7:06 AM, lawrence ho wrote: > We have a 7110 on try and buy program. > > We tried using the 7110 with XEN Server 5 over iSCSI and NFS. Nothing see

Re: [zfs-discuss] 24x1TB ZFS system. Best practices for OS install without wasting space.

2009-06-01 Thread Nicholas Lee
IDE flash DOM? On Tue, Jun 2, 2009 at 8:46 AM, Ray Van Dolson wrote: > > Obviously we could throw in a couple smaller drives internally, or > elsewhere... but are there any other options here?

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-20 Thread Nicholas Lee
Not sure if this is a wacky question, given a slog device does not really need much more than 10 GB. If I were to use a pair of X25-E (or STEC devices or whatever) in a mirror as the boot device and then either 1. created a loopback file vdev or 2. a separate mirrored slice for the slog, would this ca

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Nicholas Lee
I guess this also means the relative value of a slog is also limited by the amount of memory that can be allocated to the txg. On Wed, May 20, 2009 at 4:03 PM, Eric Schrock wrote: > > Yes, that is correct. It is best to think of the ZIL and the txg sync > process as orthogonal - data goes to bot

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Nicholas Lee
So the txg is synced to the slog device but retained in memory, and then rather than being read back from the slog it is copied to the pool from the in-memory copy? With the txg being a working set of the active commit, so it might be a set of NFS iops? On Wed, May 20, 2009 at 3:43 PM, Eric Schrock

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Nicholas Lee
Does Solaris flush a slog device before it powers down? If so, removal during a shutdown cycle wouldn't lose any data. On Wed, May 20, 2009 at 7:57 AM, Dave wrote: > If you don't have mirrored slogs and the slog fails, you may lose any data > that was in a txg group waiting to be committed to

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-04-20 Thread Nicholas Lee
I've gotten Nexenta installed onto a USB stick on a SS4200-E. Getting it installed required a PCI-E flex adapter. If you can reconfig EON to boot from a USB stick and serial console it might be possible. I've got two SS4200 and I might try EON on the second. Nicholas On Mon, Apr 20, 2009 at 8:39 PM,

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Nicholas Lee
On Thu, Apr 16, 2009 at 12:11 PM, Nicholas Lee wrote: > > Let me see if I understand this: A SSD slog can handle, say, 5000 (4k) > transactions in a sec (20M/s) vs maybe 300 (4k) iops for a single HDD. The > slog can then batch and dump say 30s worth of transactions - 600M as >
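The figures in this message work out as follows (a sketch using only the numbers quoted above; decimal megabytes assumed):

```shell
# 5000 sync 4k writes/s on the SSD slog vs ~300 on a single HDD, with
# the slog absorbing ~30s of transactions before they hit the pool.
slog_iops=5000
block_kb=4
batch_s=30
mb_s=$((slog_iops * block_kb / 1000))  # 20 MB/s sustained sync rate
batch_mb=$((mb_s * batch_s))           # 600 MB accumulated over 30s
echo "${mb_s} MB/s, ${batch_mb} MB per 30s window"
```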

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Nicholas Lee
On Thu, Apr 16, 2009 at 11:28 AM, Richard Elling wrote: > As for space, 18GBytes is much, much larger than 99.9+% of workloads > require for slog space. Most measurements I've seen indicate that 100 > MBytes > will be quite satisfactory for most folks. Unfortunately, there is no > market > for s

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Nicholas Lee
On Thu, Apr 16, 2009 at 3:32 AM, Greg Mason wrote: > > And it looks like the Intel fragmentation issue is fixed as well: >>> http://techreport.com/discussions.x/16739 >>> >> >> FYI, Intel recently had a new firmware release. IMHO, odds are that >> this will be as common as HDD firmware releases,

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Nicholas Lee
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane wrote: > > > Has anyone done any specific testing with SSD devices and solaris other > than > > the FISHWORKS stuff? Which is better for what - SLC and MLC? > My impression is that the flash controllers make a much bigger > difference than the type of

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Nicholas Lee
2009/4/14 Miles Nordin > > well that's not what I meant though. The battery RAM cache's behavior > can't be determined by RTFS whether you use ZFS or not, and the > behavior matters to both ZFS users and non ZFS users. The advantage I > saw to ZFS slogs, is that you can inspect the source (and b

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-12 Thread Nicholas Lee
On Mon, Apr 13, 2009 at 3:27 PM, Miles Nordin wrote: > >>>>> "nl" == Nicholas Lee writes: > > nl> 1. Is the cache only used for RAID modes and not in JBOD > nl> mode? > > well, there are different LSI cards and firmwares and drivers,

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-12 Thread Nicholas Lee
On Sun, Apr 12, 2009 at 7:24 PM, Miles Nordin wrote: > >nl> Supermicro have several LSI controllers. AOC-USASLP-L8i with >nl> the LSI 1068E > > That's what I'm using. It uses the proprietary mpt driver. > >nl> and AOC-USASLP-H8iR with the LSI 1078. > > I'm not using this. > >nl>

Re: [zfs-discuss] Supermicro SAS/SATA controllers?

2009-04-11 Thread Nicholas Lee
Forgot to include links. See below. Thanks. On Sat, Apr 11, 2009 at 8:35 PM, Nicholas Lee wrote: > > Supermicro have several LSI controllers. AOC-USASLP-L8i with the LSI 1068E > and AOC-USASLP-H8iR with the LSI 1078. > http://www.supermicro.com/products/accessories/addon/AOC-US

[zfs-discuss] Supermicro SAS/SATA controllers?

2009-04-11 Thread Nicholas Lee
The standard controller that has been recommended in the past is the AOC-SAT2-MV8 - an 8 port card with a Marvell chipset. There have been several mentions of LSI based controllers on the mailing lists and I'm wondering about them. One obvious difference is that the Marvell controller is PCI-X and the LS

Re: [zfs-discuss] Backing up ZFS snapshots

2009-02-22 Thread Nicholas Lee
On Mon, Feb 23, 2009 at 11:33 AM, Blake wrote: > I thinks that's legitimate so long as you don't change ZFS versions. > > Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I > am storing the send stream itself. The problem I have with the stream > is that I may not be able to r
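The 'zfs send | zfs recv' preference quoted above looks roughly like this in practice (a sketch; the pool, dataset, and host names are placeholders, not from the thread):

```shell
# Replicate a snapshot directly into a live pool instead of archiving
# the raw stream, so a later 'zfs recv' version mismatch can't strand it.
zfs snapshot tank/data@backup-2009-02-23
zfs send tank/data@backup-2009-02-23 | \
    ssh backuphost zfs receive -d backuppool
```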

[zfs-discuss] SSD - slow down with age

2009-02-14 Thread Nicholas Lee
A useful article about long term use of the Intel SSD X25-M: http://www.pcper.com/article.php?aid=669 - Long-term performance analysis of Intel Mainstream SSDs. Would a zfs cache (ZIL or ARC) based on a SSD device see this kind of issue? Maybe a periodic scrub via a full disk erase would be a use

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-05 Thread Nicholas Lee
Is there an issue with having additional resources that support each other? If information is well documented, then it will be easy to tell if it is out of date. Regardless, does the current HCL answer the questions I posed? On Fri, Feb 6, 2009 at 2:26 PM, Richard Elling wrote: > Nicho

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-05 Thread Nicholas Lee
On Fri, Feb 6, 2009 at 11:29 AM, Richard Elling wrote: > > Seriously, is it so complicated that a best practice page is needed? While you might be right about that, I think there is a need for a good shared experiences site, howtos, etc. For example, I want to put a new 2U 12 disk storage syst

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-04 Thread Nicholas Lee
Not sure where is best to put something like this. There are wikis like http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ http://wiki.genunix.org/wiki/index.php/WhiteBox_ZFSStorageServer But I haven't seen anything which has an active community like http://www.thinkwik

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-03 Thread Nicholas Lee
Is it possible for someone to put up a wiki page somewhere with the various SSD, ZIL, L2ARC options with Pros, Cons and Benchmarks. Especially with notes like the below. Given this is a key area of interest for zfs at the moment, seems like it would be a useful resource. On Wed, Feb 4, 2009 at 11

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-19 Thread Nicholas Lee
Another option to look at is: set zfs:zfs_nocacheflush=1 http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide The best option is to get a fast ZIL log device. It depends on your pool as well. NFS+ZFS means zfs will wait for write completes before responding to sync NFS write ops. I
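The tunable mentioned goes in /etc/system (a sketch per the Evil Tuning Guide; only safe when every device in the pool has a nonvolatile, battery- or capacitor-backed write cache, otherwise a power loss can lose committed data):

```shell
# Disable ZIL-triggered disk cache flushes; takes effect after a reboot.
echo 'set zfs:zfs_nocacheflush=1' >> /etc/system
```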

Re: [zfs-discuss] Intel SS4200-E?

2009-01-08 Thread Nicholas Lee
I've got mine sitting on the floor at the moment. Need to find the time to try out the install. Do you know why it would not work with the DOM? I'm planning to use a spare 4GB DOM and keep the EMC one for backup if nothing works. Did you use a video card to install? On Fri, Jan 9, 2009 at 10:46 A

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Nicholas Lee
Since zfs is so smart in other areas, is there a particular reason why a high water mark is not calculated and the available space not reset to this? I'd far rather have a zpool of 1000GB that said it only had 900GB but did not have corruption as it ran out of space. Nicholas

[zfs-discuss] Intel SS4200-E?

2008-12-12 Thread Nicholas Lee
Has anyone tried running zfs on the Intel SS4200-E [1],[2]? Doesn't have a video port, but you could replace the IDE flash DOM with a pre-installed system. I'm interested in this as a four disk smallish (34x41x12) portable ZFS appliance. Seems that people have got it running with Linux/Openfiler

Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-14 Thread Nicholas Lee
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling <[EMAIL PROTECTED]>wrote: > In short, separate logs with rotating rust may reduce sync write latency by > perhaps 2-10x on an otherwise busy system. Using write optimized SSDs > will reduce sync write latency by perhaps 10x in all cases. This is on

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-18 Thread Nicholas Lee
On 4/19/07, Adam Lindsay <[EMAIL PROTECTED]> wrote: 16x hot swap SATAII hard drives (plus an internal boot drive) Tyan S2895 (K8WE) motherboard Dual GigE (integral nVidia ports) 2x Areca 8-port PCIe (8-lane) RAID drivers 2x AMD Opteron 275 CPUs (2.2GHz, dual core) 8 GiB RAM The supplier is used

Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Nicholas Lee
On 4/17/07, Krzys <[EMAIL PROTECTED]> wrote: and when I did try to run that last command I got the following error: [16:26:00] [EMAIL PROTECTED]: /root > zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | zfs receive mypool2/[EMAIL PROTECTED] cannot receive: destination has been mo
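For the "destination has been modified" error quoted above, the usual remedy (assuming local changes on the destination can be discarded) is to force a rollback on the receiving side with 'zfs receive -F'; the dataset names here are placeholders, since the originals are obfuscated in the archive:

```shell
# -F rolls the destination back to its most recent snapshot before
# applying the incremental stream, discarding local modifications.
zfs send -i mypool/fs@snap1 mypool/fs@snap2 | zfs receive -F mypool2/fs
```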

Re: [zfs-discuss] Re: replication a whole zpool

2007-04-14 Thread Nicholas Lee
On 4/15/07, Chris Gerhard <[EMAIL PROTECTED]> wrote: While I would really like to see a zpool dump and zpool restore so that I could throw a whole pool to tape it is not hard to script the recursive zfs send / zfs receive. I had to when I had to recover my laptop. http://blogs.sun.com/chrisg/e

Re: [zfs-discuss] replication a whole zpool

2007-04-12 Thread Nicholas Lee
On 4/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote: You want: 6421958 want recursive zfs send ('zfs send -r') Which is actively being worked on. Exactly. :D "Perhaps they all have to have the same snapnames (which will be easier with 'zfs snapshot -r')." Maybe just assume that anyone who

[zfs-discuss] replication a whole zpool

2007-04-12 Thread Nicholas Lee
Rather than having to write something like:

#!/bin/bash
TIME=`date '+%Y-%m-%d-%H:%M:%S'`
zfs snapshot -r [EMAIL PROTECTED]
for i in `zfs list -H | grep $TIME | cut -f1` ; do
  zfs send $i | ssh ihstore zfs receive -d tank/sstore-ztank
done

That is just a first run, I'll need to add a touch /zta

Re: Re[2]: [zfs-discuss] Benchmarking

2007-04-12 Thread Nicholas Lee
On 4/13/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: Only if you turn a compression on in ZFS. Other than that 0s are stored as any other data. There is some difference, but its marginal as the files get larger. The disks in mtank are SATA2 ES 500Gb Seagates in a Intel V5000 system. The s

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Nicholas Lee
On 3/30/07, Atul Vidwansa <[EMAIL PROTECTED]> wrote: Lets say I reorganized my zpools. Now there are 2 pools: Pool1: Production data, combination of binary and text files. Only few files change at a time. Average file sizes are around 1MB. Does it make sense to take zfs snapshots of the pool? Wi

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Nicholas Lee
On 3/30/07, Shawn Walker <[EMAIL PROTECTED]> wrote: Maybe, but they're far better at doing versioning and providing a history of changes. I'd have to agree. I track 6000 blobs (OOo gzip files, pdfs and other stuff) in svn; even with 1300 changesets over 3 years there is a marginal disk cost on

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Nicholas Lee
On 3/30/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote: > Careful consideration of the layout of your file > system applies regardless of which type of file system it is (zfs, > ufs, etc.). True. ZFS does open up a whole new can of worms/flexibility. How do hard-links work across zfs mount/files

Re: Re[10]: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Nicholas Lee
On 3/29/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: BFU - just for testing I guess. I would rather propose waiting for SXCE b62. Is there a release date for this? I note that the install iso for b60 seems to only release in the last week. Nicholas

Re: Re[4]: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Nicholas Lee
On 3/29/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote: Could I get your opinion then? I have just downloaded and burnt the b60 ISO. I was just getting ready to follow Tabriz and Tim's instructions from last year in order to get the ZFS root boot. Seeing the Heads Up, it says that the old me

Re: Re[2]: [zfs-discuss] ZFS Boot support for the x86 platform

2007-03-28 Thread Nicholas Lee
On 3/29/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: 1. Instructions for Manual set up: http://fs.central/projects/zfsboot/zfsboot_manual_setup.html 2. Instructions for Netisntall set up: http://fs.central/projects/zfsboot/how_to_netinstall_zfsboot I think those documents should be

Re: [zfs-discuss] Re: ZFS layout for 10 disk?

2007-03-22 Thread Nicholas Lee
On 3/23/07, John-Paul Drawneek <[EMAIL PROTECTED]> wrote: Can i do to Raidz2 over 5 and a Raidz2 over 4 with a spare for them all? or two Raidz2 over 4 with 2 spare? This is a question I was planning to ask as well. Does zfs allow a hot spare to be allocated to multiple pools or as a system

Re: [zfs-discuss] ZFS layout for 10 disk?

2007-03-22 Thread Nicholas Lee
On 3/23/07, John-Paul Drawneek <[EMAIL PROTECTED]> wrote: I've got the same consideration at the moment. Should i do 9 disk raidz2 with a spare, or could i do two raidz2 to get a bit of performance? Only done tests with striped mirrors which seems to give it a boost, so is it worth it with a r

[zfs-discuss] Acme WX22B-TR?

2007-02-26 Thread Nicholas Lee
Has anyone run Solaris on one of these: http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4 2U with 12 hotswap SATA disks. Supermicro motherboard, would have to add a second Supermicro SATA2 controller to cover all the disks and the onboard intel controller can only handle 6. Nicholas ___

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-24 Thread Nicholas Lee
On 2/25/07, Ian Collins <[EMAIL PROTECTED]> wrote: Interesting, 'cat /etc/driver_aliases | grep 373' shows nothing! Have you tried the hardware detection tool on this system? http://www.sun.com/bigadmin/hcl/hcts/device_detect.html [EMAIL PROTECTED]:~$ cat /etc/driver_aliases | grep 373 nge

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-24 Thread Nicholas Lee
Note also I have the BIOS set to AHCI mode for the SATA controllers, not IDE. Nicholas

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-24 Thread Nicholas Lee
Attached. Had to install xserver-xorg-core, but thanks to apt it was relatively easy. Bit of interest is probably: pci bus 0x cardnum 0x0d function 0x00: vendor 0x10de device 0x037f nVidia Corporation MCP55 SATA Controller CardVendor 0x3458 card 0xb002 (Card unknown) STATUS0x00b0 COMM

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-24 Thread Nicholas Lee
On 2/25/07, Ian Collins <[EMAIL PROTECTED]> wrote: Is the Gigabyte SATA2 controller recognised by Solaris? Nexenta v6 seems to work. Based on the Nforce 55 chipset I believe. I assume Opensolaris will work since it is based on that. I couldn't tell you if NCQ works, as Solaris is pretty new

Re: [zfs-discuss] Re: Are media files compressable with ZFS?

2007-02-24 Thread Nicholas Lee
I just built a system with a Gigabyte GA-M59SLI-S5 and 6 SATA2 drives: one Seagate ES 250GB system disk and 5 Seagate ES 500GB disks. 2.2TB with raidz. Seems to work well with Nexenta. I could have put in 5 ES 750GB drives instead and had another TB. All in a midi-tower with an Athlon 3800+. This mother

[zfs-discuss] zfs received vol not appearing on iscsi target list

2007-02-24 Thread Nicholas Lee
Just installed Nexenta and I've been playing around with zfs. [EMAIL PROTECTED]:/tank# uname -a SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris [EMAIL PROTECTED]:/tank# zfs list NAME USED AVAIL REFER MOUNTPOINT home 89.5K 219G 32K

Re: [zfs-discuss] Another paper

2007-02-21 Thread Nicholas Lee
On 2/22/07, Gregory Shaw <[EMAIL PROTECTED]> wrote: I was thinking of something similar to a scrub. An ongoing process seemed too intrusive. I'd envisioned a cron job similar to a scrub (or defrag) that could be run periodically to show any differences between disk performance over time.

Re: [zfs-discuss] suggestion: directory promotion to filesystem

2007-02-21 Thread Nicholas Lee
On 2/22/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote: and you want to move huge amount of data from /tank/foo to /tank/bar. If you use mv/tar/dump it will copy entire data. Much faster will be to 'zfs join tank tank/foo && zfs join tank tank/bar' then just mv the data and 'zfs split' them b

Re: Re[2]: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Nicholas Lee
On 2/19/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: 5. there's no simple answer to this question as it greatly depends on workload and data. One thing you should keep in mind - Solaris *has* to boot in a 64bit mode if you want to use all that memory as a cache for zfs, so old x86 32bi

Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Nicholas Lee
On 2/20/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Ah. We looked at them for some Windows DR. They do have a nice product. Just waiting for them to get iscsi and vlan support. Supposely sometime in the next couple months. Combined with zfs/iscsi it will make a very nice small data

Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Nicholas Lee
On 2/18/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: If by VI you are referring to VMware Infrastructure...you won't get any support from VMware if you're using the iSCSI target on Solaris as its not approved by them. Not that this is really a problem in my experience as VMware tech suppo

[zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-17 Thread Nicholas Lee
Is there a best practice guide for using zfs as a basic rackable small storage solution? I'm considering zfs with a 2U 12 disk Xeon based server system vs something like a second hand FAS250. Target enviroment is mixature of Xen or VI hosts via iSCSI and nfs/cifs. Being able to take snapshots o