Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-20 Thread Gregory Gee
Thanks. I guess I am in an 'if it ain't broken, don't fix it' situation with my NFS setup. Thanks, Greg

Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-20 Thread Richard Elling
On Jul 20, 2010, at 6:14 PM, Gregory Gee wrote: > To further this question, I have been searching for a while and can't find > any reference to the difference and benefits between zfs sharenfs and nfs > share. Currently I am using standard NFS I believe. > > share -F nfs -o anon=0,sec=sys,rw=x

Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-20 Thread Gregory Gee
To further this question, I have been searching for a while and can't find any reference to the difference between, and relative benefits of, zfs sharenfs and a plain NFS share. Currently I believe I am using a standard NFS share. share -F nfs -o anon=0,sec=sys,rw=xenserver0:xenserver1 /files/VM ad...@nas:/files$ zfs list
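For reference, the practical difference is where the share definition lives, not how NFS itself behaves: a legacy share -F nfs (or /etc/dfs/dfstab entry) is managed outside ZFS, while the sharenfs property is stored with the dataset and is re-applied automatically whenever the dataset is mounted or the pool is imported. A rough equivalent of the share line above, assuming the dataset backing /files/VM is named files/VM (adjust to the real dataset name from zfs list):

  # zfs set sharenfs='anon=0,sec=sys,rw=xenserver0:xenserver1' files/VM
  # zfs get sharenfs files/VM

The option string uses the same syntax as share_nfs(1M), so the resulting export should behave the same; the gain is purely administrative.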

[zfs-discuss] zfs fails to import zpool

2010-07-20 Thread Jorge Montes IV
Last week my FreeNAS server began to beep constantly so I rebooted it through the webgui. When the machine finished booting I logged back in to the webgui and I noted that my zpool (Raidz) was faulted. Most of the data on this pool is replaceable but I had some pictures on this pool that were n
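Without the exact error message it is hard to say more, but the usual first steps on a faulted raidz are non-destructive and worth running before anything else (the pool name below is a placeholder):

  # zpool status -v            (shows which member devices are missing or faulted)
  # zpool import               (lists pools the system can see and why they won't import)
  # zpool import -f tank       (force the import if the pool looks intact but was not cleanly exported)

If individual disks now show up under different device names than before (common after a controller or cabling change), zpool import will usually still find them, since it identifies members by their on-disk labels rather than by path.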

Re: [zfs-discuss] Help identify failed drive

2010-07-20 Thread Linda Messerschmidt
> No, the pool tank consists of 7 physical drives (5 Seagate and 2 > Western Digital). See output below. I think you are looking at the disk label name, and this is confusing you. I had a similar thing happen where the label name from a 64GB SSD got written onto a 1TB HD. That output in format
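If the labels have been swapped around, the inquiry data is a more reliable way to tell the drives apart. On Solaris, something like this maps each cXtYdZ device to its vendor, model and serial number (and shows the per-device error counters as a bonus):

  # iostat -En
  # format < /dev/null          (non-interactive: just lists the disks with their inquiry strings)

Matching the serial numbers against the stickers on the drives removes any doubt about which physical disk is which.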

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Haudy Kazemi
Could it somehow not be compiling 64-bit support? -- Brent Jones I thought about that but it says when it boots up that it is 64-bit, and I'm able to run 64-bit binaries. I wonder if it's compiling for the wrong processor optimization though? Maybe if it is missing some of the newer

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Ian Collins
On 07/21/10 03:12 AM, Richard Jahnel wrote: On the receiver /opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full cannot receive new filesystem stream: invalid backup stream mbuffer: error: outputThread: error writin

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Garrett D'Amore
Your config makes me think this is an atypical ZFS configuration. As a result, I'm not as concerned. But I think the multithread/concurrency may be the biggest concern here. Perhaps the compilers are doing something different that causes significant cache issues. (Perhaps the compilers themsel

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Bill Sommerfeld
On 07/20/10 14:10, Marcelo H Majczak wrote: It also seems to be issuing a lot more writing to rpool, though I can't tell what. In my case it causes a lot of read contention since my rpool is a USB flash device with no cache. iostat says something like up to 10w/20r per second. Up to 137 the perfo
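A couple of ways to see where that extra write traffic is going, without guessing (both are stock commands):

  # zpool iostat -v rpool 5     (per-vdev read/write ops and bandwidth, sampled every 5 seconds)
  # fsstat zfs 5                (aggregate VFS-level operation counts for all zfs filesystems)

That at least separates "more writes coming from applications" from "the same writes being issued less efficiently".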

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Marcelo H Majczak
If I can help narrow the variables: I compiled both 137 and 144 (137 is the minimum req. to build 144) using the same recommended compiler and lint, nightly options etc. 137 works fine but 144 suffers the slowness reported. System-wise, I'm using only the 32bit non-debug version in an "old" single-co

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Garrett D'Amore
So the next question is, let's figure out what richlowe did differently. ;-) - Garrett

[zfs-discuss] HELP!! SATA 6G controller for OSOL

2010-07-20 Thread valrh...@gmail.com
So I've tried both the ASUS U3S6 and the Koutech IO-PESA-A230R, recommended by the helpful blog: http://blog.zorinaq.com/?e=10 In BOTH cases, the SSD appears in the card's BIOS screen at bootup, so the card sees it and recognizes it properly. I'm running EON 0.60 (SNV130), and once I log

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Chad Cantwell
On Tue, Jul 20, 2010 at 10:45:58AM -0700, Brent Jones wrote: > On Tue, Jul 20, 2010 at 10:29 AM, Chad Cantwell wrote: > > No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS > > as Rich Lowe's 142 build is still very slow... > > > > On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad C

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Bob Friesenhahn
On Tue, 20 Jul 2010, Roy Sigurd Karlsbakk wrote: Mostly, yes. Traditional RAID-5 is likely to be faster than ZFS because of ZFS doing checksumming, having the ZIL etc, but then, trad raid5 won't have the safety offered by ZFS. The biggest difference is almost surely that ZFS will always const

Re: [zfs-discuss] ZFS on Ubuntu

2010-07-20 Thread Bob Friesenhahn
On Mon, 19 Jul 2010, Haudy Kazemi wrote: Yup, but that's *per release*. Solaris (for instance) has binary compatibility and library compatibility all the way back to Solaris 2.0 in 1991. AIX and HPUX are similar. *very* few things ever break between releases on professional UNIX systems. Thos

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Brent Jones
On Tue, Jul 20, 2010 at 10:29 AM, Chad Cantwell wrote: > No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS > as Rich Lowe's 142 build is still very slow... > > On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad Cantwell wrote: >> Yes, I think this might have been it. I missed the N

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Chad Cantwell
No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS as Rich Lowe's 142 build is still very slow... On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad Cantwell wrote: > Yes, I think this might have been it. I missed the NIGHTLY_OPTIONS variable in > opensolaris and I think it was c

Re: [zfs-discuss] ZFS on Ubuntu

2010-07-20 Thread Freddie Cash
On Mon, Jul 19, 2010 at 9:40 PM, devsk wrote: >> On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles >> wrote: >> > What supporting applications are there on Ubuntu >> for RAIDZ? >> >> None. Ubuntu doesn't officially support ZFS. >> >> You can kind of make it work using the ZFS-FUSE >> project. But it'

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Chad Cantwell
Yes, I think this might have been it. I missed the NIGHTLY_OPTIONS variable in opensolaris and I think it was compiling a debug build. I'm not sure what the ramifications are of this or how much slower a debug build should be, but I'm recompiling a release build now so hopefully all will be well.
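For anyone else tripping over this: if I remember the nightly(1) flags correctly, D asks for a DEBUG build and F suppresses the non-DEBUG build, so the sample env files with -FnCDAlmprt style options give you a DEBUG-only build. Something like this in the env file should get a release-style build instead (treat the exact letter soup as a sketch and check the comments in your own tree's env file):

  # DEBUG-only (roughly what a stock opensolaris.sh example produces):
  NIGHTLY_OPTIONS="-FnCDAlmprt"
  # non-DEBUG only: drop both D and F
  NIGHTLY_OPTIONS="-nCAlmprt"

DEBUG kernels carry extra assertions and debug checks, so a noticeable slowdown versus a release build is expected.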

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Richard Jahnel
I'll try an export/import and scrub of the receiving pool and see what that does. I can't take the sending pool offline to try that stuff though.

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Richard Jahnel
On the receiver /opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full cannot receive new filesystem stream: invalid backup stream mbuffer: error: outputThread: error writing to at offset 0xaedf6a000: Broken pipe sum
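For reference, the usual two-ended mbuffer pipeline looks roughly like this (hostnames and the dataset are placeholders; if you set the -s block size, it is normally kept the same on both ends):

  receiver# /opt/csw/bin/mbuffer -s 128k -m 1G -I 8000 | zfs recv -F pool/backupfs
  sender#   zfs send pool/fs@sunday | /opt/csw/bin/mbuffer -s 128k -m 1G -O receiver-host:8000

The "broken pipe" from mbuffer is a symptom rather than the cause: zfs recv exited first with "invalid backup stream", which closed the pipe. The thing to chase is why the receiving zfs rejected the stream (version mismatch between the sending and receiving systems, or a corrupted/incomplete stream).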

Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-20 Thread Richard Elling
On Jul 19, 2010, at 5:26 PM, Gregory Gee wrote: > I am using OpenSolaris to host VM images over NFS for XenServer. I'm looking > for tips on what parameters can be set to help optimize my ZFS pool that > holds my VM images. I am using XenServer which is running the VMs from an > NFS storage o
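The two knobs that come up most often for NFS-backed VM storage (a generic sketch, not necessarily what Richard goes on to recommend): turn off atime updates on the VM dataset, and give the pool a dedicated log device, since XenServer issues its NFS writes synchronously and the ZIL ends up on the critical path. For example, with placeholder dataset, pool and device names:

  # zfs set atime=off tank/VM
  # zpool add tank log c4t0d0            (or, better, a mirrored pair: zpool add tank log mirror c4t0d0 c4t1d0)

The log device only needs to be small and fast (an SSD or NVRAM card); it absorbs the synchronous write latency that NFS clients otherwise wait on.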

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Richard Elling
On Jul 20, 2010, at 3:46 AM, Roy Sigurd Karlsbakk wrote: > - Original Message - >> Hi, >> for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to >> one physical disk iops, since raidz1 is like raid5 , so is raid5 has >> same performance like raidz1? ie. random iops equal to on

Re: [zfs-discuss] How does zil work

2010-07-20 Thread Richard Elling
On Jul 20, 2010, at 3:09 AM, v wrote: > Hi, > A basic question regarding how zil works: The seminal blog on how the ZIL works is http://blogs.sun.com/perrin/entry/the_lumberjack > For asynchronous write, will zil be used? No. > For synchronous write, and if io is small, will the whole io be p
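One way to see the "async writes don't use the ZIL" point for yourself is to count zil_commit() calls while running a workload; a purely asynchronous workload should barely touch it, while fsync-heavy or NFS traffic will. A quick DTrace sketch (run as root, Ctrl-C to print the count):

  # dtrace -n 'fbt::zil_commit:entry { @calls = count(); }'

On the small-versus-large question, as I read the lumberjack post: small synchronous writes are copied into the log record itself, while large ones are written to their final location and the log record only carries a pointer to them; if I remember the name right, the cutover is the zfs_immediate_write_sz tunable, around 32 KB by default.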

Re: [zfs-discuss] Help identify failed drive

2010-07-20 Thread marty scholes
Michael Shadle wrote: >Actually I guess my real question is why iostat hasn't logged any > errors in its counters even though the device has been bad in there > for months? One of my arrays had a drive in slot 4 fault -- lots of reset something or other errors. I cleared the errors and the po
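Worth checking in this situation: the iostat soft/hard/transport counters only reflect what the disk driver chose to count, but the FMA telemetry usually has the raw events even when those counters stay at zero. Two quick, read-only commands:

  # fmdump -eV | more         (the full error-report log, with timestamps and device paths)
  # fmadm faulty              (anything FMA has actually diagnosed as a fault)

If fmdump shows a stream of ereports against one disk while iostat stays clean, that is still a disk (or cabling) problem.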

Re: [zfs-discuss] Help identify failed drive

2010-07-20 Thread Yuri Homchuk
Well, this REALLY is a 300-user production server with 12 VMs running on it, so I definitely won't play with the firmware :) I can easily identify which drive is what by physically looking at it. It's just sad to realize that I cannot trust Solaris anymore. I never noticed this problem before be

Re: [zfs-discuss] Help identify failed drive

2010-07-20 Thread Yuri Homchuk
Thanks Haudy, I really appreciate your help. This is a Supermicro server. I really don't remember the controller model; I set it up about 3 years ago. I just remember that I needed to reflash the controller firmware to make it work in JBOD mode. I ran the script you suggested, but it looks like it's still

[zfs-discuss] zpool import issue

2010-07-20 Thread Robert Hofmann
Hello. I have two Solaris 10 servers (release 10/09). The first one is a Sun M4000 Server with SPARC technology. The other one is a Sun Fire X4170 with x86 Intel architecture. Both servers are attached via SAN to the same EMC Storage system. The disks from the M4000 Server are cloned every nigh
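Without the exact import error it is guesswork, but the ZFS on-disk format is endian-neutral, so pools written on SPARC normally import fine on x86. With SAN clones the usual trip-ups are device paths and the fact that a clone carries the same pool name and GUID as its source. The non-destructive things to try first (names and the GUID below are placeholders):

  # zpool import                               (shows every pool the host can see and the reason it won't import)
  # zpool import -d /dev/dsk mypool            (restrict the device search to a specific directory)
  # zpool import 1234567890123456789 newname   (import by numeric pool GUID and rename; useful when two copies are visible)

Posting the actual output of zpool import from the X4170 would make it much easier to say which case this is.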

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Roy Sigurd Karlsbakk
- Original Message - > On Jul 20, 2010, at 6:12 AM, v wrote: > > > Hi, > > for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to > > one physical disk iops, since raidz1 is like raid5 , so is raid5 has > > same performance like raidz1? ie. random iops equal to one physical

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Ross Walker
On Jul 20, 2010, at 6:12 AM, v wrote: > Hi, > for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to one > physical disk iops, since raidz1 is like raid5 , so is raid5 has same > performance like raidz1? i.e. random iops equal to one physical disk's iops. On reads, no, any part of

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Roy Sigurd Karlsbakk
> I'm surprised you're even getting 400MB/s on the "fast" > configurations, with only 16 drives in a Raidz3 configuration. > To me, 16 drives in Raidz3 (single Vdev) would do about 150MB/sec, as > your "slow" speeds suggest. That'll be for random i/o. His i/o here is sequential, so the i/o is spre

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Darren J Moffat
On 20/07/2010 11:46, Roy Sigurd Karlsbakk wrote: - Original Message - Hi, for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to one physical disk iops, since raidz1 is like raid5 , so is raid5 has same performance like raidz1? i.e. random iops equal to one physical disk's i

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread Roy Sigurd Karlsbakk
- Original Message - > Hi, > for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to > one physical disk iops, since raidz1 is like raid5 , so is raid5 has > same performance like raidz1? i.e. random iops equal to one physical > disk's iops. Mostly, yes. Traditional RAID-5 is li
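To put rough numbers on it (a back-of-the-envelope sketch, assuming ~100 random IOPS per 7,200 rpm drive and a 5-disk group doing small random I/O):

  RAID-5, 4+1:  reads  ~ 5 x 100 = 500 IOPS   (each small read touches one disk)
                writes ~ 500 / 4  = 125 IOPS   (read-modify-write: 2 reads + 2 writes per logical write)
  raidz1, 4+1:  reads  ~ 100 IOPS              (each ZFS block is spread across the data disks and
                                                checksum-verified, so they all seek together)
                writes ~ 100 IOPS or better    (full-stripe writes, no read-modify-write, and txg
                                                batching can turn random writes into sequential ones)

So the "one disk's worth of IOPS per vdev" rule mostly bites on small random reads; on writes, raidz1 often comes out ahead of a traditional RAID-5 doing read-modify-write.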

[zfs-discuss] zfs raidz1 and traditional raid 5 performance comparison

2010-07-20 Thread v
Hi, for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to one physical disk iops, since raidz1 is like raid5 , so is raid5 has same performance like raidz1? i.e. random iops equal to one physical disk's iops. Regards Victor

[zfs-discuss] How does zil work

2010-07-20 Thread v
Hi, A basic question regarding how zil works: For asynchronous write, will zil be used? For synchronous write, and if io is small, will the whole io be placed on the zil, or just a pointer saved into the zil? What about large ios? Regards Victor

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Giovanni Tirloni
On Tue, Jul 20, 2010 at 12:59 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Richard Jahnel >> >> I've also tried mbuffer, but I get broken pipe errors part way through >> the transfer. > > The standard answer
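Besides mbuffer, the other standard trick is a raw netcat pipe, which removes the ssh cipher overhead entirely; there is no encryption or authentication, so it is only appropriate on a trusted network, and the exact listen flags vary between netcat flavours. A sketch with placeholder names:

  receiver# nc -l 9090 | zfs recv -F tank/backup
  sender#   zfs send pool/fs@sunday | nc receiver-host 9090

A middle ground, if the ssh on both ends supports it, is keeping ssh but switching to a cheaper cipher such as arcfour (ssh -c arcfour), which often improves throughput considerably on older CPUs.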

Re: [zfs-discuss] Debunking the dedup memory myth

2010-07-20 Thread Robert Milkowski
On 20/07/2010 04:41, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Richard L. Hamilton I would imagine that if it's read-mostly, it's a win, but otherwise it costs more than it saves. Even more conventional compress
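For sizing the dedup table rather than guessing: each in-core DDT entry costs a few hundred bytes (figures around 250-320 bytes are commonly quoted), so with the default 128K recordsize, 1 TB of unique data is roughly 8 million blocks, i.e. very roughly 2-2.5 GB of ARC/L2ARC just for the DDT; smaller blocks scale that up proportionally. zdb can give real numbers for an existing pool (pool name is a placeholder):

  # zdb -S tank       (simulates dedup on the pool's current data and prints a DDT histogram)
  # zdb -DD tank      (DDT statistics for a pool that already has dedup enabled)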

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Robert Milkowski
On 20/07/2010 07:59, Chad Cantwell wrote: I've just compiled and booted into snv_142, and I experienced the same slow dd and scrubbing as I did with my 142 and 143 compilations and with the Nexenta 3 RC2 CD. So, this would seem to indicate a build environment/process flaw rather than a regress

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Chad Cantwell
On Mon, Jul 19, 2010 at 07:01:54PM -0700, Chad Cantwell wrote: > On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote: > > On 20/07/10 10:40 AM, Chad Cantwell wrote: > > >fyi, everyone, I have some more info here. in short, rich lowe's 142 works > > >correctly (fast) on my hardware,