[zfs-discuss] How to get nfs work with zfs?

2011-12-05 Thread darkblue
I am going to share a directory and its subdirectories through NFS to virtual hosts, including Xen (CentOS/NetBSD) and ESXi, but it failed. The following steps are what I did on Solaris 11: > zfs create tank/iso > zfs create tank/iso/linux > zfs create tank/iso/windows > > share -F nfs -o rw,nosuid,root=VM-host1:VM
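One common cause of this failure (not stated in the truncated post): tank/iso/linux and tank/iso/windows are separate filesystems, so sharing tank/iso alone exposes only empty mountpoint directories to NFSv3 clients such as ESXi. Each child dataset has to be shared, and mounted by the client, on its own; NFSv4 mirror mounts can traverse them automatically. A minimal sketch using the persistent sharenfs property rather than the one-shot share command (access options omitted):

    zfs set sharenfs=on tank/iso
    zfs set sharenfs=on tank/iso/linux
    zfs set sharenfs=on tank/iso/windows
    # clients then mount each child path explicitly, e.g. from a Linux dom0:
    # mount server:/tank/iso/linux /mnt/iso/linux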

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-12-05 Thread John D Groenveld
In message <4ebbfb5...@oracle.com>, Cindy Swearingen writes: > CR 7102272: > > ZFS storage pool created on a 3 TB USB 3.0 device has device label > problems > > Let us know if this is still a problem in the OS11 FCS release. I finally got upgraded from Solaris 11 Express SRU 12 to S11 FCS. Solaris
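Not part of the original post, but one quick way to check whether the CR 7102272 label problem reappears on the FCS release is to create a throwaway pool on the USB disk and dump its labels (the device name here is illustrative):

    zpool create -f usb3test c5t0d0     # whole-disk pool on the 3 TB USB 3.0 device
    zdb -l /dev/rdsk/c5t0d0s0           # a healthy pool prints four complete labels
    zpool export usb3test               # export/import to confirm the label survives
    zpool import usb3test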

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi All, Just a follow-up - it seems that whatever it was doing eventually finished and the speed picked back up again. The send/recv finally finished -- I guess I could do with a little patience :) Lachlan On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy wrote: > Hi All, > > We are cur

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-05 Thread Jim Klimov
Well, I have an intermediate data point. One scrub run completed without finding any newer errors (besides one at the pool level and two at the raidz2 level). "Zpool clear" alone did not fix it, meaning that the pool:metadata:<0x0> was still reported as problematic, but a second attempt at "zpool
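For readers following along, the sequence being described boils down to something like this (pool name is a placeholder):

    zpool status -v tank    # lists the pool-level and raidz2-level errors
    zpool clear tank        # clears error counters; metadata:<0x0> may still be shown
    zpool scrub tank        # re-verifies every block in the pool
    zpool status -v tank    # check again once the scrub completes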

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-05 Thread Ryan Wehler
Whoops. Make that 9211-4i cards. :) Still promising.

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-05 Thread Ryan Wehler
Here's lsiutil output after swapping to a 6Gb/s backplane and dual 9211-8i cards on a fresh boot. Much better. :) Adapter Phy 0: Link Up, No Errors Adapter Phy 1: Link Up, No Errors Adapter Phy 2: Link Up, No Errors Adapter Phy 3: Link Up, No Errors Adapter Phy 4: Link Down, No Errors Adapter Ph
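For reference (not from the post itself), the per-phy error counters usually come from lsiutil's interactive menu; from memory the non-interactive shortcut is roughly the following, but option numbers vary between lsiutil versions, so verify against the menu first:

    lsiutil             # interactive: select the adapter, then "Display phy counters"
    lsiutil -p 1 12     # assumed shortcut: port 1, menu option 12 (display phy counters)
    lsiutil -p 1 13     # assumed: clear the counters before re-testing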

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi Bob, On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote: > On Mon, 5 Dec 2011, Lachlan Mulcahy wrote: >> Anything else you suggest I'd check for faults? (Though I'm sort of doubting it is an issue, I'm happy to be thorough) > Try running >

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote: Anything else you suggest I'd check for faults? (Though I'm sort of doubting it is an issue, I'm happy to be thorough) Try running fmdump -ef and see if new low-level fault events are coming in during the zfs receive. Bob -- Bob Friesenhahn b
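For anyone unfamiliar with the FMA tools Bob mentions: fmdump -e reads the error (telemetry) log and -f follows it as new events arrive, so one way to watch for low-level hardware events while the receive runs is:

    fmdump -ef      # follow new error-report events live (Ctrl-C to stop)
    fmdump -eV      # afterwards, print the accumulated error events verbosely
    fmadm faulty    # list any faults the diagnosis engines have actually raised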

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi Bob, On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote: > On Mon, 5 Dec 2011, Lachlan Mulcahy wrote: >> genunix`list_next 5822 3.7% >> unix`mach_cpu_idle 150261 96.1% >
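The figures being quoted look like per-function kernel profile counts (hotkernel-style output). A rough DTrace sketch of how such a breakdown can be collected, not necessarily Lachlan's exact script:

    # sample the kernel PC ~997 times per second on every CPU for 30 seconds,
    # then print a tally of samples per kernel function
    dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); } tick-30s { exit(0); }'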

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bill Sommerfeld
On 12/05/11 10:47, Lachlan Mulcahy wrote: > zfs`lzjb_decompress 10 0.0% > unix`page_nextn 31 0.0% > genunix`fsflush_do_pages 37 0.0% > zfs`dbuf_free_range

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bob Friesenhahn
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote: genunix`list_next 5822 3.7% unix`mach_cpu_idle 150261 96.1% Rather idle. Top shows: PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND 22945 root

[zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Lachlan Mulcahy
Hi All, We are currently doing a zfs send/recv with mbuffer to send incremental changes across, and it seems to be running quite slowly, with zfs receive the apparent bottleneck. The process itself seems to be using almost 100% of a single CPU in "sys" time. Wondering if anyone has any ideas if
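The post doesn't show the exact pipeline, but an incremental zfs send through mbuffer typically looks something like the following (pool, snapshot, host, and port names are placeholders):

    # receiving side: listen on a TCP port, buffer, and feed zfs receive
    mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/data

    # sending side: stream the incremental between two snapshots to the receiver
    zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O recvhost:9090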