I am trying to share a directory and its subdirectories through NFS to virtual
hosts, which include Xen (CentOS/NetBSD) and ESXi, but it failed. The following
steps are what I did:
Solaris 11:
> zfs create tank/iso
> zfs create tank/iso/linux
> zfs create tank/iso/windows
>
> share -F nfs -o rw,nosuid,root=VM-host1:VM
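One likely cause: tank/iso/linux and tank/iso/windows are separate ZFS filesystems,
so sharing only the parent does not export the children; each child has to be shared
as well (or the sharenfs property used so they inherit it). A minimal sketch, assuming
the same options as above and the default mountpoints; the host list is a placeholder
since the original command is truncated:

  # Share each dataset explicitly with the legacy share command:
  share -F nfs -o rw,nosuid,root=VM-host1 /tank/iso
  share -F nfs -o rw,nosuid,root=VM-host1 /tank/iso/linux
  share -F nfs -o rw,nosuid,root=VM-host1 /tank/iso/windows

  # Or let ZFS manage the shares; sharenfs is inherited by the children,
  # so each child filesystem gets its own NFS share:
  zfs set sharenfs=on tank/iso

Note that an NFSv3 client mounting /tank/iso will still see only empty directories
where the child filesystems sit; each child must be mounted separately (or NFSv4
mirror mounts used).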
In message <4ebbfb5...@oracle.com>, Cindy Swearingen writes:
>CR 7102272:
>
> ZFS storage pool created on a 3 TB USB 3.0 device has device label problems
>
>Let us know if this is still a problem in the OS11 FCS release.
I finally got upgraded from Solaris 11 Express SRU 12 to S11 FCS.
Solaris
Hi All,
Just a follow-up - it seems that whatever it was doing eventually finished and
the speed picked back up again. The send/recv finally completed -- I guess I
could do with a little patience :)
Lachlan
On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy wrote:
> Hi All,
>
> We are cur
Well, I have an intermediate data point. One scrub run
completed without finding any new errors (besides one
at the pool level and two at the raidz2 level).
"zpool clear" alone did not fix it, meaning that the
pool:metadata:<0x0> entry was still reported as problematic,
but a second attempt at "zpool
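The sequence being described is roughly the usual one for persistent errors that a
scrub should retire; a minimal sketch, with "tank" as a placeholder pool name:

  zpool clear tank       # reset error counters and clear errors ZFS considers resolved
  zpool scrub tank       # re-read and verify every block so stale error entries can be retired
  zpool status -v tank   # check whether the metadata:<0x0> entry is still listed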
Whoops. Make that 9211-4i cards. :) Still promising.
Here's LSIUTIL after swapping to a 6Gb backplane and dual 9211-8i cards on
a fresh boot.
Much better. :)
Adapter Phy 0: Link Up, No Errors
Adapter Phy 1: Link Up, No Errors
Adapter Phy 2: Link Up, No Errors
Adapter Phy 3: Link Up, No Errors
Adapter Phy 4: Link Down, No Errors
Adapter Ph
Hi Bob,
On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>>
>> Anything else you suggest I'd check for faults? (Though I'm sort of
>> doubting it is an issue, I'm happy to be thorough)
>>
>
> Try running
>
>
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
> Anything else you suggest I'd check for faults? (Though I'm sort of
> doubting it is an issue, I'm happy to be thorough)

Try running

  fmdump -ef

and see if new low-level fault events are coming in during the zfs
receive.
Bob
--
Bob Friesenhahn
b
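For reference, the FMA tools Bob is pointing at are standard Solaris commands; a
minimal sketch of how they are typically used in this situation (no output from
the actual system is shown here):

  fmdump -e      # one-line summary of logged error events (ereports)
  fmdump -eV     # the same events with full detail
  fmdump -ef     # follow the error log live, e.g. while the zfs receive runs
  fmadm faulty   # resources the diagnosis engines have already flagged as faulty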
Hi Bob,
On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>> genunix`list_next       5822   3.7%
>> unix`mach_cpu_idle    150261  96.1%
>>
>
>
On 12/05/11 10:47, Lachlan Mulcahy wrote:
> zfs`lzjb_decompress          10   0.0%
> unix`page_nextn              31   0.0%
> genunix`fsflush_do_pages     37   0.0%
> zfs`dbuf_free_range
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
> genunix`list_next       5822   3.7%
> unix`mach_cpu_idle    150261  96.1%

Rather idle.

> Top shows:
> PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
> 22945 root
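The per-function percentages quoted above are kernel profile samples; the thread
does not show how they were collected, but here is a hedged sketch of one standard
way to get an equivalent breakdown with DTrace (an assumption on my part, not
necessarily the tool Lachlan used):

  # Sample the kernel program counter at 997 Hz on every CPU for 30 seconds
  # and count samples per kernel function; arg0 is non-zero only in kernel context.
  dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); } tick-30s { exit(0); }'

A function such as unix`mach_cpu_idle dominating the counts, as above, means the
CPUs are mostly idle rather than pegged in ZFS code.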
Hi All,
We are currently doing a zfs send/recv with mbuffer to send incremental
changes across and it seems to be running quite slowly, with zfs receive
the apparent bottleneck.
The process itself seems to be using almost 100% of a single CPU in "sys"
time.
Wondering if anyone has any ideas if
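For context, the setup being described is the usual network pipeline; a minimal
sketch with placeholder pool, snapshot, and host names (none of these come from
the thread):

  # Receiving host: buffer the incoming stream and hand it to zfs receive.
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # Sending host: send the increment between two snapshots through mbuffer over TCP.
  zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O recv-host:9090

With this arrangement mbuffer smooths out bursts on the network side, so a receiver
pinned at ~100% of one CPU in sys time, as described above, points at zfs receive
itself (or the pool it writes to) rather than the transport.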