Hi,
I'm using b48 on two machines. When I issue the following, I get a
panic on the receiving machine:
$ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2 zfs recv -F data
Doing the following caused no problems:
zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] |
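For anyone trying to reproduce this, here is a minimal sketch of the sequence I would expect to hit the same code path; the pool, dataset, and snapshot names (tank/data, snap1, snap2) are placeholders, since the real ones are redacted above:

$ zfs snapshot tank/data@snap1
# initial full send creates the dataset on the receiving machine
$ zfs send tank/data@snap1 | ssh machine2 zfs recv tank/data
$ zfs snapshot tank/data@snap2
# incremental send; -F rolls back any changes made on the receiving side first
$ zfs send -i tank/data@snap1 tank/data@snap2 | ssh machine2 zfs recv -F tank/data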
> "Chad" == Chad Leigh <-- Shire.Net LLC" <[EMAIL PROTECTED]>> writes:
Chad> snoop does not show me the reply packets going back. What do I
Chad> need to do to go both ways?
It's possible that performance issues are causing snoop to miss the
replies.
If your server has multiple network interfaces, the replies may also be
going out an interface other than the one snoop is watching.
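As an illustration only (the interface name bge0 and the filter are placeholders), you can point snoop at a specific interface and capture to a file, which also makes it less likely to drop packets under load:

# capture traffic to/from the other machine on one specific interface;
# writing raw packets to a file keeps snoop from falling behind
$ snoop -d bge0 -o /tmp/capture.snoop host machine2
# inspect the capture afterwards
$ snoop -i /tmp/capture.snoop -V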
I believe I have tracked down the problem discussed in the "low
disk performance" thread. It seems that an alignment issue will
cause small file/block performance to be abysmal on a RAID-Z.
metaslab_ff_alloc() seems to naturally align all allocations, and
so all blocks will be aligned to asize o
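A rough way to see this for yourself (pool name, file size, and count below are arbitrary) is to generate a small-file workload on the RAID-Z pool and watch per-disk activity:

# create a few thousand small files on a RAID-Z backed filesystem
$ cd /tank/smallfiles
$ i=0; while [ $i -lt 2000 ]; do dd if=/dev/zero of=f$i bs=2k count=1 >/dev/null 2>&1; i=`expr $i + 1`; done
# in another terminal, watch per-device statistics; with the alignment
# problem the later disks in the stripe see many more, smaller writes
$ iostat -xn 1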
I can also reproduce this on my test machines and have opened CR
6475506, "panic in dmu_recvbackup due to NULL pointer dereference",
to track this problem. This is most likely due to recent changes
made in the snapshot code for -F. I'm looking into it...
Thanks for testing!
Noel
On 9/26/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
Chris Csanady wrote:
> What I have observed with the iosnoop dtrace script is that the
> first disks aggregate the single block writes, while the last disk(s)
> are forced to do numerous writes every other sector. If you would
> like to
Thanks, Chris, for digging into this and sharing your results. These
seemingly stranded sectors are properly accounted for in terms of
space utilization, since they are genuinely unusable while maintaining
integrity in the face of a single drive failure.
The way the RAID-Z space accountin
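To make the accounting concrete, here is a rough worked example, assuming 512-byte sectors, single-parity RAID-Z, and the allocation roundup in vdev_raidz_asize() as I read it (so treat the numbers as illustrative, not authoritative):

  a 1 KB block = 2 data sectors + 1 parity sector = 3 sectors,
  rounded up to a multiple of (nparity + 1) = 2, so 4 sectors are charged

The extra "skip" sector could never hold a minimal data-plus-parity allocation on its own, so charging it to the block keeps the free-space accounting honest.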
I've found a small bug in the ZFS & Zones integration in the Sol10 06/06 release.
This evening I started tweaking my configuration to make it consistent (I like
orthogonal naming standards) and hit upon this situation:
- Set up a ZFS clone as /zfspool/bluenile/cloneapps; this is a clone of my global
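For reference, a sketch of one way such a clone might be created and handed to the zone; the origin snapshot name and the use of a delegated dataset are my assumptions (the message is cut off above) and may not match the original configuration:

# clone an existing snapshot to hold the zone's applications (origin name is made up)
$ zfs snapshot zfspool/apps@gold
$ zfs clone zfspool/apps@gold zfspool/bluenile/cloneapps
# delegate the clone to the zone
$ zonecfg -z bluenile
zonecfg:bluenile> add dataset
zonecfg:bluenile:dataset> set name=zfspool/bluenile/cloneapps
zonecfg:bluenile:dataset> end
zonecfg:bluenile> commit
zonecfg:bluenile> exit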