I have removed all L2ARC devices as a precaution. Has anyone seen this
error with no L2ARC device configured?
On Thu, Dec 13, 2012 at 9:03 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 12 Dec 2012, Jamie Krier wrote:
>
>>
>>
>> I am thinking about switching to an Illumos
On 12/14/12 10:07 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
Is that right? You can't use zfs send | zfs receive to send from a newer
version and receive on an older version?
No. You can, with recv, override any property in the sending stream that can be
set from t
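A quick sanity check before sending between hosts at different releases is to compare the ZFS versions each side supports with the versions the sending pool and dataset actually use (a sketch; "tank" and "tank/data" are placeholder names):

  # List the pool and filesystem versions this host's ZFS code supports
  zpool upgrade -v
  zfs upgrade -v
  # Show the versions the sending pool and dataset are actually at
  zpool get version tank
  zfs get version tank/data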
On Fri, Dec 14 at 9:29, Fred Liu wrote:
We have found mbuffer to be the fastest solution. Our rates for large
transfers on 10GbE are:
280MB/s  mbuffer
220MB/s  rsh
180MB/s  HPN-ssh unencrypted
 60MB/s  standard ssh
The tradeoff is that mbuffer is a little more complicated to script; rs
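For reference, a minimal sketch of the kind of mbuffer pipeline being compared above (host name, port, block and buffer sizes are placeholders, not recommendations):

  # Receiving host: listen on a TCP port, buffer in RAM, feed zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

  # Sending host: stream the snapshot into mbuffer and across the wire
  zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O recvhost:9090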
In my own experiments with my own equivalent of mbuffer, it's well worth
giving the receiving side a buffer which is sized to hold the amount of
data in a transaction commit, which allows ZFS to be banging out one tx
group to disk, whilst the network is bringing the next one across for
it. This
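In other words, the receive-side buffer (the -m value in a pipeline like the one above) would be sized to hold at least one transaction group's worth of data; a rough sketch, with the figure purely illustrative:

  # Make the in-RAM buffer big enough for roughly one full txg, so the network
  # can keep filling it while ZFS is still flushing the previous txg to disk
  mbuffer -s 128k -m 4G -I 9090 | zfs receive -F tank/backup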
>
> We have found mbuffer to be the fastest solution. Our rates for large
> transfers on 10GbE are:
>
> 280MB/s  mbuffer
> 220MB/s  rsh
> 180MB/s  HPN-ssh unencrypted
>  60MB/s  standard ssh
>
> The tradeoff is that mbuffer is a little more complicated to script; rsh is,
> well, you know;
Post in the list.
> -Original Message-
> From: Fred Liu
> Sent: Friday, December 14, 2012 23:41
> To: 'real-men-dont-cl...@gmx.net'
> Subject: RE: [zfs-discuss] any more efficient way to transfer snapshot
> between two hosts than ssh tunnel?
>
>
>
> >
> > Hi Fred,
> >
> > I played with zfs send
>
> I've heard you could, but I've never done it. Sorry I'm not much help,
> except as a cheerleader. You can do it! I think you can! Don't give
> up! heheheheh
> Please post back whatever you find, or if you have to figure it out for
> yourself, then blog about it and post that.
Aha! Gotc
We have found mbuffer to be the fastest solution. Our rates for large
transfers on 10GbE are:
280MB/s  mbuffer
220MB/s  rsh
180MB/s  HPN-ssh unencrypted
 60MB/s  standard ssh
The tradeoff is that mbuffer is a little more complicated to script; rsh is, well,
you know; and hpn-ssh requires
Hey Sol,
Can you send me the core file, please?
I would like to file a bug for this problem.
Thanks, Cindy
On 12/14/12 02:21, sol wrote:
Here it is:
# pstack core.format1
core 'core.format1' of 3351: format
-----------------  lwp# 1 / thread# 1  --------------------
0806de73 can_efi_disk_be_ex
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Netherton
>
> At this point, the only thing would be to use 11.1 to create a new pool at
> 151's
> version (-o version=) and top level dataset (-O version=). Recreate the file
> system h
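On the 11.1 host that might look something like the following (pool name and disk are placeholders; pool version 28 and filesystem version 5 are the last versions the two code bases share):

  # Create the pool and its top-level dataset at versions 151a understands
  zpool create -o version=28 -O version=5 tank c0t1d0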
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of sol
>
> I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but
> it crashed and dumped core.
>
> However the zpool 'create' command managed to create a pool on the whole
> d
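For comparison, a whole-disk create of that sort would be along these lines (device name is a placeholder):

  # Give zpool the whole disk; it writes its own EFI label, so format(1M)
  # never has to touch the drive
  zpool create tank c3t2d0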
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> BTW, has anyone played with NDMP in Solaris? Or is it feasible to transfer
> snapshots via the NDMP protocol?
I've heard you could, but I've never done it. Sorry I'm not much help, except
as
On 13 December, 2012 - Jan Owoc sent me these 1,0K bytes:
> Hi,
>
> On Thu, Dec 13, 2012 at 9:14 AM, sol wrote:
> > Hi
> >
> > I've just tried to use illumos (151a5) to import a pool created on Solaris
> > (11.1), but it failed with an error about the pool being incompatible.
> >
> > Are we now at
Here it is:
# pstack core.format1
core 'core.format1' of 3351: format
-----------------  lwp# 1 / thread# 1  --------------------
0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
08068a41 c_disk (4, 806f250, 0, 0, 0, 0) +
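If it helps with the bug report, the same core can also be inspected with mdb, e.g.:

  # Open the core in the modular debugger and dump the crashed thread's stack
  mdb core.format1
  > ::status
  > ::stack
  > $q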