I can now confirm that NexentaCore runs without a hitch on the N36L
Darren J Moffat writes:
> On 11/15/10 19:36, David Magda wrote:
>
>>> Using ZFS encryption support can be as easy as this:
>>>
>>> # zfs create -o encryption=on tank/darren
>>> Enter passphrase for 'tank/darren':
>>> Enter again:
>>
>
>
>> 2. Both CCM and GCM modes of operation …
Tim,
>
> On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham wrote:
> sridhar,
>
> > I have done the following (which is required for my case)
> >
> > Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
> > created an array-level snapshot of the device using "dscli" to another
> >
I've done mpxio-style multipathing over multiple IP links in Linux using
multipathd, and it works just fine. It's not part of the initiator, but it
accomplishes the same thing. That was with a Linux IET target; I need to try
it here with a COMSTAR target.
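A rough sketch of that kind of setup, assuming a Linux open-iscsi initiator
plus multipathd; the target IQN and portal addresses below are hypothetical:

    # Log the initiator into the same target over two separate IP links,
    # then let multipathd aggregate the sessions into one multipathed device.
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1
    iscsiadm -m node -T iqn.2010-11.org.example:store -p 192.168.10.1 --login
    iscsiadm -m node -T iqn.2010-11.org.example:store -p 192.168.20.1 --login
    multipath -ll    # both sessions should appear as paths to a single LUN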
-----Original Message-----
From: Ross Walker
On Nov 16, 2010, at 7:49 PM, Jim Dunham wrote:
> On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
>> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>>> AFAIK, ESX/ESXi doesn't support L4 hash, so that's a non-starter.
>>
>> For iSCSI one just needs to have a second (third or fourth...) iSCSI session
On Wed, 17 Nov 2010, LEES, Cooper wrote:
ZFS Gods,
I have been approved to buy 2 x F20 PCIe cards for my x4540 to
increase our IOPS, and I was wondering what configuration would give
the most benefit in extra IOPS (both reading and writing) on my zpool.
To clarify, adding a dedicated intent log (slog) …
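A hedged sketch of what that suggestion might look like on the pool named in
the original post (cesspool); the device names for the F20 flash modules are
hypothetical, and whether to mirror the slog is a judgment call:

    # Dedicated intent log (slog) to absorb synchronous writes:
    zpool add cesspool log mirror c5t0d0 c5t1d0
    # Remaining flash modules as L2ARC cache devices to help random reads:
    zpool add cesspool cache c5t2d0 c5t3d0
    zpool status cesspool    # verify the new log and cache vdevs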
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>> AFAIK, ESX/ESXi doesn't support L4 hash, so that's a non-starter.
>
> For iSCSI one just needs to have a second (third or fourth...) iSCSI session
> on a different IP to the target and run mpio/mpxio/m…
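On the Solaris side, a rough sketch of the same idea; the target IQN is
hypothetical and exact option support varies by release, so treat this as an
outline rather than a tested recipe:

    # Ask the initiator to open two sessions to the same target (MS/T):
    iscsiadm modify target-param -c 2 iqn.2010-11.org.example:store
    iscsiadm list target -v    # should now show two sessions to that target
    mpathadm list lu           # both paths coalesced behind one scsi_vhci LUN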
On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham wrote:
> sridhar,
>
> > I have done the following (which is required for my case)
> >
> > Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
> > created an array-level snapshot of the device using "dscli" to another
> device which i
On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>
>
> On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin wrote:
> > "tc" == Tim Cook writes:
>
>tc> Channeling Ethernet will not make it any faster. Each
>tc> individual connection will be limited to 1gbit. iSCSI with
>tc> mpxio may wo
sridhar,
> I have done the following (which is required for my case)
>
> Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
> created an array-level snapshot of the device using "dscli" to another device
> which is successful.
> Now I make the snapshot device visible to anot
ZFS Gods,
I have been approved to buy 2 x F20 PCIe cards for my x4540 to increase our
IOPS, and I was wondering what configuration would give the most benefit in
extra IOPS (both reading and writing) on my zpool.
Currently I have the following storage zpool, called cesspool:
pool: cesspool
state: ONLINE
Does OpenSolaris/Solaris 11 Express have a driver for it already?
Has anyone used one already?
-Kyle
On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin wrote:
> > "tc" == Tim Cook writes:
>
>tc> Channeling Ethernet will not make it any faster. Each
>tc> individual connection will be limited to 1gbit. iSCSI with
>tc> mpxio may work, nfs will not.
>
> well...probably you will run into
Ummm… there's a difference between data integrity and data corruption.
Integrity is enforced programmatically by something like a DBMS: it sets up
basic rules that ensure the programmer, program, or algorithm adheres to a
level of sanity and bounds.
Corruption is where cosmic rays, bit rot, ma…
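For the corruption side, ZFS's own answer is block checksums; a small
illustration, assuming a pool named tank:

    # A scrub walks every allocated block and verifies it against its checksum:
    zpool scrub tank
    zpool status -v tank    # reports scrub progress and any checksum errors found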
> "tc" == Tim Cook writes:
tc> Channeling Ethernet will not make it any faster. Each
tc> individual connection will be limited to 1gbit. iSCSI with
tc> mpxio may work, nfs will not.
well...probably you will run into this problem, but it's not
necessarily totally unsolved.
I am
Hi. I ran into that damn problem too. And after days of searching I finally
found this software: Delete Long Path File Tool.
It's GREAT. You can find it here: http://www.deletelongfile.com
On 11/17/10 05:45 AM, Cindy Swearingen wrote:
Hi Ian,
The pool and file system version information is available in
the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/821-1448/appendixa-1?l=en&a=view
The OpenSolaris version pages are up-to-date now also.
Thanks Cindy!
--
Ia
Hi,
I have done the following (which is required for my case)
Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
created an array-level snapshot of the device using "dscli" to another device
which is successful.
Now I make the snapshot device visible to another host (host2)
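A minimal sketch of the next step, assuming the snapshot LUN is now mapped to
host2; this is an outline of the usual import path, not a statement about what
the array guarantees:

    # host2: scan the newly visible device(s) for importable pools
    zpool import
    # the copy was never exported by host1, so the import must be forced
    zpool import -f smpool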
Hi Ian,
The pool and file system version information is available in
the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/821-1448/appendixa-1?l=en&a=view
The OpenSolaris version pages are up-to-date now also.
Thanks,
Cindy
On 11/15/10 16:42, Ian Collins wrote:
Is there an u
On Nov 15, 2010, at 14:36, David Magda wrote:
Looking forward to playing with it. Some questions:
1. Is it possible to do a 'zfs create -o encryption=off tank/darren/music'
after the above command? I don't much care if my MP3s are encrypted. :)
2. Both CCM and GCM modes of operation are s…
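A hedged sketch of the kind of layout being asked about. Whether an
unencrypted child of an encrypted parent is allowed is exactly the open
question here, so the sketch keeps the unencrypted data in a sibling dataset
instead; dataset names other than tank/darren are made up:

    # Parent dataset encrypted with the default mode:
    zfs create -o encryption=on tank/darren
    # Choosing a specific cipher mode and key length instead of the default:
    zfs create -o encryption=aes-256-gcm tank/darren-gcm
    # Unencrypted MP3s kept beside, not below, the encrypted dataset:
    zfs create -o encryption=off tank/music
    zfs get encryption tank/darren tank/darren-gcm tank/music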
On 11/15/10 19:36, David Magda wrote:
On Mon, November 15, 2010 14:14, Darren J Moffat wrote:
Today Oracle Solaris 11 Express was released and is available for
download[1]; this release includes on-disk encryption support for ZFS.
Using ZFS encryption support can be as easy as this:
# zf
Actually, I did this very thing a couple of years ago with M9000s and EMC DMX4s
... with the exception of the "same host" requirement you have (i.e. the thing
that requires the GUID change).
If you want to import the pool back into the host where the cloned pool is also
imported, it's not just
On Nov 15, 2010, at 8:48 AM, Frank wrote:
> I am a newbie on Solaris.
> We recently purchased a Sun SPARC M3000 server. It comes with 2 identical
> hard drives. I want to set up a RAID 1. After searching on Google, I found
> that the hardware RAID was not working with the M3000. So I am here to look
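A rough sketch of the usual ZFS answer, mirroring the root pool instead of
using hardware RAID; the disk names are hypothetical and the second disk's
slice layout must already match the boot disk:

    # Attach the second disk to the existing root pool to form a mirror:
    zpool attach rpool c0t0d0s0 c0t1d0s0
    # On SPARC, install the boot block on the newly attached disk:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    # Wait for "zpool status rpool" to report the resilver as complete.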
Measure the I/O performance with iostat. You should see something that
looks sorta like (iostat -zxCn 10):
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 5948.9  349.3 40322.3 5238.1  0.1 16.7    0.0    2.7   0 330 c…
comment below...
On Nov 15, 2010, at 4:21 PM, Matt Banks wrote:
>
> On Nov 15, 2010, at 4:15 PM, Erik Trimble wrote:
>
>> On 11/15/2010 2:55 PM, Matt Banks wrote:
>>> I asked this on the x86 mailing list (and got an "it should work" answer),
>>> but this is probably more of the appropriate place
On Nov 15, 2010, at 2:11 AM, sridhar surampudi wrote:
> Hi, I am looking along similar lines,
>
> my requirement is
>
> 1. create a zpool on one or many devices (LUNs) from an array (the array can
> be IBM or HP EVA or EMC etc., not SS7000).
> 2. Create file systems on zpool
> 3. Once file systems ar