my sys or
>>> the server I am trying to attach the disk to.
>>
>> Did you try to do as Jim Dunham said?
>>
>> zpool create test_pool c5t0d0p0
>> zpool destroy test_pool
>> format -e c5t0d0p0
>> partition
>>
Kitty,
> I am trying to mount a WD 2.5TB external drive (was IFS:NTFS) to my OSS box.
>
> After connecting it to my Ultra24, I ran "pfexec fdisk /dev/rdsk/c5t0d0p0" and
> changed the Type to EFI. Then, "format -e" or "format" showed the disk was
> configured with only 291.10 GB.
The following mess
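A minimal sketch of the relabeling sequence Jim suggested above (the device name c5t0d0 comes from this thread; adjust for your system):
  zpool create -f junk c5t0d0    # a whole-disk pool lays down a fresh EFI label across the full capacity
  zpool destroy junk
  format -e c5t0d0               # expert mode; partition/print should now show the full ~2.5 TB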
Don,
> Is it possible to modify the GUID associated with a ZFS volume imported into
> STMF?
>
> To clarify- I have a ZFS volume I have imported into STMF and export via
> iscsi. I have a number of snapshots of this volume. I need to temporarily go
> back to an older snapshot without removing a
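One possible sequence, sketched only (not from this thread; pool, volume, and GUID are hypothetical, and the guid property of create-lu should be checked against stmfadm(1M) on your release):
  stmfadm offline-lu 600144f0deadbeef0000000000000001    # GUID as shown by 'stmfadm list-lu'
  stmfadm delete-lu 600144f0deadbeef0000000000000001     # removes the LU definition, not the zvol data
  zfs rollback -r tank/iscsivol@older
  stmfadm create-lu -p guid=600144f0deadbeef0000000000000001 /dev/zvol/rdsk/tank/iscsivol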
Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Brandon High
>>
>> Write caching will be disabled on devices that use slices. It can be
>> turned back on by using format -e
>
> My experience has been, despite wha
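For reference, a sketch of the expert-mode path to the write cache (menu names from format's expert mode; verify the exact menus on your release):
  format -e c0t1d0
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable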
Roy,
> Sorry for crossposting, but I'm not really sure where this question belongs.
>
> I'm trying to troubleshoot a connection from an s10 box to a SANRAD iSCSI
> concentrator. After some network issues on the switch, the s10 box seems to
> lose iSCSI connection to the SANRAD box. The error me
Hi Janice,
> Hello. I am looking to see if performance data exists for on-disk dedup. I
> am currently in the process of setting up some tests based on input from
> Roch, but before I get started, thought I'd ask here.
I find it somewhat interesting that you are asking this question on behalf
Roy,
> Hi all
>
> There was some discussion on #opensolaris recently about L2ARC being
> dedicated to a pool, or shared. I figured since it's associated with a pool,
> it must be local, but I really don't know.
An L2ARC is made up of one or more "Cache Devices" associated with a single ZFS
st
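A quick illustration (pool and device names hypothetical): cache devices are added to, and removed from, one specific pool:
  zpool add tank cache c4t2d0
  zpool status tank         # the device appears under a 'cache' section of that pool only
  zpool remove tank c4t2d0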
Tim,
>
> On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham wrote:
> sridhar,
>
> > I have done the following (which is required for my case)
> >
> > Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
> > created an array-level snapshot of
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
>
> For iSCSI one just needs to have a second (third or fourth...) iSCSI session
> on a different IP to the target and run mpio/mpxio/m
sridhar,
> I have done the following (which is required for my case)
>
> Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1
> created an array-level snapshot of the device using "dscli" to another device
> which is successful.
> Now I make the snapshot device visible to anot
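A sketch of the import step on the second host (pool name taken from the post above):
  zpool import              # the copied LUN shows up as an importable pool named smpool
  zpool import -f smpool    # -f is needed because the pool was never exported on host1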
Derek,
> I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
> months). I recently added 6 new drives to one of my servers and I would like
> to create a new RAIDZ2 pool called 'marketData'.
>
> I figured the command to do this would be something like:
>
> zpool create mar
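The full form of that command would look something like this (device names hypothetical):
  zpool create marketData raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool status marketData   # one raidz2 vdev of six disks; any two can fail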
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote:
> We have a weird issue with our ZFS pool and COMSTAR. The pool shows online
> with no errors, everything looks good but when we try to access zvols shared
> out with COMSTAR, windows reports that the devices have bad blocks.
> Everything has been
Budy,
> No - not a trick question, but maybe I didn't make myself clear.
> Is there a way to discover such bad files other than trying to actually read
> from them one by one, say using cp or by sending a snapshot elsewhere?
As noted by your original email, ZFS reports on any corruption using t
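A scrub plus a verbose status is the usual way to get that list (pool name hypothetical):
  zpool scrub tank
  zpool status -v tank      # '-v' prints the paths of files with permanent errors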
> 2010/5/4 Przemyslaw Ceglowski <prze...@ceglowski.net>
>> Jim,
>>
>> On May 4, 2010, at 3:45 PM, Jim Dunham wrote:
>>
>>>>
>>>> On May 4, 2010, at 2:43 PM, Richard Elling wrote:
>>>>
>>>>> On
Przem,
> On May 4, 2010, at 2:43 PM, Richard Elling wrote:
>
>> On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
>>
>>> It does not look like it is:
>>>
>>> r...@san01a:/export/home/admin# svcs -a | grep iscsi
>>> online May_01 svc:/network/iscsi/initiator:default
>>> online
Frank Middleton wrote:
On 10/13/09 18:35, Albert Chin wrote:
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem - the VM only
Mohammed Al Basti wrote:
Hello experts,
I have Cluster 3.2/ZFS and AVS4 at the main site and ZFS/AVS4
in DR. I am trying to replicate ZFS volumes using AVS, and I am getting
the error below:
"sndradm: Error: volume "/dev/rdsk/
c4t600A0B80005B1E5702934A27A8CCd0s0" is not part of a dis
Paul,
Is it possible to replicate an entire zpool with AVS?
Yes. http://blogs.sun.com/AVS/entry/is_it_possible_to_replicate
- Jim
From what I see, you can replicate a zvol, because AVS is filesystem
agnostic. I can create zvols within a pool, and AVS can replicate
those, but th
Ian,
Ian Collins wrote:
I have a volume in a pool that was created under Solaris 10 update
6 that I was sharing over iSCSI to some VMs. The pool is now
imported on an update 7 system.
For some reason, the volume won't share. shareiscsi is on, but
iscsiadm list target shows nothing.
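A few checks worth running on the update 7 host (volume name hypothetical; this assumes the pre-COMSTAR iscsitgt target, which is what shareiscsi drives):
  zfs get shareiscsi tank/iscsivol            # confirm the property survived the import
  svcs svc:/system/iscsitgt:default           # the target daemon must be online
  svcadm enable svc:/system/iscsitgt:default
  iscsitadm list target -v                    # targets as seen on the target host itself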
James,
The links to the Part 1 and Part 2 demos on this page (http://www.opensolaris.org/os/project/avs/Demos/
) appear to be broken.
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/
They still work for me. What
On Mar 4, 2009, at 7:04 AM, Jacob Ritorto wrote:
Caution: I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older. It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as to
Nicolas,
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this was to append to a ZFS
file
(no synchronous filesystem options being set) a series of blocks
with a
block order pattern contained within. At
Andrew,
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot of an application's filesystem data in
the
A recent increase in email about ZFS and SNDR (the replication
component of Availability Suite), has given me reasons to post one of
my replies.
Well, now I'm confused! A colleague just pointed me towards your blog
entry about SNDR and ZFS which, until now, I thought was not a
supported co
BJ Quinn wrote:
> Then what if I ever need to export the pool on the primary server
> and then import it on the replicated server. Will ZFS know which
> drives should be part of the stripe even though the device names
> across servers may not be the same?
Yes, "zpool import " will figur
ter solution.
> What's required
> to make it work? Consider a file server running ZFS that exports a
> volume with Iscsi. Consider also an application server that imports
> the LUN with
Stefan,
> one question related to this: would KAIO be supported on such a
> configuration?
Yes, but not as one might expect.
As seen from the truss output below, the call to kaio() fails with
EBADFD, a direct result of the fact that for ZFS its cb_ops interface
for asynchronous read and w
Hi Tim,
> I took a look at the archives and I have seen a few threads about
> using
> array block level snapshots with ZFS and how we face the "old issue"
> that we used to see with logical volumes and unique IDs (quite
> correctly) stopping the same volume being presented twice to the same
> se
Richard Elling wrote:
> Jim Dunham wrote:
>> Ahmed,
>>
>>> The setup is not there anymore, however, I will share as much
>>> details
>>> as I have documented. Could you please post the commands you have
>>> used
>>> and any differen
Ahmed,
> The setup is not there anymore, however, I will share as much details
> as I have documented. Could you please post the commands you have used
> and any differences you think might be important. Did you ever test
> with 2008.11 ? instead of sxce ?
Specific to the following:
>>> While we
Ahmed,
> Thanks for your informative reply. I am involved with kristof
> (original poster) in the setup, please allow me to reply below
>
>> Was the follow 'test' run during resynchronization mode or
>> replication
>> mode?
>>
>
> Neither, testing was done while in logging mode. This was chosen
Kristof,
> Jim Yes, in step 5 commands were executed on both nodes.
>
> We did some more tests with opensolaris 2008.11. (build 101b)
>
> We managed to get AVS setup up and running, but we noticed that
> performance was really bad.
>
> When we configured a zfs volume for replication, we noticed
Richard,
> Ross wrote:
>> The problem is they might publish these numbers, but we really have
>> no way of controlling what number manufacturers will choose to use
>> in the future.
>>
>> If for some reason future 500GB drives all turn out to be slightly
>> smaller than the current ones you'
Brad,
> I'd like to track a server's ZFS pool I/O throughput over time.
> What's a good data source to use for this? I like zpool iostat for
> this, but if I poll at two points in time I would get a number since
> boot (e.g. 1.2M) and a current number (e.g. 1.3K). If I use the
> current nu
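Interval mode avoids the since-boot number: everything after the first line reported is per-interval (pool name hypothetical):
  zpool iostat tank 60        # one sample every 60 seconds until interrupted
  zpool iostat -v tank 60 5   # five samples, broken out per vdev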
Roch Bourbonnais wrote:
On Jan 4, 2009, at 21:09, milosz wrote:
Thanks for your responses, guys...
The Nagle tweak is the first thing I did, actually.
Not sure what the network limiting factors could be here... there's
no switch, jumbo frames are on... maybe it's the e1000g driver?
it's bee
Andrew,
> I woke up yesterday morning, only to discover my system kept
> rebooting..
>
> It's been running fine for the last while. I upgraded to snv 98 a
> couple weeks back (from 95), and had upgraded my RaidZ Zpool from
> version 11 to 13 for improved scrub performance.
>
> After some res
George,
> I'm looking for any pointers or advice on what might have happened
> to cause the following problem...
Running Oracle RAC on iSCSI Target LUs accessible by three or more
iSCSI Initiator nodes requires support for SCSI-3 Persistent
Reservations. This functionality was added to OpenS
sed or controller based mirroring software.
any new
ZFS filesystem writes on the local vdev, both of which will be
replicated by AVS.
It is the mixture of both resilvering writes and new ZFS filesystem
writes that makes it impossible for AVS to make replication 'smarter'.
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote:
>>>>>> "jd" == Jim Dunham <[EMAIL PROTECTED]> writes:
>
>jd> If at the time the SNDR replica is deleted the set was
>jd> actively replicating, along with ZFS actively writing to the
>
deleted the set was actively
replicating, along with ZFS actively writing to the ZFS storage pool,
I/O consistency will be lost, leaving the ZFS storage pool in an
indeterminate state on the remote node. To address this issue,
prior to deleting the replicas,
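In practice that usually means dropping the sets into logging mode first, for example via a consistency group (group name hypothetical; see sndradm(1M) for the exact set/group syntax):
  sndradm -n -l -g zfsgroup    # stop forwarding writes; the sets go to logging mode
  sndradm -n -d -g zfsgroup    # now the replicas can be deleted without racing in-flight ZFS I/O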
>
> I would expect AVS ``sync'' mode to provide (1) and (2), so the
> question is only about ``async'' mode failovers.
>
> so...based on my reasoning, it's UNSAFE to use AVS in async mode for
> ZFS replication on any pool which needs more than 1 device to hav
> pr2# zpool import -f tank
> cannot import 'tank': one or more devices is currently unavailable
> pr2#
> ---
>
> Importing on the primary gives the same error.
>
> Anyone have any ideas?
>
> Thanks
>
>
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote:
> On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
>>
>> On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
>>
>>> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>>>> The i
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>> The issue with any form of RAID >1 is that the instant a disk fails
>> out of the RAID set, with the next write I/O to the remaining members
>> of the R
(from scratch?), and re-
> replicates itself??
>
> thanks in advance.
>
> -Matt
> Microsystems and that AVS is your main concern. But if there's one
> thing I cannot withstand, it's getting stroppy replies from someone
> who should know better and should have realized that he's acting
> publicly and in front of the people who finance his income
ate an ext2 partition and use a Linux rescue
> CD to back up the zfs partition with dd?
Steve,
> Can someone tell me or point me to links that describe how to
> do the following.
>
> I had a machine that crashed and I want to move to a newer machine
> anyway. The boot disk on the old machine is fried. The two disks I
> was
> using for a zfs pool on that machine need to be moved t
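The short version, once the two disks are cabled to the new machine (pool name hypothetical):
  zpool import              # scans the attached disks and lists the pool it finds on them
  zpool import -f tank      # -f because the crashed machine never exported the pool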
Mertol Ozyoney wrote:
Hi All ;
There is a set of issues being looked at that prevent the VMware ESX
server from working with the Solaris iSCSI Target.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6597310
At this time there is no target date for when these issues will be
Vahid,
> We need to move about 1T of data from one zpool on EMC dmx-3000 to
> another storage device (dmx-3). DMX-3 can be visible on the same
> host where the dmx-3000 is being used, or from another host.
> What is the best way to transfer the data from dmx-3000 to dmx-3?
> Is it possible to ad
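If the pool sits on a single LUN (or on mirrors), one low-impact approach is to mirror onto the new array and then drop the old side (a sketch; device names hypothetical):
  zpool attach datapool c2t0d0 c3t0d0    # c3t0d0 = LUN on the DMX-3
  zpool status datapool                  # wait for the resilver to complete
  zpool detach datapool c2t0d0           # c2t0d0 = old DMX-3000 LUN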
Enrico,
Is there any forecast to improve the efficiency of the replication
mechanisms of ZFS? Fishwork - new NAS release
I would take some time to talk with the customer and understand exactly what their
expectations are for replication. I would not base my
decision on the cost of replica
wonder why this functionality was not
exposed as part of zpool support?
- Jim
# zpool import foopool barpool
com/storagetek/white-papers/data_replication_strategies.pdf
http://www.sun.com/storagetek/white-papers/enterprise_continuity.pdf
One more thing. ZFS and iSCSI start and stop at different times during
Solaris boot and shutdown, so I would recommend using legacy mount
points, or manual zpool import / exports when trying configurations at
this level.
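A sketch of the legacy-mount approach (dataset name hypothetical):
  zfs set mountpoint=legacy tank/iscsidata
  # /etc/vfstab entry, mounted by hand once the iSCSI LUNs are present:
  # tank/iscsidata  -  /export/iscsidata  zfs  -  no  -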
Kory,
> Yes, I get it now. You want to detach one of the disks and then readd
> the same disk, but lose the redundancy of the mirror.
>
> Just as long as you realize you're losing the redundancy.
>
> I'm wondering if zpool add will complain. I don't have a system to
> try this at the moment.
The
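For reference, the sequence being discussed looks like this (hypothetical names):
  zpool status tank           # tank starts as a two-way mirror of c1t1d0 and c1t2d0
  zpool detach tank c1t2d0    # tank is now a single, non-redundant disk
  zpool add tank c1t2d0       # the freed disk comes back as a second top-level vdev (a stripe)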
tically take a
snapshot prior to re-synchronization, and automatically delete the
snapshot if completed successfully. The use of I/O consistency groups
assures that not only are the replicas write-order consistent during
replication, but also that snapshots taken prior to re-
synchroniza
n folks want to
> look at the details (case #65684887).
>
> I'm getting very desperate to get this fixed, as this massive
> amount of storage was the only reason I got this M80...
>
> Any pointers would be greatly appreciated.
>
> Thanks-
> John Tracy
oo bad the X4500 has too few PCI slots to consider buying iSCSI
> cards.
HBA manufacturers have in the past created multi-port and multi-
function HBAs. I would expect there to be something out there, or out
there soon, that will address the constraint of limited PCI slots.
> The two existing slots are already need
Ralf,
> Torrey McMahon wrote:
>> AVS?
>>
> Jim Dunham will probably shoot me, or worse, but I recommend thinking
> twice about using AVS for ZFS replication.
That is why they call this a discussion group, as it encourages
differing opinions,
> Basically, you only
ool.
Any help is much appreciated,
paul
you are interested in developing an
OpenSolaris project for either FS encryption or compression as a new set
of filter drivers, I will post relevant information tomorrow in
[EMAIL PROTECTED]
Jim Dunham
Rayson
Ben Rockwood wrote:
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part o
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The big
BR> question is: can I have a zpool op
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
befor
Ben,
I've been playing with replication of a ZFS Zpool using the recently released AVS. I'm pleased with things, but just replicating the data is only part of the problem. The big question is: can I have a zpool open in 2 places?
No. The ability to have a zpool open in two places would req
Jason,
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Of the opportunities I've been involved with, the answer is yes, but so
far I've not seen SNDR with ZFS in a production environment, though that
does not mean
volume manager or block device).
So all that needs to be done is to design and build a new variant of the
letter 'h', and find the place to separate ZFS into two pieces.
- Jim Dunham
That would be slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham <[EMA
rage, please look for
email discussion for this project at:
A complete set of Availability Suite administration guides can be found at:
http://docs.sun.com/app/docs?p=coll%2FAVS4.0
Project Lead:
Jim Dunham http://www.opensolaris.org/viewProfile.jspa?username=jdunham
Availability Suite - New So
Richard Elling wrote:
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the
other host.
Multiple hosts writing to the same pool won't work, but you could
indeed
have two pools, one for each host, in a dual active-
Jaime,
On SPARC, issue "format -e" (expert mode). When you then run "label",
it will offer the choice of either an EFI label or a VTOC (SMI) label.
Jim
Hi to all, I used the whole disk
with zpool on X86 and SPARC.
The zpool command changed the original VTOC to:
* /dev/rdsk/c0t8d0s0 pa
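For reference, the expert-mode label prompt looks roughly like this (exact wording may vary by release; device name hypothetical):
  format -e c0t8d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[0]: 1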