My question regarding the 7000 series storage is more from the perspective of
the host-side ZFS config. It is my understanding that the 7000 storage presents
an FC LUN to the host. Yes, this LUN is a ZFS LUN inside the 7000 storage, but
the host still sees it as only one LUN. If I configure a
Hi All,
If a zone root is on ZFS but that zone also contains SAN-attached UFS devices,
what is recorded in a ZFS snapshot of the zone?
Does the snapshot only contain the ZFS root info?
How would one recover the complete zone?
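For reference, a minimal sketch of snapshotting the ZFS side, assuming a
hypothetical zone dataset rpool/zones/myzone; a ZFS snapshot only covers ZFS
datasets, so the SAN-attached UFS data would not be captured and would need
its own backup (e.g. ufsdump):

  # recursive snapshot of the zone root dataset and its children
  zfs snapshot -r rpool/zones/myzone@backup
  # list what the snapshot set covers; the UFS devices will not appear here
  zfs list -r -t snapshot rpool/zones/myzone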
Thanks,
Shawn
Prior to this fix, ZFS would panic the system in order to avoid data corruption
and loss of the zpool.
Now the pool goes into a degraded or faulted state and one can "try" the zpool
clear command to correct the issue. If this does not succeed, a reboot is
required.
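As a hedged illustration of that recovery path (the pool name "tank" is
hypothetical):

  # show only pools that are not healthy
  zpool status -x
  # attempt to clear the errors and resume I/O on the affected pool
  zpool clear tank

If the pool stays in a faulted state after the clear, that is the case where a
reboot is still required.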
Just to put closure to this discussion about how CR 6565042 and 6322646 change
how ZFS behaves in the scenario below.
>ZFS no longer has the issue where loss of a single device (even
>intermittently) causes pool corruption. That's been fixed.
>
>That is, there used to be an issue in this
>In life there are many things that we "should do" (but often don't).
>There are always trade-offs. If you need your pool to be able to
>operate with a device missing, then the pool needs to have sufficient
>redundancy to keep working. If you want your pool to survive if a
>disk gets crushed by a w
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.
>
>I looked at this CR, forgive me but I am not a ZFS engineer. Can you explain,
>in simple terms, how ZFS now reacts to this? If it does not panic, how does
>it insur
>I went back and dug through some of my email, and the issue showed up as
>CR 6565042.
>
>That was fixed in b77 and s10 update 6.
I looked at this CR, forgive me but I am not a ZFS engineer. Can you explain,
in simple terms, how ZFS now reacts to this? If it does not panic, how does it
insure dat
or raid controller failures
on the hardware array?
Does ZFS handle intermittent outages of the raid controllers
the same way UFS would?
Thanks,
Shawn
Ian Collins wrote:
Shawn Joy wrote:
Hi All,
It's been a while since I touched ZFS. Is the below still the case
with ZFS and
>If you don't give ZFS any redundancy, you risk losing your pool if there is
>data corruption.
Is this the same risk of data corruption as UFS on hardware-based LUNs?
If we present one LUN to ZFS and choose not to ZFS mirror or do a raidz pool of
that LUN, is ZFS able to handle disk or raid co
Hi All,
It's been a while since I touched ZFS. Is the below still the case with ZFS and
a hardware raid array? Do we still need to provide two LUNs from the hardware
raid and then ZFS mirror those two LUNs?
http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid
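For illustration only, the two layouts the FAQ contrasts might look like this
(device names are hypothetical SAN LUNs, and the two zpool create lines are
alternatives, not a sequence):

  # one LUN: ZFS checksums detect corruption but there is no second
  # copy to repair data blocks from
  zpool create tank c2t0d0
  # two LUNs mirrored by ZFS: blocks that fail their checksum can be
  # self-healed from the other half of the mirror
  zpool create tank mirror c2t0d0 c2t1d0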
Thanks,
Shawn
Thanks Cindy and Darren
Is it supported to use zpool export and zpool import to manage disk access
between two nodes that have access to the same storage device?
What issues exist if the host currently owning the zpool goes down? In this
case, will using zpool import -f work? Are there possible data corruption issues?
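For reference, a sketch of the manual failover sequence in question (pool and
node names hypothetical); a real cluster framework would normally arbitrate
this:

  # on node A, release the pool cleanly before the other node takes it
  zpool export tank
  # on node B, take over the pool
  zpool import tank
  # if node A crashed and never exported, the import must be forced;
  # this is only safe if node A is really down, otherwise two hosts
  # writing to the same pool can corrupt it
  zpool import -f tank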
I have read the ZFS Best Practices Guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However, I have questions about whether we support using slices for data on the
same disk as we use for ZFS boot. What issues does this create if we
have a disk failure in a mirror?
If one chooses to do this, what happens if you have a disk failure?
From the ZFS Best Practices Guide:
The recovery process of replacing a failed disk is more complex when disks
contain both ZFS and UFS file systems on
slices.
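As a rough sketch of why that recovery is more involved (device, slice, and
pool names hypothetical), the replacement disk has to be re-sliced to match the
old layout before either file system can be brought back:

  # recreate the old slice layout on the replacement disk (interactive)
  format
  # have ZFS resilver its slice from the surviving mirror side
  zpool replace rpool c1t0d0s0
  # the UFS slice is not resilvered; it has to be rebuilt and restored
  # from backup (e.g. ufsrestore)
  newfs /dev/rdsk/c1t0d0s7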
Shawn
Hi All,
I see from the ZFS Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS Root Pool Considerations
* A root pool must be created with disk slices rather than whole disks.
Allocate the entire disk capacity for the root pool to slice 0, for
What are the commands? Everything I see is c1t0d0 or c1t1d0: no slice, just the
whole disk.
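A hedged example of the slice-based commands being asked for (disk names
hypothetical; slice 0 is assumed to span the disk, as the guide describes):

  # create the root pool on slice 0 rather than the whole disk
  zpool create rpool c1t0d0s0
  # attach the matching slice of a second disk to mirror the root pool
  zpool attach rpool c1t0d0s0 c1t1d0s0

Boot blocks would also need to be installed on the second disk (installboot on
SPARC, installgrub on x86) for it to be bootable.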
Robert Milkowski wrote:
> Hello Shawn,
>
> Thursday, December 13, 2007, 3:46:09 PM, you wrote:
>
> SJ> Is it possible to bring one slice of a disk under ZFS control and
> SJ> leave the other
Is it possible to bring one slice of a disk under ZFS control and
leave the others as UFS?
A customer is trying to mirror one slice using ZFS.
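In case a sketch helps, one hedged way to do this (pool, disk, and slice names
hypothetical); only the named slice goes under ZFS, the remaining slices stay
available for UFS:

  # put just slice 6 under ZFS control
  zpool create datapool c1t0d0s6
  # mirror that slice against the matching slice on a second disk
  zpool attach datapool c1t0d0s6 c1t1d0s6
  # other slices, e.g. c1t0d0s3, can still be newfs'ed and mounted as UFS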
Please respond to me directly and to the alias.
Thanks,
Shawn
d pull cables all the time and have yet
to see a ZFS kernel panic. Is this something you've considered? I
haven't seen the bug in question, but I definitely have not run into it
when running MPxIO.
--Tim
OK,
But let's get back to the original question.
Does ZFS provide you with fewer features than UFS does on one LUN from a SAN
(i.e., is it less stable)?
>ZFS on the contrary checks every block it reads and is able to find the
>mirror
>or reconstruct the data in a raidz config.
>Therefore ZFS uses o
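To illustrate the checksum and self-heal behaviour described in the quote, a
hedged example (pool name hypothetical):

  # read and verify every allocated block; in a mirror or raidz pool,
  # copies that fail their checksum are rewritten from good ones
  zpool scrub tank
  # the CKSUM column counts blocks that failed verification
  zpool status -v tank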
All,
I understand that ZFS gives you more error correction when using two LUNs from
a SAN. But does it provide you with fewer features than UFS does on one LUN
from a SAN (i.e., is it less stable)?
Thanks,
Shawn