Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
If you look into your /var/adm/messages file, you should see
more than a few seconds' worth of IO retries, indicating that
there was a delay before panicking while waiting for the device
to return.

My original post contains all the warnings. The first error happened
at 20:30:21 and the system panicked at 20:30:21. It makes me wonder if
there is something else going on here.

That's surprising. My experience of non-redundant pools (root pools
no less :>) is that there would be several minutes of retries, when
all the sd and lower layers' retries were added up.

Answering your second question: all ZFS pools should be configured
with redundancy from ZFS' point of view, i.e. redundancy that ZFS
itself manages, rather than relying solely on RAID inside the SAN.

I am sure this is the right answer, but it is not obvious to me how I
would do this the way I do with UFS file systems, where the SAN
provides the redundant backing. Thanks for the feedback.

create 2 LUNs on your SAN
zone them so your host can see them

 # zpool create poolname mirror vdev1 vdev2
 # zfs create poolname/fsname
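
Once the pool exists, a quick sanity check (using the same placeholder
pool name as above) is to confirm that both LUNs show up online under
the mirror vdev, and to scrub the pool now and again so ZFS reads every
block and verifies its checksum against the redundant copy:

 # zpool status poolname
 # zpool scrub poolname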


For my Ultra 20, I have / + /usr + /var and some of /opt mirrored using
SVM, and then I have an uber-pool to contain everything else:


$ zdb -C
sink
    version=3
    name='sink'
    state=0
    txg=4
    pool_guid=6548940762722570489
    vdev_tree
        type='root'
        id=0
        guid=6548940762722570489
        children[0]
                type='mirror'
                id=0
                guid=5106440632267737007
                metaslab_array=13
                metaslab_shift=31
                ashift=9
                asize=307077840896
                children[0]
                        type='disk'
                        id=0
                        guid=9432259574297221550
                        path='/dev/dsk/c1d0s3'
                        devid='id1,[EMAIL PROTECTED]/d'
                        whole_disk=0
                children[1]
                        type='disk'
                        id=1
                        guid=7176220706626775710
                        path='/dev/dsk/c2d0s3'
                        devid='id1,[EMAIL PROTECTED]/d'
                        whole_disk=0


which I created by first slicing the disks, then running

 # zpool create sink mirror c1d0s3 c2d0s3
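
(Just a sketch, with the same device names as above:) before creating
the pool you can double-check the slice layout from the VTOC, and
afterwards confirm the mirror is healthy:

 # prtvtoc /dev/rdsk/c1d0s2
 # zpool status sink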


Under the "sink" zpool, I have a few ZFS filesystems:


$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
sink                     119G   161G  24.5K  /sink
sink/hole                574M   161G   574M  /opt/csw
sink/home               2.22G   161G  2.22G  /export/home
sink/scratch            96.1G   161G  96.1G  /scratch
sink/src                6.66G   161G  6.66G  /opt/gate
sink/swim                555M   161G   555M  /opt/local
sink/zones              12.9G   161G  27.5K  /zones
sink/zones/kitchensink  10.6G   161G  10.6G  /zones/kitchensink


which I created with

# zfs create sink/hole
# zfs create sink/home

etc etc.
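
Since those mountpoints aren't the default /sink/<name> paths, each
filesystem would also have had its mountpoint property set, along the
lines of (a sketch, not the exact history):

 # zfs set mountpoint=/opt/csw sink/hole
 # zfs set mountpoint=/export/home sink/home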


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
              http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
