On 5/15/07, Matthew Flanagan <[EMAIL PROTECTED]> wrote:
On 5/15/07, eric kustarz <[EMAIL PROTECTED]> wrote:
>
> On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
>
> >>
> >> On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
> >>
> >>> Hi,
> >>>
> >>> I have a test server that I use for testing my
> >> different jumpstart
> >>> installations. This system is continuously
> >> installed and
> >>> reinstalled with different system builds.
> >>> For some builds I have a finish script that creates
> >> a zpool using
> >>> the utility found in the Solaris 10 update 3
> >> miniroot.
> >>>
> >>> I have found an issue where the zpool command fails
> >> to create a new
> >>> zpool if the system previously had a UFS filesystem
> >> on the same slice.
> >>>
> >>> The command and error is:
> >>>
> >>> zpool create -f -R /a -m /srv srv c1t0d0s6
> >>> cannot create 'srv': one or more vdevs refer to the
> >> same device
> >>>
> >>
> >> Works fine for me:
> >> # df -kh
> >> Filesystem             size   used  avail capacity  Mounted on
> >> /dev/dsk/c1t1d0s0       17G   4.1G    13G    24%    /
> >> ...
> >> /dev/dsk/c1t1d0s6       24G    24M    24G     1%    /zfs0
> >> # umount /zfs0
> >> # zpool create -f -R /a -m /srv srv c1t1d0s6
> >> # zpool status
> >>   pool: srv
> >>  state: ONLINE
> >>  scrub: none requested
> >> config:
> >>
> >>         NAME        STATE     READ WRITE CKSUM
> >>         srv         ONLINE       0     0     0
> >>           c1t1d0s6  ONLINE       0     0     0
> >>
> >> errors: No known data errors
> >>
> >>
> >> eric
> >>
> >>
> >
> > That works for me too. Perhaps you should actually follow my steps
> > to reproduce the issue?
>
> Perhaps if you asked more nicely then i would.  If you didn't unmount
> the UFS filesystem "srv" before the 'zpool create', then try that.
> If you did and it still fails, then ask the install/jumpstart people.
>
> eric
>
>

Eric,

The UFS filesystem is unmounted each time because the system is being
*reinstalled* each time from bare metal. The first install puts a UFS
file system on slice 6. On the second install, slice 6 is left unnamed
and the finish script then fails to create a zpool from the jumpstart
miniroot. I can reliably reproduce this in my lab on a number of
different SPARC hardware platforms (V120s and V210s, with both one and
two disks).


I've done some further testing today and the problem occurs regardless
of whether the first installation had UFS or ZFS on the slice I try to
create the new zpool on.
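
If anyone wants to check whether something is left behind on that slice
from the previous build, the slice can be inspected from the miniroot
before the finish script runs. This is only a diagnostic sketch, not
part of my install, and assumes zdb is available in the miniroot; the
device name is the one my finish script ends up using:

    # dump any old ZFS labels still sitting on the slice
    zdb -l /dev/dsk/c1t0d0s6

    # report what filesystem signature is on the slice (ufs, zfs, ...)
    fstyp /dev/rdsk/c1t0d0s6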

I have also discovered that if you run 'zpool create' a second time
after the first attempt fails (as I do in create-zfs.fin below), it
succeeds in creating the zpool.
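
If it is useful, that retry can also be written as a small sh helper
instead of the copy-pasted second attempt; the try_zpool_create name
and the retry count here are only illustrative, not part of JASS:

    # Retry 'zpool create' up to twice; in my testing the second
    # attempt succeeds when the first fails with the vdev error.
    # Uses the ALT_ROOT/mountpoint/zpool/vdev variables set in the
    # finish script below.
    try_zpool_create() {
        _tries=0
        while [ ${_tries} -lt 2 ]; do
            zpool create -f -R ${ALT_ROOT} -m ${mountpoint} \
                ${zpool} ${vdev} && return 0
            _tries=`expr ${_tries} + 1`
        done
        return 1
    }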

Below are the JASS files I use to recreate the problem. I'm using JASS
4.2 with the 122608-03 patch applied.

Is anyone else able to reproduce this issue using this setup?

==== rules ====
probe osname
probe memsize
probe hostaddress
probe hostname
probe disks
probe rootdisk
probe karch
hostname jstest1 - Profiles/test.profile Drivers/test.driver


==== Finish/create-zfs.fin ====
#!/bin/sh
#
# Create zpool
#

if check_os_min_revision 5.10; then
    # Strip any trailing slashes from the alternate root (e.g. /a)
    ALT_ROOT="`echo ${JASS_ROOT_DIR} | sed -e 's,/*$,,g'`"
    if [ "${SI_ROOTDISK}X" != "X" ]; then
        # Strip the slice suffix from SI_ROOTDISK (e.g. c1t0d0s0 -> c1t0d0)
        ROOTDISK="`echo ${SI_ROOTDISK} | sed -e 's/s.$//g'`"
        vdev="${ROOTDISK}s6"
        mountpoint="/srv"
        zpool="srv"
        logMessage "Creating zpool: ${zpool}"
        zpool create -f -R ${ALT_ROOT} -m ${mountpoint} ${zpool} ${vdev}
        if [ $? -ne 0 ]; then
            logError "Failed to create zpool: ${zpool}"
            # second time zpool is run it succeeds
            zpool create -f -R ${ALT_ROOT} -m ${mountpoint} ${zpool} ${vdev}
            if [ $? -ne 0 ]; then
                logError "Failed to create zpool again: ${zpool}"
            fi
        fi
    fi
else
    logInvalidOSRevision "5.10+"
fi

==== Profiles/test.profile ====
install_type initial_install
system_type standalone
cluster         SUNWCrnet
partitioning explicit
filesys rootdisk.s0 6144 / logging
filesys rootdisk.s1 1024 swap
filesys rootdisk.s3 4096 /var logging,nosuid
filesys rootdisk.s6 free unnamed
filesys rootdisk.s7 50 unnamed

==== Drivers/test.driver ====
#!/bin/sh
#
#
DIR="`/bin/dirname $0`"
export DIR

. ${DIR}/driver.init


# Finish Scripts
JASS_SCRIPTS="
   create-zfs.fin
"

. ${DIR}/driver.run


regards

matthew


ps. I had already opened a support case with Sun before posting to the
list and the engineer's response was to email me back the "correct"
command syntax and a copy of the zpool man page which he had obviously
not read himself because his "correct" syntax was blatantly wrong.
Please make an effort to read my whole email. If you need any
clarifications on how to reproduce the problem then I'll be glad to
help.

pps. resending this because I was not subscribed to the list.

--
matthew
http://wadofstuff.blogspot.com



--
matthew
http://wadofstuff.blogspot.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
