Great news. Thanks for letting us know. Cindy
On 12/15/09 06:48, Cesare wrote:
Hi all,
after upgrading PowerPath (from 5.2 to 5.2 SP2) and retrying the
commands to create the zpool, they executed successfully:
--
r...@solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-15.14:37:20 zpool add tank mirror emcpower8a emcpower6a
2009-12-15.14:37:56 zpool add tank mirror emcpower1a emcpower3a
2009-12-15.14:38:09 zpool add tank mirror emcpower2a emcpower4a
r...@solaris10# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower1a  ONLINE       0     0     0
            emcpower3a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower2a  ONLINE       0     0     0
            emcpower4a  ONLINE       0     0     0

errors: No known data errors
--
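As a quick sanity check after an upgrade like this, the status output can be scanned for any device that is not reporting ONLINE. A minimal sketch, run against a captured sample instead of a live `zpool status` call (the DEGRADED line is artificial, just to exercise the filter):

```shell
# Scan captured `zpool status` config lines for any device whose state
# column is not ONLINE. In real use, pipe `zpool status` in directly.
status_output='tank ONLINE 0 0 0
mirror ONLINE 0 0 0
emcpower7a ONLINE 0 0 0
emcpower5a DEGRADED 0 0 0'

echo "$status_output" | awk '$2 != "ONLINE" { print $1 " is " $2 }'
```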
Before, the PowerPath version was 5.2.0.GA.b146; now it is 5.2.SP2.b012:
--
r...@solaris10# pkginfo -l EMCpower
   PKGINST:  EMCpower
      NAME:  EMC PowerPath (Patched with 5.2.SP2.b012)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  5.2.0_b146
   BASEDIR:  /opt
    VENDOR:  EMC Corporation
    PSTAMP:  beavis951018123443
  INSTDATE:  Dec 15 2009 12:53
    STATUS:  completely installed
     FILES:  339 installed pathnames
                42 directories
               123 executables
            199365 blocks used (approx)
--
So SP2 incorporates the fix for using ZFS on PowerPath pseudo
emcpower devices.
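Note that in the pkginfo output above the VERSION field still reports the GA build (5.2.0_b146); the patched SP level only shows up in the NAME line. A small sketch for pulling it out in a script, run here against a captured fragment of that output:

```shell
# Extract the "Patched with ..." service-pack level from the NAME line
# of `pkginfo -l EMCpower` output, since VERSION keeps the GA build.
pkg_output='NAME: EMC PowerPath (Patched with 5.2.SP2.b012)
VERSION: 5.2.0_b146'

echo "$pkg_output" | sed -n 's/.*Patched with \([^)]*\)).*/\1/p'
```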
Cesare
On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:
Hi Cesare,
According to our CR 6524163, this problem was fixed in PowerPath 5.0.2,
but then it reoccurred.
According to the EMC PowerPath Release notes, here:
www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf
This problem is fixed in 5.2 SP1.
I would review the related ZFS information in this doc before proceeding.
Thanks,
Cindy
On 12/14/09 03:53, Cesare wrote:
On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston <mijoh...@gmail.com> wrote:
Thanks for the info Alexander... I will test this out. I'm just wondering
what it's going to see after I install PowerPath. Since each drive will
have 4 paths, plus the PowerPath pseudo device... after doing a "zpool
import", how will I force it to use a specific path? Thanks again! Good
to know that this can be done.
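One common way to steer `zpool import` toward the pseudo devices is its `-d` option, which restricts the directory it scans for devices. A hypothetical sketch: collect symlinks to only the emcpower* nodes in a private directory and point `-d` at it, so the pool is taken via the pseudo-device names rather than one of the four native paths. Fake device nodes stand in for /dev/dsk here so the listing part is runnable:

```shell
# Gather only the PowerPath pseudo devices into a scan directory.
srcdir=$(mktemp -d)   # stands in for /dev/dsk on the real host
devdir=$(mktemp -d)   # the directory zpool import will be told to scan
touch "$srcdir/emcpower7a" "$srcdir/emcpower5a" "$srcdir/c1t0d0s0"
for d in "$srcdir"/emcpower*; do
    ln -s "$d" "$devdir/"
done
ls "$devdir"          # only the emcpower* names are visible
# On the real host: zpool import -d "$devdir" tank
```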
I had a similar problem in recent weeks. My testbed server
(Solaris 10 Update 4) runs PowerPath 5.2 and is connected through two FC
switches to a Clariion CX3.
Each LUN on the Clariion creates 4 paths to the host. I created 8 LUNs,
reconfigured Solaris to make them visible to the host, and then tried
to create a ZFS pool. I encountered a problem when I ran the command:
--
r...@solaris10# zpool status
pool: tank
state: ONLINE
scrub: scrub completed with 0 errors on Mon Dec 14 05:00:01 2009
config:
        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0

errors: No known data errors
r...@solaris10# zpool history
History for 'tank':
2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-11.05:00:01 zpool scrub tank
2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a
2009-12-14.05:00:01 zpool scrub tank
r...@solaris10# zpool add tank mirror emcpower3a emcpower1a
internal error: Invalid argument
Abort (core dumped)
r...@solaris#
--
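(The crash itself turned out to be the PowerPath bug discussed below, not a configuration mistake, but as a side sketch: before retrying a `zpool add` it can be worth cross-checking which emcpower devices are already in the pool. This parses a captured fragment of the `zpool history` output above; in real use, pipe `zpool history` in directly.)

```shell
# List the emcpower devices already placed in the pool according to
# its history, so a later `zpool add` isn't pointed at one of them.
history_output='2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a'

echo "$history_output" | tr ' ' '\n' | grep '^emcpower' | sort
```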
The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP2) and
then retry the command, to see if the problem (the internal error)
disappears. Did anybody have a similar problem?
Cesare
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss