Hi Peter and all,

after a couple of reboots of PowerPath and reconfiguring the LUNs
exported by the Clariion, everything now seems to be working fine.

Here is the output of "zpool status":

---
machine# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c3t5006016941E0222Ed3  ONLINE       0     0     0
            c3t5006016141E0222Ed1  ONLINE       0     0     0
            c2t5006016841E0222Ed2  ONLINE       0     0     0
            c3t5006016141E0222Ed0  ONLINE       0     0     0

errors: No known data errors
machine#
---
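
As a sanity check along the lines Peter suggests below, an export/import
cycle should confirm the pool comes back cleanly through the PowerPath
device paths; something like:

---
machine# zpool export tank
machine# zpool import tank
machine# zpool status tank
---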

Meanwhile, I did some benchmarking on the raidz ZFS pool. Under normal
operation, with the application running in a zone that opens, reads and
moves thousands of files (average file size around 2 KB), I got these
results:

---
machine# zpool iostat 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        7.56G   111G      4    233  25.7K  1.37M
tank        7.56G   111G      0      0      0    254
tank        7.56G   111G      0    866      0  5.35M
tank        7.56G   111G      0      0      0      0
tank        7.56G   111G      0    389      0  1.23M
tank        7.56G   111G      0      0      0      0
tank        7.56G   111G      0     79      0   237K
tank        7.57G   111G      0    769      0  4.29M
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0    580      0  1.67M
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0    152      0   472K
tank        7.57G   111G      0    631      0  3.36M
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0     98      0   439K
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0    191      0   505K
tank        7.57G   111G      0    873      0  5.77M
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0    163      0   463K
tank        7.57G   111G      0      0      0      0
tank        7.57G   111G      0    252  64.2K   817K
tank        7.57G   111G      0    550      0  4.90M
^C
machine#
---

Can I achieve better performance than this?
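
For a small-file workload like this, one thing that might be worth trying
is turning off atime updates on the zone's dataset and watching the
per-device load; a rough sketch, where "tank/zonedata" is only a
placeholder for whatever dataset the zone actually uses:

---
# Placeholder dataset name; substitute the dataset backing the zone.
# Skipping access-time updates saves an extra write per file read.
machine# zfs set atime=off tank/zonedata
# Show per-device activity to check the load is spread across the LUNs.
machine# zpool iostat -v tank 2
---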

Thanks.

Cesare

On Sun, Jun 22, 2008 at 5:48 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:
> On Sun, Jun 22, 2008 at 2:06 PM, Cesare <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> I'm facing a problem when I configure and create a zpool on my test
>> bed. The hardware is a T-5120 running Solaris 10 with the latest
>> patches and a Clariion CX3 attached via 2 HBAs. In this configuration,
>> every LUN exported by the Clariion is seen 4 times by the operating system.
>>
>> If I configure the last disk using one particular controller, the
>> "zpool create" doesn't work, telling me that a device is currently
>> unavailable. If I use a different controller (but the same LUN from
>> the Clariion) I don't encounter the problem and the raidz pool is
>> created. I would like to use that controller to balance the I/O
>> between the HBAs and the storage processors.
>
> My experience is that zfs + powerpath + clariion doesn't work.
>
> (Try a 'zpool export' followed by 'zpool import' - do you get your pool back?)
>
> For this I've had to get rid of powerpath and use mpxio instead.
>
> The problem seems to be that the clariion arrays are active/passive and
> zfs trips up if it tries to use one of the passive links. Using mpxio hides
> this and works fine. And powerpath on the (active/active) DMX-4 seems
> to be OK too.
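
(For what it's worth: if switching from PowerPath to MPxIO as Peter
suggests, my understanding is that Solaris 10 multipathing is enabled
with stmsboot; a rough sketch, to be checked against the docs for this
particular HBA/array combination:)

---
# Enable Solaris I/O multipathing (MPxIO); a reboot is required so the
# multipathed device paths take effect.
machine# stmsboot -e
# After the reboot, list the multipathed logical units.
machine# mpathadm list lu
---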
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
