Here's the zpool layout. You don't really have a choice on the boot volume -- the system only supports booting from two drives, both on the same controller chain. The remaining drives are laid out as shown:

  pool: internal
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat Oct  8 21:15:42 2011
config:

        NAME        STATE     READ WRITE CKSUM
        internal    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c9t0d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c9t1d0  ONLINE       0     0     0
          raidz1-2  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
          raidz1-3  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
          raidz1-4  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
          raidz1-5  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0
          raidz1-6  ONLINE       0     0     0
            c3t6d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c8t6d0  ONLINE       0     0     0
            c9t6d0  ONLINE       0     0     0
          raidz1-7  ONLINE       0     0     0
            c3t7d0  ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c8t7d0  ONLINE       0     0     0
            c9t7d0  ONLINE       0     0     0
        logs
          c7t2d0    ONLINE       0     0     0
          c7t6d0    ONLINE       0     0     0
        spares
          c7t1d0    AVAIL
          c7t5d0    AVAIL
          c7t3d0    AVAIL

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h6m with 0 errors on Sat Oct  8 21:21:54 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c7t0d0s0  ONLINE       0     0     0
            c7t4d0s0  ONLINE       0     0     0

errors: No known data errors
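
For reference, a data pool laid out like 'internal' above can be built in one go -- roughly like this (a from-memory sketch using the device names from the status output, not the exact command I originally ran):

    # Eight 5-disk raidz1 vdevs, two striped (unmirrored) log devices,
    # and three hot spares, matching the 'internal' pool above.
    zpool create internal \
        raidz1 c3t0d0 c4t0d0 c6t0d0 c8t0d0 c9t0d0 \
        raidz1 c3t1d0 c4t1d0 c6t1d0 c8t1d0 c9t1d0 \
        raidz1 c3t2d0 c4t2d0 c6t2d0 c8t2d0 c9t2d0 \
        raidz1 c3t3d0 c4t3d0 c6t3d0 c8t3d0 c9t3d0 \
        raidz1 c3t4d0 c4t4d0 c6t4d0 c8t4d0 c9t4d0 \
        raidz1 c3t5d0 c4t5d0 c6t5d0 c8t5d0 c9t5d0 \
        raidz1 c3t6d0 c4t6d0 c6t6d0 c8t6d0 c9t6d0 \
        raidz1 c3t7d0 c4t7d0 c6t7d0 c8t7d0 c9t7d0 \
        log c7t2d0 c7t6d0 \
        spare c7t1d0 c7t5d0 c7t3d0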

It appears I misspoke about the caches. I thought I'd used two drives for cache, but apparently not.
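
If I do add cache (L2ARC) devices later, it's just a 'zpool add'. A sketch, assuming c7t7d0 is the unused bay on the system-disk bus (I'd have to check which slot is actually free):

    # Hypothetical: c7t7d0 assumed to be a free drive on the system-disk bus.
    zpool add internal cache c7t7d0

    # One of the hot spares could also be freed up and reused as cache:
    #   zpool remove internal c7t3d0
    #   zpool add internal cache c7t3d0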

And Solaris 11 is supposed to be out Real Soon Now.  :-)


On 10/14/11 02:54 PM, Jim Klimov wrote:
> 2011-10-14 23:57, Gregory Shaw wrote:
>> You might want to keep in mind that the X4500 was a ~2006 box, and had only PCI-X slots.
>>
>> Or, at least, that's what the 3 I've got have. I think the X4540 had PCIe, but I never got one of those. :-(
>>
>> I haven't seen any cache accelerator PCI-X cards.
>>
>> However, what I've done on the X4500 systems in the lab is to use two drives on the system disk bus for the cache and log devices (each).
>
> So you have 44 data drives, 2 OS drives and 2 ZIL/cache devices?
> And what do you use for ZIL/cache? SSDs? Specific ones?
>
>> With the 175 release of Solaris 11, I have literally seen a scrub running at 960 MB/sec, and around 400 MB/sec for 10GbE NFS.
>
> Hmm, and where can you get that release in the open? ;)
