I had a perfectly working 7-drive raidz pool using some onboard SATA connectors and some PCI SATA controller cards. My pool was using 500GB drives. I had the stupid idea to replace my 500GB drives with 2TB (Hitachi) drives. This process resulted in me losing much of my data (see my other post). Now that I am picking up the pieces, I think I have tracked the problem down to some incompatibility between the drives and the onboard SATA. I can create pools on the controller card SATA ports, but not on the onboard SATA (see below). I can swap the two drives around, and I can always create a pool on the external SATA (c11t0d0) but never on the internal. However, with a 500GB drive it works fine on either one.
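One detail that may matter: the internal drive's inquiry string ends in "16777216" (see the format output below). If that figure is a sector count (an assumption on my part; the label format isn't documented here), the onboard controller would only be seeing a tiny fraction of the disk. A quick back-of-the-envelope check:

```python
# Sanity check: assuming the trailing "16777216" in the c6d1 label is a
# count of 512-byte sectors (an assumption -- not confirmed anywhere),
# compute the capacity the onboard controller would be reporting.
SECTOR_BYTES = 512           # traditional 512-byte sectors
sectors = 16777216           # figure taken from the c6d1 inquiry string
capacity_bytes = sectors * SECTOR_BYTES
print(capacity_bytes / 1024**3, "GiB")   # prints 8.0 GiB -- nowhere near 2TB
```

Note that 16777216 is exactly 2^24, which would be consistent with the onboard controller or its driver truncating the LBA range, while the PCI card reports the full 1.82TB for the identical drive.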
Does anyone know how to resolve this? Is there a BIOS update or some kind of patch? Please help. My motherboard is an MSI N1996. I have two, so I tried the other one with the same result; it's not a hardware failure. The other thing I notice is that the drives look different in format, even though they are identical drives:

# format
AVAILABLE DISK SELECTIONS:
       0. c3d0 <DEFAULT cyl 4859 alt 2 hd 255 sec 63>
          /p...@0,0/pci8086,3...@1c/pci-...@0/i...@1/c...@0,0
       1. c6d1 <Hitachi- JK1131YAGGULD-0001-16777216.>
          /p...@0,0/pci-...@1f,2/i...@1/c...@1,0
       2. c11t0d0 <ATA-Hitachi HDS72202-A20N-1.82TB>
          /p...@0,0/pci8086,3...@1c,2/pci1095,7...@0/d...@0,0
Specify disk (enter its number):
Specify disk (enter its number):
Specify disk (enter its number): zpool create test^C
#
# zpool destroy test3
# zpool create test3 c11t0d0
# zpool create test4 c6d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6d1s0 is part of exported or potentially active ZFS pool test2. Please see zpool(1M).
# zpool create -f test4 c6d1
cannot create 'test4': invalid argument for this pool operation

--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss