I solved the mystery - an astounding 7 out of the 10 brand new disks I was
using were bad. I was using 4 at a time, and it wasn't until a good one got in
the mix that I realized what was wrong. FYI, these were Western Digital
WD15EADS and Samsung HD154UI. Each brand was mostly bad, with one or two good ones each.
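For anyone who lands here with similar symptoms, a quick way to spot silently
failing drives on OpenSolaris (standard tools only; "tank" below is just a
placeholder pool name) is to check the per-device error counters and the FMA
error log, and to scrub any pool that did get created:

  iostat -En            # soft/hard/transport error counts, plus model and serial
  fmdump -eV | less     # raw error telemetry logged by the fault manager
  zpool scrub tank      # then watch the per-device error columns
  zpool status -v tank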
isainfo -k returns amd64, so I don't think that is the answer.
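For reference, two standard ways to confirm which kernel is actually running:

  isainfo -kv     # e.g. "64-bit amd64 kernel modules"
  isainfo -b      # prints 64 on a 64-bit kernel, 32 otherwise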
On Fri, Apr 16, 2010 at 11:46:01AM -0700, Willard Korfhage wrote:
> The drives are recent - 1.5TB drives
I'm going to bet this is a 32-bit system, and you're getting screwed
by the 1TB limit that applies there. If so, you will find clues
hidden in dmesg from boot time about this, as the drives are detected.
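One way to check what capacity the kernel actually detected (device names here
are just the ones that appear in the cfgadm output elsewhere in this thread):

  dmesg | grep -i sata    # boot-time attach messages for the SATA ports
  format </dev/null       # format's disk list shows the size the OS sees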
> There should be no need to create partitions.
> Something simple like this should work:
> zpool create junkfooblah c13t0d0
>
> And if it doesn't work, try "zpool status" just to verify for certain that the
> device is not already part of any pool.
It is not part of any pool, and I get the same error as before.
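Some non-destructive checks that the device really is free, in case it helps
(the s0 slice path is a guess based on the device name above):

  zpool status                   # pools currently imported
  zpool import                   # exported pools whose labels are still on disk
  zdb -l /dev/rdsk/c13t0d0s0     # dump any stale ZFS labels left on the disk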
No Areca controller on this machine. It is a different box, and the drives are
just plugged into the SATA ports on the motherboard.
I'm running build snv_133, too.
The drives are recent - 1.5TB drives, 3 Western Digital and 1 Seagate, if I
recall correctly. They ought to support SATA-2.
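To double-check the model and capacity from the OS side rather than the spec
sheet, something like this should work (the ap_id is taken from the cfgadm
listing elsewhere in the thread):

  cfgadm -alv sata0/0     # verbose listing, includes the attached drive's model
  iostat -En              # vendor, product, firmware revision and size per device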
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Willard Korfhage
>
> devfsadm -Cv gave a lot of "removing file" messages, apparently for
> items that were not relevant.
That's good. If there were no necessary changes, devfsadm would have said
nothing.
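In other words, a clean second pass would look roughly like this (device names
as used elsewhere in the thread):

  devfsadm -Cv               # no output on a second run means /dev is consistent
  ls /dev/dsk | grep c13t    # the device links zpool create will resolve
  format </dev/null          # should list all four disks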
Your adapter read-outs look quite different from mine. I am on ICH9, snv_133;
maybe that's why. But while I'm at it, I should ask:
- which build are you running?
- do the drives currently support the SATA-2 standard (by model, or limited by
  jumper settings)?
- could it be that the Areca controller has done something to them?
devfsadm -Cv gave a lot of "removing file" messages, apparently for items that
were not relevant.
cfgadm -al says, about the disks:
sata0/0::dsk/c13t0d0   disk   connected   configured   ok
sata0/1::dsk/c13t1d0   disk   connected   configured   ok
sata0/2::dsk/c13t2d0   disk   connected   configured   ok
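For completeness: if one of the ports had shown up as unconfigured instead, it
can usually be brought online with cfgadm itself (the ap_id below is just an
example; substitute the port that is misbehaving):

  cfgadm -c configure sata0/2     # attach the device on that port
  cfgadm -al                      # confirm it now reads connected configured ok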
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tonmaus
>
> are the drives properly configured in cfgadm?
I agree. You need to do these:
devfsadm -Cv
cfgadm -al
Hi,
are the drives properly configured in cfgadm?
Cheers,
Tonmaus
I'm trying to set up a raidz pool on 4 disks attached to an Asus P5BV-M
motherboard with an Intel ICH7R. The BIOS lets me pick IDE, RAID, or AHCI for
the disks. I'm not interested in the motherboard's RAID, and from reading
previous posts it sounded like there were performance advantages to picking AHCI.
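The end goal, roughly, is a command along these lines (pool name is arbitrary,
and the fourth device name is assumed by analogy with the cfgadm output quoted
earlier in the thread):

  zpool create tank raidz c13t0d0 c13t1d0 c13t2d0 c13t3d0
  zpool status tank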