This afternoon, messages like the following started appearing in
/var/adm/messages:
May 18 13:46:37 fs8 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,2...@1/pci15d9,a...@0 (mpt0):
May 18 13:46:37 fs8 Log info 0x3108 received for target 5.
May 18 13:46:37 fs8 scsi_status=0x0, ioc_stat
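A quick, hedged way to see which disk is target 5 and whether it is racking up
errors (assuming the mpt disks enumerate as c*t5d0):
# iostat -En | grep -A 2 t5d0   # per-disk soft/hard/transport error counts, vendor, serial
# fmdump -e | tail              # recent FMA error telemetry, one line per event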
I solved the mystery - an astounding 7 out of the 10 brand new disks I was
using were bad. I was using 4 at a time, and it wasn't until a good one got in
the mix that I realized what was wrong. FYI, these were Western Digital
WD15EADS and Samsung HD154UI. Each brand was mostly bad, with one or two good ones.
isainfo -k returns amd64, so I don't think that is the answer.
> There should be no need to create partitions.
> Something simple like this should work:
> zpool create junkfooblah c13t0d0
>
> And if it doesn't work, try "zpool status" just to verify for certain that the
> device is not already part of any pool.
It is not part of any pool. I get the same "ca
No Areca controller on this machine. It is a different box, and the drives are
just plugged into the SATA ports on the motherboard.
I'm running build snv_133, too.
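In case it helps anyone searching later, the minimal checks I'd run here
(hedged; c13t0d0 is just the device from the example above):
# format < /dev/null            # list every disk the OS can see, then exit
# zpool status                  # confirm none of them already belong to a pool
# zpool create junkfooblah c13t0d0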
The drives are recent - 1.5TB drives, 3 Western Digital and 1 Seagate, if I
recall correctly. They ought to support SATA-2. They ar
devfsadm -Cv gave a lot of "removing file" messages, apparently for items that
were not relevant.
cfgadm -al says, about the disks,
sata0/0::dsk/c13t0d0           disk         connected    configured   ok
sata0/1::dsk/c13t1d0           disk         connected    configured   ok
sata0/2::dsk/c13t2
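If any of the ports had shown up as unconfigured instead, the usual fix
(hedged; the port name is only an example) would be something like:
# cfgadm -c configure sata0/2   # bring the port's occupant online
# devfsadm -Cv                  # then rebuild the /dev links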
I'm trying to set up a raidz pool on 4 disks attached to an Asus P5BV-M
motherboard with an Intel ICH7R. The BIOS lets me pick IDE, RAID, or AHCI for
the disks. I'm not interested in the motherboard's RAID, and from reading previous
posts, it sounded like there were performance advantages to picking AHCI.
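Once the controller is in AHCI mode, the pool itself should just be one
command; a sketch, reusing the device names from earlier in the thread:
# zpool create tank raidz c13t0d0 c13t1d0 c13t2d0 c13t3d0
# zpool status tank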
I've got a Supermicro AOC-USAS-L8I on the way because I gather from these
forums that it works well. I'll just wait for that, then try 8 disks on that and
4 on the motherboard SATA ports.
As I mentioned earlier, I removed the hardware-based RAID6 array, changed all
the disks to passthrough disks, and made a raidz2 pool using all the disks. I used
my backup program to copy 55GB of data to the pool, and now I have errors all
over the place.
# zpool status -v
  pool: bigraid
 state: DEGRADED
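For reference, the usual follow-up after errors like this (pool name as above;
only run the clear and scrub once the bad hardware is dealt with):
# zpool status -v bigraid   # lists the files with permanent errors
# zpool clear bigraid       # reset the error counters
# zpool scrub bigraid       # re-verify every block
# zpool status -x           # should report all pools healthy when it's done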
These are all good reasons to switch back to letting ZFS handle it. I did put
about 600GB of data on the pool as configured with RAID 6 on the card, verified
the data, and scrubbed it a couple of times in the process, and there were no problems,
so it appears that the firmware upgrade fixed my problems.
I upgraded to the latest firmware. When I rebooted the machine, the pool was
back, with no errors. I was surprised.
I will work with it more, and see if it stays good. I've done a scrub, so now
I'll put more data on it and stress it some more.
If the firmware upgrade fixed everything, then I've
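Roughly, the cycle I have in mind (commands hedged, pool name as above):
# zpool scrub bigraid           # after each batch of data is written
# zpool status bigraid          # watch scrub progress; the errors line should stay clean
# iostat -En | grep Errors      # per-disk error counters should not creep up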
I was wondering if the controller itself has problems. My card's firmware is
version 1.42, and the firmware on the website is up to 1.48.
I see the firmware released last September says
Fix Opensolaris+ZFS to add device to mirror set in JBOD or passthrough mode
and
Fix SATA raid controller
Just a message 7 hours earlier warning that an IRQ shared by drivers with
different interrupt levels might result in reduced performance.
It is a Corsair 650W modular power supply, with 2 or 3 disks per cable.
However, the Areca card is not reporting any errors, so I think power to the
disks is unlikely to be a problem.
Here's what is in /var/adm/messages
Apr 11 22:37:41 fs9 fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-GH,
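The SUNW-MSG-ID points at a ZFS-diagnosed fault; the follow-up commands
(hedged) that usually show what FMA concluded are:
# fmadm faulty      # which pool/vdev FMA has marked as faulted, with the message ID
# fmdump -v         # history of fault diagnoses
# zpool status -x   # ZFS's own view of the affected pool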
I'm struggling to get a reliable OpenSolaris system on a file server. I'm
running an Asus P5BV-C/4L server motherboard, 4GB ECC ram, an E3110 processor,
and an Areca 1230 with 12 1-TB disks attached. In a previous posting, it looked
like RAM or the power supply might be a problem, so I ended up upg
Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't show
any serial numbers for the disks attached to the Areca RAID card.
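For what it's worth, iostat sometimes has them even when the pool-level tools
don't; a hedged attempt, using the c4 device names that show up elsewhere in
the thread (this may well come back empty for disks behind a RAID controller):
# iostat -En | egrep 'c4t|Serial'   # vendor, product and serial number per device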
Memtest didn't show any errors, but between Frank, early in the thread, saying
that he had found memory errors that memtest didn't catch, and the removal of DIMMs
apparently fixing the problem, I jumped too soon to the conclusion that it was the
memory. Certainly there are other explanations.
I see that
It certainly has symptoms that match a marginal power supply, but I measured
the power consumption some time ago and found it comfortably within the power
supply's capacity. I've also wondered whether the RAM is actually fine and there is
just some kind of flaky interaction of the RAM configuration I had with
Looks like it was RAM. I ran memtest86+ 4.00, and it found no problems. I removed
2 of the 3 sticks of RAM, ran a backup, and had no errors. I'm running more
extensive tests, but it looks like that was it. A new motherboard, CPU and ECC
RAM are on the way to me now.
Yeah, this morning I concluded I really should be running ECC RAM. I sometimes
wonder why people don't run ECC RAM more frequently. I remember a decade
ago, when ram was much, much less dense, people fretted about alpha particles
randomly flipping bits, but that seems to have died down.
I would like to get some help diagnosing permanent errors on my files. The
machine in question has 12 1TB disks connected to an Areca raid card. I
installed OpenSolaris build 134 and according to zpool history, created a pool
with
zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c
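For completeness, the two commands I'd expect to show the current layout and
the list of affected files (hedged):
# zpool history bigraid     # the exact create command and any later changes
# zpool status -v bigraid   # the permanent errors, with file paths where known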