11:04pm, Paul Archer wrote:
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing with the old card.
Paul
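(For comparing cards on raw sequential writes, here is the kind of check used elsewhere in this thread, sketched against the raw /dev/rdsk counterpart of the /dev/dsk/c7d0 path that appears later; writing to it destroys whatever is on that disk, so it only makes sense on a drive that is out of the pool:

time dd if=/dev/zero of=/dev/rdsk/c7d0 bs=1024k count=1024

Going through the raw device keeps the page cache from flattering the number; 1024MB divided by the elapsed seconds gives an approximate MB/sec figure to put next to the 60-70MB/sec above.)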
Tomorrow, Rob
Paul Archer wrote:
In light of all the trouble I've been having with this zpool, I bought a
2TB drive, and I'm going to move all my data over to it, then destroy the
pool and start over.
Before I do that, what is the best way on an x86 system to format/label
the disks?
Thanks,
Paul
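(For what it's worth, and only as a sketch rather than anything settled in this thread: if the whole disks are handed straight to zpool create, ZFS writes the EFI label on each one itself, so no separate format/label step should be needed. With placeholder pool and device names:

zpool create tank raidz c7d0 c8d0 c9d0 c10d0 c11d0

If you want to clear the old labels first, format -e on each disk lets you write a fresh label by hand before creating the pool.)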
Paul,
Thanks for additional data, please see comments inline.
Paul Archer wrote:
7:56pm, Victor Latushkin wrote:
While 'zdb -l /dev/dsk/c7d0s0' shows normal labels. So the new question is:
how do I tell ZFS to use c7d0s0 instead of c7d0? I can't do a 'zpool
replace' because the zpool isn't online.
ZFS actually uses c7d0s0 and not c7d0 - it shortens output to c7d0 in case
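(Victor's sentence is cut off above, but the visible part is the key point: when ZFS is given a whole disk it puts the pool, and its four labels, on slice 0 of an EFI label, which is why the labels show up where zdb already found them:

zdb -l /dev/dsk/c7d0s0

zpool status simply prints the device as c7d0 without the s0 suffix, so the slice is already the one in use even though the suffix never appears in the output.)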
On 28.09.09 18:09, Paul Archer wrote:
8:30am, Paul Archer wrote:
And the hits just keep coming...
The resilver finished last night, so I rebooted the box as I had just upgraded
to the latest Dev build. Not only did the upgrade fail (love that instant
rollback!), but now the zpool won't come online:
r...@shebop:~# zpool import
poo
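(The listing above is cut off, so only as a sketch of something sometimes worth trying when an import goes sideways after a reboot: point zpool import at the device directory explicitly, then import whichever pool it reports by name:

zpool import -d /dev/dsk
zpool import -d /dev/dsk <poolname>

The pool name is whatever the first command prints; nothing here is specific to this particular pool.)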
Yesterday, Paul Archer wrote:
I estimate another 10-15 hours before this disk is finished resilvering and
the zpool is OK again. At that time, I'm going to switch some hardware out
(I've got a newer and higher-end LSI card that I hadn't used before because
it's PCI-X, and won't fit on my cur
1:19pm, Richard Elling wrote:
The other thing that's weird is the writes. I am seeing writes in that
3.5MB/sec range during the resilver, *and* I was seeing the same thing
during the dd.
This is from the resilver, but again, the dd was similar. c7d0 is the
device in question:
r/s    w/s
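(For reference, per-device numbers like these come from something along the lines of:

iostat -xn 5

where r/s and w/s are operations per second and kr/s / kw/s are kilobytes per second for each device, so 3.5MB/sec shows up as roughly 3500 under kw/s. The 5-second interval is just an example.)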
On Sep 27, 2009, at 8:49 AM, Paul Archer wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/
dsk/c7d0'), and iostat showed me that the d
On Sep 27, 2009, at 1:44 PM, Paul Archer wrote:
My controller, while normally a full RAID controller, has had its BIOS
turned off, so it's acting as a simple SATA controller. Plus, I'm seeing
this same slow performance with dd, not just with ZFS. And I wouldn't think
that write caching would make a difference with using dd (especially
writ
On Sep 27, 2009, at 11:49 AM, Paul Archer wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some point, I
was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'), and
iostat showed me that the drive was only writing at around 3.5MB/sec. *And*
it s
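(One variable worth ruling out, offered as a guess rather than anything established in this thread: dd defaults to 512-byte blocks. On the buffered /dev/dsk path the page cache hides much of that, but a raw-device run with a big block size is the cleaner test and separates a genuinely slow disk or controller from simply tiny writes:

dd if=/dev/zero of=/dev/rdsk/c7d0 bs=1024k

As with the original command, this destroys the contents of that disk. If the rate jumps, the transfer size was the bottleneck rather than the hardware.)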
On Sep 27, 2009, at 3:19 AM, Paul Archer wrote:
So, after *much* wrangling, I managed to take one of my drives offline,
relabel/repartition it (because I saw that the first sector was 34, not
256, and realized there could be an alignment issue), and get it back into
the pool.
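(For anyone checking the same alignment question, a non-destructive way to see where each slice starts, with the device name only as an example:

prtvtoc /dev/rdsk/c7d0s0

The First Sector column shows the starting sector of every slice; an EFI label that ZFS wrote itself normally starts slice 0 at sector 256, which is consistent with the 34-versus-256 observation above.)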
Problem is that while it's back, the performance is horrible. It's
> This controller card, you have turned off any raid functionality, yes? ZFS
> has total control of all discs, by itself? No hw raid intervening?
yes, it's an LSI 150-6, with the BIOS turned off, which turns it into a
dumb SATA card.
Paul
This controller card, you have turned off any raid functionality, yes? ZFS has
total control of all discs, by itself? No hw raid intervening?
Oh, for the record, the drives are 1.5TB SATA, in a 4+1 raidz-1 config.
All the drives are on the same LSI 150-6 PCI controller card, and the M/B
is a generic something or other with a triple-core, and 2GB RAM.
Paul
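(Side note on that layout, just bus arithmetic rather than a diagnosis: if the 150-6 sits in an ordinary 32-bit/33MHz PCI slot, the whole bus tops out around 4 bytes x 33 million transfers/sec, roughly 133MB/sec theoretical, shared by all five drives behind the one card, before any protocol overhead.)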
3:34pm, Paul Archer wrote:
Since I got my zfs pool working under solaris (I talked on this list
last week about moving it from linux & bsd to solaris, and the pain that
was), I'm seeing very good reads, but nada for writes.
Reads:
r...@shebop:/data/dvds# rsync -aP young_frankenstein.iso /tmp
sending incremental file lis
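(The write-side counterpart, sketched with a made-up destination filename since the rest of the output above is cut off, is the same copy in the other direction, back into the pool:

rsync -aP /tmp/young_frankenstein.iso /data/dvds/write-test.iso

The -P progress line gives a rough MB/sec figure for writes to set against the read numbers.)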