On Wed, Jan 20, 2016 at 07:28:38AM +0000, James Harper wrote:
> As long as I remember to replace the To: with luv-main each time I
> reply, I guess it's workable.
that happens even on just plain Replies, too - not just Reply-All?
that's weird because the list munges the From: address, so a reply
should go to the list.
> > 233 Remaining_Lifetime_Perc  0x0000   067   067   000    Old_age   Offline      -       67
>
> 233 is reported as Media Wearout Indicator on the drives I just
> checked on a BSD box, so I guess it's the same thing but with a
> different description for whatever reason.
i dunno if that name comes from the drive itself or from the smartctl
software. that could be the difference.
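fwiw, i believe the attribute names come from smartctl's drive database rather
than from the drive itself - the drive only reports numeric attribute IDs and
raw values - so different drivedb versions can label 233 differently. rough
sketch, assuming smartmontools is installed (the device name is just an example):

  # refresh smartmontools' drive database, then re-read the attributes;
  # the label shown for attribute 233 may change, the raw value won't
  update-smart-drivedb
  smartctl -A /dev/sdh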
> > I assume that means I've used up about 1/3rd of its expected life. Not
> > bad, considering i've been running it for 500 days total so far:
> >
> > 9 Power_On_Hours          0x0000   100   100   000    Old_age   Offline      -       12005
> >
> > 12005 hours is 500 days. or 1.3 years.
>
> I just checked the server that burned out the disks pretty quick last
> time (RAID1 zfs cache, so both went around the same time), and it
i suppose read performance is doubled, but there's not really any point
in RAIDing L2ARC. it's transient data that gets wiped on boot anyway.
better to have two l2arc cache partitions and two ZIL partitions.
and not raiding the l2arc should spread the write load over the 2 SSDs
and probably increase longevity.
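something like this, as a sketch (using my partition names from below - adjust
for your own devices):

  # add both SSD partitions as separate, non-mirrored cache devices;
  # zfs treats them as independent L2ARCs and spreads the load across both
  zpool add export cache sdh6 sdj6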
my pair of OCZ drives has an mdadm RAID-1 (xfs) for the OS + /home, another
1GB RAID-1 (ext4) for /boot, and plain partitions for L2ARC and ZIL. zfs
mirrors the ZIL if you give it two or more log devices (essential for safety -
you don't want to lose the ZIL if one drive dies!), and it uses two or more
cache devices as independent L2ARCs (so double the capacity).
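the ZIL side looks roughly like this, as a sketch (making the mirroring
explicit with the 'mirror' keyword doesn't hurt):

  # add the two ZIL partitions as a mirrored log vdev, so losing one SSD
  # can't take an unflushed ZIL with it
  zpool add export log mirror sdh7 sdj7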
$ zpool status export -v
  pool: export
 state: ONLINE
  scan: scrub repaired 0 in 4h50m with 0 errors on Sat Jan 16 06:03:30 2016
config:

        NAME          STATE     READ WRITE CKSUM
        export        ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sda       ONLINE       0     0     0
            sde       ONLINE       0     0     0
            sdf       ONLINE       0     0     0
            sdg       ONLINE       0     0     0
        logs
          sdh7        ONLINE       0     0     0
          sdj7        ONLINE       0     0     0
        cache
          sdh6        ONLINE       0     0     0
          sdj6        ONLINE       0     0     0

errors: No known data errors
this pool is 4 x 1TB drives. i'll probably replace them later this year with
one or two mirrored pairs of 4TB drives. i've gone off RAID-5 and RAID-Z -
even with ZIL and L2ARC, performance isn't great, nowhere near what RAID-10
(or two mirrored pairs, in zfs-speak) gives. like my backup pool:
$ zpool status backup -v
  pool: backup
 state: ONLINE
  scan: scrub repaired 0 in 4h2m with 0 errors on Sat Jan 16 05:15:20 2016
config:

        NAME          STATE     READ WRITE CKSUM
        backup        ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sdb       ONLINE       0     0     0
            sdi       ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            sdd       ONLINE       0     0     0
            sdc       ONLINE       0     0     0

errors: No known data errors
this pool has the 4 x 4TB Seagate SSHDs i mentioned recently. it stores
backups for all machines on my home network.
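a pool laid out like that is just two mirror vdevs, roughly this (device
names taken from the status output above):

  # two mirrored pairs striped together - zfs's equivalent of RAID-10
  zpool create backup mirror sdb sdi mirror sdd sdc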
> > and that's for an OCZ Vertex, one of the last decent drives OCZ made
> > before they started producing crap and went bust (and subsequently
> > got
sorry, my mistake. i meant OCZ Vector.
sdh OCZ-VECTOR_OCZ-0974C023I4P2G1B8
sdj OCZ-VECTOR_OCZ-8RL5XW08536INH7R
> I've seen too many OCZ's fail within months of purchase recently, but
> not enough data points to draw conclusions from. Maybe a bad batch or
> something? They were all purchased within a month or so of each other,
> late last year. The failure mode was that the system just can't see
> the disk, except very occasionally, and then not for long enough to
> actually boot from.
i've read that the Toshiba-produced OCZs are pretty good now, so
possibly a bad batch. or sounds like you abuse the poor things with too
many writes.
even so, my next SSD will probably be a Samsung.
> Yep. I just got a 500GB 850 EVO for my laptop and it doesn't have
> any of the wearout indicators that I can see, but I doubt I'll get
> anywhere near close to wearing it out before it becomes obsolete.
that's not good. i wish disk vendors would stop crippling their SMART
implementations and treat it seriously.
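for what it's worth, a quick way to see whatever wear counters a drive does
expose - only a sketch, since the attribute names (Media_Wearout_Indicator,
Wear_Leveling_Count, Remaining_Lifetime_Perc, ...) vary by vendor and
/dev/sdh is just an example:

  # dump all SMART attributes and pick out anything wear/lifetime related
  smartctl -A /dev/sdh | grep -Ei 'wear|lifetime|lbas'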
craig
--
craig sanders <[email protected]>
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main