On 19/05/11 10:50, mark wrote:
> Note 1:
> I have seen an array that was powered on continuously for about six
> years, which killed half the disks when it was finally powered down,
> left to cool for a few hours, then started up again.
>
Recently we rebooted about 6 machines that had uptimes of 950+ days.
Last time fsck had
BTW, I saw a news article today about a brand of SSD that was claiming
to have the price-effectiveness of MLC-type chips, but with a lifetime of
4TB/day of writes over 5 years.
http://www.storagereview.com/anobit_unveils_genesis_mlc_enterprise_ssds
which also links to:
http://www.storagereview.com/sandfor
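
As a rough sanity check on that endurance claim, here is a minimal
back-of-the-envelope sketch in Python. The 4TB/day over 5 years figure is
the one from the claim above; the 400 GB drive size is a hypothetical
assumption for illustration only.

# Total writes implied by the "4TB/day over 5 years" endurance claim.
TB_PER_DAY = 4
DAYS = 5 * 365

total_tb = TB_PER_DAY * DAYS    # 7300 TB, i.e. about 7.3 PB written
print(f"~{total_tb} TB (~{total_tb / 1000:.1f} PB) written over 5 years")

# For a hypothetical 400 GB drive that is on the order of 18,000 full
# drive writes, i.e. roughly 10 drive writes per day for the whole period.
DRIVE_TB = 0.4
print(f"~{total_tb / DRIVE_TB:.0f} full drive writes "
      f"(~{total_tb / DRIVE_TB / DAYS:.0f} per day)")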
On 05/05/11 18:36, Florian Weimer wrote:
* Greg Smith:
> Intel claims their Annual Failure Rate (AFR) on their SSDs in IT
> deployments (not OEM ones) is 0.6%. Typical measured AFR rates for
> mechanical drives are around 2% during their first year, spiking to 5%
> afterwards. I suspect that Intel's numbers are actually much better
> th
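
To put those AFR percentages in concrete terms, here is a minimal Python
sketch. The 200-drive fleet size is an assumed number for illustration;
the failure rates are the ones quoted above.

# Expected annual failures implied by the quoted AFR figures,
# for a hypothetical fleet of 200 drives.
FLEET_SIZE = 200

afr = {
    "SSD (Intel claim)": 0.006,   # 0.6% per year
    "HDD, first year":   0.02,    # ~2% per year
    "HDD, later years":  0.05,    # ~5% per year
}

for label, rate in afr.items():
    print(f"{label}: ~{FLEET_SIZE * rate:.1f} expected failures/year")
# -> SSD ~1.2, HDD (first year) ~4.0, HDD (later years) ~10.0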
On 05/04/2011 08:31 PM, David Boreham wrote:
Here's my best theory at present: the failures ARE caused by cell
wear-out, but the SSD firmware is buggy insofar as it fails to boot
up and respond to host commands due to the wear-out state. So rather
than the expected outcome (SSD responds but
On 5/5/2011 2:36 AM, Florian Weimer wrote:
I'm a bit concerned about usage-dependent failures. Presumably, two SSDs
in a RAID-1 configuration are worn down in the same way, and it would
be rather inconvenient if they failed at the same point. With hard
disks, this doesn't seem to happen; even
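
That worry can be made concrete with a small Python sketch. The 300 TB
rated endurance, the 0.5 TB/day write load, and the one-week window are
all assumed numbers for illustration; the 5% AFR comes from the figures
quoted earlier in the thread.

# Under a pure wear-out model, both members of a RAID-1 pair see the same
# writes and reach their endurance limit at essentially the same time.
# All figures below are hypothetical.
RATED_ENDURANCE_TB = 300      # assumed rated total bytes written per drive
WRITE_TB_PER_DAY = 0.5        # assumed sustained write load on the array

# RAID-1 mirrors every write, so each member sees the full load.
days = RATED_ENDURANCE_TB / WRITE_TB_PER_DAY
print(f"Both mirrors hit rated endurance after ~{days:.0f} days "
      f"(~{days / 365:.1f} years), i.e. at essentially the same time.")

# With independent random failures (the usual assumption for mechanical
# drives), the chance of both members failing in the same week is tiny.
AFR = 0.05                    # 5% annual failure rate
p_week = AFR * 7 / 365        # crude per-week failure probability
print(f"P(both fail in the same week, independent model): ~{p_week ** 2:.1e}")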
On Wed, May 4, 2011 at 9:34 PM, David Boreham wrote:
> On 5/4/2011 9:06 PM, Scott Marlowe wrote:
>>
>> Most of it is. But certain parts are fairly new, i.e. the
>> controllers. It is quite possible that all these various failing
>> drives share some long-term (~1 year) degradation issue like the 6Gb/s
>> SAS ports on the early Sandy Bridge Intel CPUs. If that's th
On Wed, May 4, 2011 at 6:31 PM, David Boreham wrote:
>
> this). The technology and manufacturing processes are common across many
> different types of product. They either all work, or they all fail.
Most of it is. But certain parts are fairly new, i.e. the
controllers. It is quite possible th
On 5/4/2011 6:02 PM, Greg Smith wrote:
> On 05/04/2011 03:24 PM, David Boreham wrote:
>> So if someone says that SSDs have "failed", I'll assume that they
>> suffered from Flash cell wear-out unless there is compelling proof to
>> the contrary.
>
> I've been involved in four recovery situations similar to the one
> described in that Coding Horror