Kevin Chadwick wrote:

> I almost completely agree, but also disagree, and yes, I'd say it's not
> worth getting into again. I would have to check the latest developments,
> as I can imagine an algorithm which solved the problem during idle
> periods or didn't use its full capacity, but currently I don't fully
> agree with "huge amounts of data". The problem was reduced immensely by
> spreading writes across all free sectors rather than sequentially, but I
> believe the problem re-appears on a busy, nearly full disk. I would also
> hope/imagine the only effect would be bad sectors in that area, but I
> haven't looked into it very far as I currently have no need to, so maybe
> I should shut up until I do. However, I for one will not be treating
> SSDs like HDDs in all applications of disks until after I learn more.

One thing you might consider... buy an SSD and do some testing. Attach it
to an OpenBSD box, put a file system on it, then write a script similar
to this one to repeatedly fill and empty the file system:

while :
do
        # fill the file system with random data until dd runs out of space
        dd if=/dev/arandom of=big_un.bin bs=64k
        sync
        sleep 1
        # -P overwrites the file before unlinking, adding still more writes
        rm -P big_un.bin
done

Let that run for a few years and see how long the disk actually lasts.
You could put up a website with live results. You'd become famous too...
especially if you hit the decade mark and the thing still works :)
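
If you wanted numbers for that website, a minimal variation (just a
sketch; the counter and log file name are made up) could record each
fill/erase cycle as it goes:

        cycle=0
        while :
        do
                dd if=/dev/arandom of=big_un.bin bs=64k
                sync
                sleep 1
                rm -P big_un.bin
                cycle=$((cycle + 1))
                # one line per cycle; ideally keep this log on a different disk
                echo "$(date) cycle $cycle" >> wear_test.log
        done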

Also, I just noticed that the high-end Intel SSDs claim 2,000,000 hours
MTBF. I wonder why they market that number and then say "3 year
warranty". There are only about 26,280 hours in a three-year period.

Brad
