On Fri, 14 Apr 2017 09:37:09 +0200, Marc Joliet <mar...@gmx.de> wrote:
> (Sorry for the late reply, I hope it's still useful to you.)

NP. The links below were interesting.

> On Tuesday, 4 April 2017 00:46:54 CEST Kai Krakow wrote:
> > On Mon, 3 Apr 2017 16:15:24 -0400, Rich Freeman <ri...@gentoo.org>
> > wrote:
> > > On Mon, Apr 3, 2017 at 2:34 PM, Kai Krakow <hurikha...@gmail.com>
> > > wrote:
> [...]
> > > If it contains data you'd prefer not be recoverable, you might
> > > want to use shred or ATA secure erase.
> >
> > I wonder if shredding adds any value with the high density of
> > modern drives... Each bit is down to a "few" (*) atoms. It should
> > be pretty difficult, if not impossible, to infer the previous data
> > from it. I think most of the ability to infer the previous data
> > comes from magnetic leakage from the written bit into the
> > neighboring bits. That is why clever mathematicians created series
> > of alternating bit patterns to distribute this leakage evenly,
> > which is what the different algorithms in the shredder programs
> > implement.
> >
> > Do you have any insights on that matter? Just curious.
>
> For the record, there was some discussion on this not too long ago
> [edit: oops, looks like it was almost two years ago now]: see the
> thread "Securely Securely deletion of an HDD" (yes, including my
> spelling mistake), which you can find online at
> https://archives.gentoo.org/gentoo-user/message/a01e0ad7b07855647a528f1e0324631a
> and
> https://archives.gentoo.org/gentoo-user/message/582fe3c66c7e13de979b656e9db33325.

So you suggest shooting a bullet at the disks? ;-) You could also use
the hammer method: https://youtu.be/oNcaIQMjbM8?t=2m55s

> > > Shred overwrites the drive with random data using a few passes
> > > to make recovery more difficult. Some debate whether it
> > > actually adds value.
> >
> > For a mere mortal it is already impossible to recover data after
> > writing zeros to it.
> > Shredding is very time-consuming and probably not worth the
> > effort if you just want a blank drive and have no critical or
> > security-relevant data on it, i.e. you used it for testing.
> >
> > But while you are at it: shredding tools should usually do a read
> > check to verify that the data that ought to have been written
> > actually was written, otherwise the whole procedure is pretty
> > pointless. As a side effect, this exposes sector defects.
> >
> > If you want to do this to pretend data has never been written to
> > the drive, you're probably out of luck anyway: if you were able to
> > recover data after a single pass of zeros, it would be just as
> > easy to see that the data was shredded with different bit
> > patterns. The S.M.A.R.T. counters will do the rest and tell you
> > the power-on hours, maybe even the amount of data written, head
> > moves, etc.
> >
> > (*): On an atomic scale, that's still 1 million atoms...
>
> I don't think using zeros is enough, certainly not on SSDs that do
> their own compression, I would think.

Well, I don't think compression (and the overhead needed to make it
effective) is worth the effort to implement in drive firmware, so I
doubt drives do this, especially since the bus speed is becoming the
bottleneck: to be effective, data would have to be compressed before
transferring over the bus and uncompressed after. Deduplication in
firmware is also very unlikely. So I wouldn't take that as an
argument for using random data.

But I think the real point here is sector remapping (as pointed out
in the referenced threads): SSDs do it constantly through the FTL,
and HDDs do it upon encountering physical problems on the platter. It
makes absolutely no difference whether you write random data or zeros
to the disk: you won't reach the previously mapped sector locations.
Secure erase is probably the only thing you can do here, hoping that
it covers all sectors (including spare and unmapped sectors).
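To illustrate the read-back check mentioned above, here's a minimal
sketch. It uses a scratch file standing in for the drive; pointing
TARGET at a real device node (e.g. /dev/sdX, a placeholder name) would
destroy its contents, so don't do that casually:

```shell
#!/bin/sh
# Minimal sketch of the overwrite-and-verify idea: write zeros, then
# read everything back and compare against a zero stream of the same
# length. A scratch file stands in for the block device here.
set -e

SIZE=$((4 * 1024 * 1024))              # 4 MiB test area (arbitrary)
TARGET=$(mktemp)

# Simulate "old" data, then overwrite it with zeros in place.
dd if=/dev/urandom of="$TARGET" bs=1M count=4 2>/dev/null
dd if=/dev/zero    of="$TARGET" bs=1M count=4 conv=notrunc 2>/dev/null

# The read check: cmp exits non-zero at the first differing byte.
if head -c "$SIZE" /dev/zero | cmp -s - "$TARGET"; then
    RESULT="verified: all zeros"
else
    RESULT="verification FAILED"
fi
echo "$RESULT"
rm -f "$TARGET"
```

For the ATA secure erase itself, hdparm's --security-set-pass and
--security-erase options are the usual route; not shown here, since
it's destructive and quite drive-dependent.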
> And AFAIK using random data can still fill the drive at native
> write speed, so I don't see what you gain by avoiding that. But
> really, if you haven't already, check the primary sources in the
> thread I mentioned above.

It depends on your random source: /dev/random won't generate entropy
fast enough for this. /dev/urandom could, but it's not all that
random because its output is generated mathematically, which somewhat
defeats the purpose of using it as an overwrite source. A mixture of
both may be good enough; that's probably where specialized wiping
software comes in.

Conclusion: If you don't store state secrets, overwriting with zeros
should be enough. If you store data in a high-security environment,
you're probably required to physically destroy the disks anyway. If
you mind remapped sectors, you could use secure erase, but you don't
know how thorough it really is, so the only other option is to
physically destroy the disk.

Then there's the option of using full disk encryption right from the
beginning. Still, no encryption has proven unbreakable to date, so
physically destroying the disk remains the only certain option. Of
course, you can feel better having used semi-random data for the
overwrite. If it doesn't slow things down, it's probably the best
option, followed by a secure erase.

But the original question was how to bring the drive back to its
original, vendor-delivered state: it contained zeros, not random
data. Even so, it's not possible to bring it back completely to that
state, as the SMART counters have already changed.

For an SSD, trimming/discarding the whole device would be enough, and
fastest, if you don't mind that your previous data may still
physically be there for a while (though inaccessible through ATA
commands).

-- 
Regards,
Kai

Replies to list-only preferred.
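P.S.: If you do go the semi-random route, /dev/urandom needs no
special tooling. A sketch, again against a scratch file standing in
for the drive (sizes are arbitrary), which overwrites the target and
confirms that non-zero data actually landed:

```shell
#!/bin/sh
# Sketch: overwrite a target from /dev/urandom and confirm the result
# is not all zeros. A scratch file stands in for the drive; on a real
# device you would write to its node instead, destroying all data.
# For whole-device SSD discards, blkdiscard(8) is the fast path.
set -e

SIZE=$((4 * 1024 * 1024))              # 4 MiB test area (arbitrary)
TARGET=$(mktemp)

dd if=/dev/urandom of="$TARGET" bs=1M count=4 2>/dev/null

# If urandom worked, the file must differ from a pure zero stream.
if head -c "$SIZE" /dev/zero | cmp -s - "$TARGET"; then
    RESULT="unexpected: all zeros"
else
    RESULT="non-zero (random) data written"
fi
echo "$RESULT"
rm -f "$TARGET"
```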