On 22/05/2020 16:43, Rich Freeman wrote:
> On Fri, May 22, 2020 at 11:32 AM Michael <confabul...@kintzios.com> wrote:
>> An interesting article mentioning WD Red NAS drives which may actually be SMRs
>> and how latency increases when cached writes need to be transferred into SMR
>> blocks.
>
> Yeah, there is a lot of background on this stuff.
>
> You should view a drive-managed SMR drive as basically a journaled
> filesystem/database masquerading as a virtual drive. One where the
> keys/filenames are LBAs, and all the files are 512 bytes long. :)
>
> Really even most spinning drives are this way due to the 4k physical
> sectors, but this is something much easier to deal with and handled by
> the OS with aligned writes as much as possible. SSDs have similar
> issues but again the impact isn't nearly as bad and is more easily
> managed by the OS with TRIM/etc.
>
> A host-managed SMR drive operates much more like a physical drive, but
> in this case the OS/application needs to be SMR-aware for performance
> not to be absolutely terrible.
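That journaled-filesystem view can be sketched in a few lines of Python. Everything here is illustrative (the class and method names, the in-memory journal are my own inventions, not any real firmware); but the shape is the point: writes append to a CMR cache zone, a map tracks the latest copy of each LBA, and destaging into the shingled zones is where the latency spikes come from.

```python
# Toy sketch of a drive-managed SMR translation layer: a key/value
# journal where keys are LBAs and every value is one 512-byte sector.
# Names and structure are illustrative, not any real firmware design.

SECTOR = 512

class DriveManagedSMR:
    def __init__(self):
        self.journal = []   # CMR cache zone: append-only (lba, data) records
        self.mapping = {}   # lba -> index of its latest record in the journal

    def write(self, lba, data):
        assert len(data) == SECTOR
        # Writes always land in the cache zone first; fast until it fills.
        self.mapping[lba] = len(self.journal)
        self.journal.append((lba, data))

    def read(self, lba):
        idx = self.mapping.get(lba)
        return self.journal[idx][1] if idx is not None else b"\x00" * SECTOR

    def destage(self):
        # Background "re-shingle": fold the latest copy of each LBA out
        # into the shingled zones and reset the cache.  On a real drive
        # this is the step that stalls writes once the cache is full.
        latest = {lba: self.journal[i][1] for lba, i in self.mapping.items()}
        self.journal.clear()
        self.mapping.clear()
        return latest
```

Note that a rewrite of the same LBA just appends again, exactly like a journaled filesystem, which is why random-write workloads chew through the cache zone so quickly.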
What puzzles me (or rather, it doesn't; it's just cost cutting) is why
you need a *dedicated* cache zone anyway.
Stick a left-shift register between the LBA track and the hard drive,
and by switching this on you write to tracks 2,4,6,8,10... and it's a
CMR zone. Switch the register off and it's an SMR zone writing to all
tracks.
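The shift-register idea really is one line of logic. A toy model (the function is mine, with 1-based track numbers to match the 2,4,6,8,10 example):

```python
# Toy model of the left-shift idea: with the shift enabled, logical
# track n maps to physical track 2n, so consecutive tracks never
# overlap their neighbour and the zone behaves as CMR.  With it
# disabled, tracks pack densely and overlap: a normal SMR zone.

def physical_track(logical_track: int, cmr_mode: bool) -> int:
    return logical_track << 1 if cmr_mode else logical_track

# CMR mode: logical tracks 1..5 land on physical tracks 2,4,6,8,10,
# at half the density, but each one can be rewritten in place.
```

The trade-off the shift makes explicit: a CMR zone built this way costs exactly half the capacity of the same tracks used shingled.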
The other thing is: why can't you just stream writes to an SMR zone,
especially if we try to localise writes so that, say, all LBAs in Gig 1
go to the same zone? Okay, if we run out of zones to re-shingle to,
then the drive is going to grind to a halt, but it will be much less
likely to crash into that barrier in the first place.
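The locality idea is just a static mapping; a sketch (the 1 GiB span per zone and 512-byte sectors are assumptions, chosen only to match the "Gig 1" example):

```python
# Static locality mapping: every LBA in the same 1 GiB span of the
# address space streams to the same SMR zone, so rewrites cluster
# and re-shingling touches fewer zones.  Sizes are illustrative.

SECTOR = 512
GIB = 1024 ** 3
SECTORS_PER_GIB = GIB // SECTOR   # 2_097_152 sectors per 1 GiB span

def zone_for_lba(lba: int) -> int:
    return lba // SECTORS_PER_GIB
```

With a mapping like this, a workload that rewrites one region of the disk keeps dirtying the same zone rather than scattering partial writes across many of them.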
Even better, if we had two independent heads, we could presumably
stream updates with one head and re-shingle with the other. But that's
more cost ...
Cheers,
Wol