> From: Richard Elling [mailto:richard.ell...@gmail.com]
> 
> > Also, the concept of "faster tracks of the HDD" is also incorrect.  Yes,
> > there was a time when HDD speeds were limited by rotational speed and
> > magnetic density, so the outer tracks of the disk could serve up more data
> > because more magnetic material passed over the head in each rotation.  But
> > nowadays, the hard drive sequential speed is limited by the head speed,
> > which is invariably right around 1Gbps.  So the inner and outer sectors of
> > the HDD are equally fast - the outer sectors are actually less magnetically
> > dense because the head can't handle it.  And the random IO speed is limited
> > by head seek + rotational latency, where seek is typically several times
> > longer than latency.
> 
> Disagree. My data, and the vendor specs, continue to show different
> sequential media bandwidth speeds for inner vs outer cylinders.

Any reference?  I know, when I sit and dd from some disk | pv > /dev/null,
pv reports something like 1.0Gbps.  Checking periodically while it runs, the
rate varies a little (say, sometimes 1.0, 1.1, 1.2) and goes up and down
throughout the process, but there is no noticeable difference between the
early, mid, and late behavior while sequentially reading the whole disk.
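A more direct way to check than watching pv's instantaneous number is to time
reads at a few fixed offsets.  A minimal sketch (the device path below is a
placeholder, and the function works on any seekable file):

```python
import os
import time

def read_rate_mb_s(path, offset, length, block=1 << 20):
    """Time a raw sequential read of `length` bytes starting at `offset`,
    returning the observed rate in MB/s."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.perf_counter()
        remaining = length
        while remaining > 0:
            chunk = os.read(fd, min(block, remaining))
            if not chunk:  # hit end of file/device
                break
            remaining -= len(chunk)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return (length - remaining) / elapsed / 1e6

# e.g. sample the start, middle, and near-end of a disk (path is hypothetical;
# reads go through the page cache unless you use a raw device):
# size = os.lseek(os.open("/dev/sdX", os.O_RDONLY), 0, os.SEEK_END)
# for frac in (0.0, 0.5, 0.9):
#     print(frac, read_rate_mb_s("/dev/sdX", int(size * frac), 256 << 20))
```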

If the performance of the outer tracks is better than the performance of the
inner tracks due to limitations of magnetic density or rotation speed (not
being limited by the head speed or bus speed), then the sequential
performance of the drive should increase linearly with radius, going toward
the outer tracks, because the data passing under the head per revolution is
proportional to the track circumference: c = 2 * pi * r
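A quick model of that claim: at constant RPM and constant linear bit density,
bytes per revolution scale with circumference, so relative throughput is just
the ratio of radii.  The radii below are illustrative, not from any datasheet:

```python
import math

def throughput_ratio(inner_radius_mm, outer_radius_mm):
    """Relative sequential throughput, outer track vs inner track, assuming
    constant RPM and constant linear bit density: bytes per revolution
    scale with circumference 2*pi*r, so the ratio is linear in radius."""
    return (2 * math.pi * outer_radius_mm) / (2 * math.pi * inner_radius_mm)

# Hypothetical 3.5" platter: innermost track at 15mm, outermost at 45mm.
print(throughput_ratio(15, 45))  # outer track ~3x the inner track
```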

It is my belief, based on specs I've previously looked at, that manufacturers
break the drive down into zones.  So, something like the inner 20% of the
tracks will have magnetic layout pattern A, the next 20% will have magnetic
layout pattern B, and so forth.  Within a single magnetic layout pattern,
jumping from individual track to individual track can yield some difference
in performance, but it's not a huge step from one to the next.  And when you
transition from one layout pattern to the next, the pattern just repeats
itself again.  They're optimizing so that, to first order, the performance
limitations are mostly caused by head and/or bus speed.  If those are the
bottlenecks, let them be the bottlenecks, and at least solve all the other
problems that are solvable.
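That zoning idea can be sketched as a step function: throughput is flat within
a zone and steps slightly moving inward.  The zone boundaries and rates below
are invented for illustration (chosen to show a small total spread, per the
argument above):

```python
# Hypothetical zoned layout: (start_fraction, MB/s) pairs, ordered from the
# outermost zone (position 0.0) inward.  All numbers are made up.
ZONES = [(0.0, 150), (0.25, 146), (0.5, 142), (0.75, 138)]

def zone_throughput(position):
    """Sequential throughput at a fractional position across the disk,
    0.0 = outermost track, 1.0 = innermost.  Flat within each zone;
    small steps between zones."""
    rate = ZONES[0][1]
    for start, mb_s in ZONES:
        if position >= start:
            rate = mb_s
    return rate
```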

So, small variations in sequential performance are possible, jumping from
track to track, but based on what I've seen, the maximum performance
difference from the absolute slowest track to the absolute fastest track
(which may or may not have any relation to inner vs outer) is on the order
of a 10% difference.  Nothing like scaling with radius.


> OTOH, you're not trying to get high performance from an HDD are you?  That
> game is over.

Lots of us still have to live with HDDs, due to capacity and cost
requirements.  We accept a relative definition of "high performance," and
still want to get all the performance we can out of whatever device we're
using, even if there exists a faster device somewhere in the world.

Also, for sequential performance, HDDs are on par with, and often better
than, SSDs.  (For now.)  Many SSDs publish specs like "220 MB/s," which is
higher than HDDs can reach, but that figure is maximum performance, not
typical performance.  After you use them for a month, they slow down, often
to half or worse of the speed they originally were able to run.  Which is,
as I say, on par with or worse than the sequential speed of an HDD.

Even crappy SSDs can have random IO worse than HDDs.  Just benchmark any
high-cost, top-tier USB3 flash memory stick, and you'll see what I mean.  ;-)
The only SSDs that are faster than HDDs in any way are *actual* internal
SAS/SATA/etc. SSDs, which are faster than HDDs in terms of random IOPS and
maybe sequential.
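A crude way to see that random-IO gap for yourself: compare large sequential
reads against small random reads on the same device.  A sketch (the path is a
placeholder; run it against a raw device or a file much larger than RAM, since
no cache-dropping is attempted here):

```python
import os
import random
import time

def bench_iops(path, io_size, count, sequential):
    """Time `count` reads of `io_size` bytes, either sequential or at random
    offsets, and return the observed IOPS."""
    size = os.path.getsize(path)
    span = max(size - io_size, 1)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for i in range(count):
            offset = (i * io_size) % span if sequential else random.randrange(span)
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, io_size)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return count / elapsed

# usage (path is hypothetical):
# print("seq  1MiB IOPS:", bench_iops("/mnt/stick/big.bin", 1 << 20, 100, True))
# print("rand 4KiB IOPS:", bench_iops("/mnt/stick/big.bin", 4096, 1000, False))
```

On a cheap flash stick the random-4KiB number typically collapses relative to
the sequential one, which is the effect described above.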

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
