Clearly a lot of people don't understand latency, so I'll talk about latency,
breaking it down into simpler components.

Sometimes it helps to use made-up numbers to simplify a point.

Imagine a hypothetical system with these 'ridiculous' performance
characteristics:

The system has a 60-second (1-minute) read latency.
The system can scale dramatically: it can do 60 billion IOs per minute.

Now, some here are arguing about the term latency, but it's rather a simple term.
It simply means the amount of time it takes for data to move from one point to
another.

And some here have argued there is no good measurement of latency, but that too
is very simple: it is measured in time units.

OK, so we have a latency of 1 minute in this 'explanatory' system.

That means if I issue a read request, the flash takes 1 minute to return the
requested data to the program.

But remember, this example system has massive parallel scalability.

I issue 2 read requests, both read requests return after 1 minute.
I issue 3 read requests, all 3 return after 1 minute.

I defined this made-up system as one where, if you issue 60 billion read
requests, they all return simultaneously after 1 minute.

Let's do some math.

60,000,000,000 IOs divided by 60 seconds: this system does 1 billion IOPS!
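
To make the arithmetic concrete, here is a minimal Python sketch using only the
made-up numbers above (nothing here is a real measurement):

    # Made-up numbers from the example: 60 billion reads, all completing
    # within one 60-second window on this hypothetical system.
    requests_completed = 60_000_000_000
    elapsed_seconds = 60

    iops = requests_completed / elapsed_seconds
    print(f"{iops:,.0f} IOPS")  # prints: 1,000,000,000 IOPS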

Wow, what wouldn't run fast with 1 billion IOPS?

The answer is: most programs would not, not with a latency as high as a
1-minute wait for data to return.  Most apps wouldn't run acceptably, not
at all.

Imagine you are in Windows, or Solaris, or Linux, and every time you need to
go to disk there is a 1-minute wait.  It would be totally unacceptable.
Despite the IOPS, latency matters.
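
Here is a rough sketch of why, again with the made-up 1-minute latency and
nothing real: an app whose reads depend on each other pays the latency once
per read, while an app that can issue everything up front pays it only once.

    # Hypothetical 1-minute read latency from the example above.
    latency_seconds = 60

    # Dependent reads: issue a read, wait, then issue the next one.
    # Wall time grows with the number of reads.
    def serial_wall_time(num_reads, latency):
        return num_reads * latency

    # Independent reads issued all at once: this made-up system completes
    # them in parallel, so wall time is just one latency.
    def parallel_wall_time(num_reads, latency):
        return latency

    print(serial_wall_time(1000, latency_seconds))    # 60000 seconds -- unusable
    print(parallel_wall_time(1000, latency_seconds))  # 60 seconds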

Certain types of apps wouldn't be latency sensitive; some people would love to
have this 1 billion IOPS system :)

The good news is that the F20's latency, even though we don't know the exact
figure, is certainly not 1 minute, and we can speculate it is much better than
traditional rotating disks.  But let's blue-sky this and make up a number, say
0.41ms (410 microseconds).  And let's say you have a competitor at 0.041ms
(41 microseconds).  When would the competitor have a real advantage?  Well, if
an app issued a read, waited for the results, issued another read, waited for
the results, and did this, say, 100 million times or so, then yes, that
low-latency card is going to help accelerate that app.  Computers are fast and
they deal with a lot of data, and in the real world a surprising amount of that
work doesn't scale out.  I've seen sales and financial apps do 100 million IOs
and more.  Even a Sun blogger, I read recently, wrote an article about how the
F20, compared to traditional disks, speeds up PeopleSoft jobs.
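
To put rough numbers on that scenario (both latencies are the made-up figures
above, not measured F20 or competitor data), 100 million strictly serial reads
work out like this:

    # Made-up per-read latencies: 0.41 ms vs 0.041 ms.
    reads = 100_000_000
    slow_ms, fast_ms = 0.41, 0.041

    slow_hours = reads * slow_ms / 1000 / 3600
    fast_hours = reads * fast_ms / 1000 / 3600

    print(f"0.41 ms card:  {slow_hours:.1f} hours")   # ~11.4 hours
    print(f"0.041 ms card: {fast_hours:.1f} hours")   # ~1.1 hours

That kind of gap is exactly where per-read latency, not aggregate IOPS, decides
how long the job takes.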

Flash has lower latency than traditional disks; that's part of what makes it
competitive.  And by the same token, flash with lower latency than other flash
has a competitive advantage.

Some here say latency (that is, wait time) doesn't matter with flash, and that
latency (waiting) only matters with traditional hard drives.

Uhm, who told you that?  I've never heard anyone make that case before,
anywhere, ever.

And let's give you credit and say you had some minor point to make about HDD
and flash differences... still, you are putting it in such a way that someone
could draw the wrong conclusion, so clarify this point: you are certainly not
suggesting that higher wait times speed up an application, correct?

Or that the F20's latency cannot impact performance, right?  C'mon, some common
sense, anyone?