John:
At first glance, yes. But how were the records sequenced? I.e., 100, 100, 100, 100, etc., or were they randomly placed?
Ed
On Jan 16, 2015, at 12:37 PM, John McKown wrote:
http://opensource.com/business/15/1/apache-spark-new-world-record
<quote>
In October 2014, Databricks participated in the Sort Benchmark and set a new world record for sorting 100 terabytes (TB) of data, or 1 trillion 100-byte records. The team used Apache Spark <http://spark.apache.org/> on 207 EC2 virtual machines and sorted 100 TB of data in 23 minutes.
</quote>
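As a quick sanity check on the quoted figures (assuming decimal terabytes, which the Sort Benchmark rules use), the record count follows directly from the data size:

```python
# Verify that 100 TB of 100-byte records is indeed 1 trillion records.
total_bytes = 100 * 10**12   # 100 TB (decimal, per Sort Benchmark convention)
record_size = 100            # bytes per record
records = total_bytes // record_size
print(records)               # 1000000000000, i.e. 1 trillion
```

That works out to roughly 72 GB sorted per minute per virtual machine over the 23-minute run.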
Impressive to me.
--
While a transcendent vocabulary is laudable, one must be eternally careful so that the calculated objective of communication does not become ensconced in obscurity. In other words, eschew obfuscation.
111,111,111 x 111,111,111 = 12,345,678,987,654,321
Maranatha! <><
John McKown
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
----------------------------------------------------------------------