+1 (non-binding)
 
I downloaded release candidate 2 onto a 12-node cluster and performed the
following tests:
 
##################################################################
Create files using S-Live:
     hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
       -appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
       -blockSize 67108864,67108864 -create 100,uniform -delete 0,uniform \
       -dirSize 16 -duration 300 -files 1024 -ls 0,uniform -maps 20 \
       -mkdir 0,uniform -ops 10000 -packetSize 65536 -readSize 1,4294967295 \
       -read 0,uniform -reduces 5 -rename 0,uniform -replication 1,3 \
       -resFile $outFile -seed 12345678 -sleep 100,1000 -writeSize 1,67108864
     Output:
Basic report for operation type CreateOp
-------------
Measurement "bytes_written" = 32046140254
Measurement "milliseconds_taken" = 233234
Measurement "op_count" = 7929
Rate for measurement "bytes_written" = 131.031 MB/sec
Rate for measurement "op_count" = 33.996 operations/sec
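
(Sanity check on the report, not part of the run: the rates are just the
measurements divided by the elapsed seconds. The whole-MB conversion in the
one-liner below is a guess at how SliveTest rounds, but the numbers line up.)

     awk 'BEGIN { t = 233234 / 1000.0;
                  printf "%.3f MB/sec, %.3f ops/sec\n",
                         int(32046140254 / 1048576) / t, 7929 / t }'
     Output: 131.031 MB/sec, 33.996 ops/sec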

##########################################################
 
Do random deletes, reads, mkdirs, lists, and renames using S-Live:
     hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
       -appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
       -blockSize 67108864,67108864 -create 0,uniform -delete 20,uniform \
       -dirSize 16 -duration 300 -files 1024 -ls 20,uniform -maps 20 \
       -mkdir 20,uniform -ops 10000 -packetSize 65536 -readSize 1,4294967295 \
       -read 20,uniform -reduces 5 -rename 20,uniform -replication 1,3 \
       -resFile $outFile -seed 12345678 -sleep 100,1000 -writeSize 1,67108864
     Output:
Basic report for operation type DeleteOp 
-------------
Measurement "milliseconds_taken" = 6008007
Measurement "op_count" = 27148
Rate for measurement "op_count" = 4.519 operations/sec
-------------
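
(For what it's worth, 6,008,007 ms is about 20 maps x the 300-second duration,
so the elapsed time appears to be summed across the map tasks, making the rate
per slot rather than cluster-wide. Per-op latency works out to roughly 221 ms:)

     awk 'BEGIN { printf "%.3f ops/sec, %.1f ms per delete\n",
                          27148 / (6008007 / 1000.0), 6008007 / 27148 }'
     Output: 4.519 ops/sec, 221.3 ms per delete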

#########################################################################
 
     I also ran randomwriter.
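
(For reference, randomwriter is the stock example job; the examples jar name
and the output path below are placeholders for this build, but the usual
invocation is along these lines:)

     hadoop --config $HADOOP_CONF_DIR jar $HADOOP_HOME/hadoop-examples-*.jar \
       randomwriter /user/$USER/random-writer-out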
 
Thanks,
-Eric Payne
 
 
------ Forwarded Message
From: Owen O'Malley <o...@hortonworks.com>
Reply-To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
Date: Thu, 18 Aug 2011 00:28:20 -0700
To: Owen O'Malley <o...@hortonworks.com>
Cc: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
Subject: Re: [VOTE] Should we release 0.20.204.0rc2?


On Aug 9, 2011, at 8:55 AM, Owen O'Malley wrote:

> All,
>  Matt rolled a 0.20.204.0rc1, but I think it got lost in the previous vote
> thread. Unfortunately, it had the version as 0.20.204 and didn't update the
> release notes. I've updated it, run the regression tests and I think we should
> release it. I've put the tarball up at:
> 
> http://people.apache.org/~omalley/hadoop-0.20.204.0-rc2

This vote is still running with no votes other than mine.

I've tested with and without security on a 60-node cluster, and I'm seeing
some failures, but not that many. On a terasort with 15,000 maps and 200
reduces, I ran the following cases:

security + Linux task controller: 2 failures (both mr-2651)

no security + default task controller: 6-7 failures (seems to be a race
condition in cleanup)

Even in the no-security case, it is only losing tasks about 0.05% of the time
(6-7 failures out of 15,000 maps).

It isn't perfect, but this is the code that Yahoo is currently running. I
think we should release it.

-- Owen

------ End of Forwarded Message
