Re: Minutes: Hadoop Contributor Meeting 05/06/2010
Not sure why my attachment didn't make it to the list. Anyway, I've posted Arun's notes on the wiki at http://wiki.apache.org/hadoop/HadoopContributorsMeeting20100506, and included the content of my slide there. (Attachments on the wiki have been disabled - as of today apparently, see SVN commit r775220 - so I wasn't able to post the slide there either.)

Tom

On Fri, May 7, 2010 at 9:36 AM, Tom White wrote:
> Here's my (single) slide about the 0.21 release.
>
> Tom
>
> On Thu, May 6, 2010 at 5:38 PM, Arun C Murthy wrote:
>> # Shared goals
>> - Hadoop is HDFS & Map-Reduce in the context of this set of slides
>> # Priorities
>> * Yahoo
>> - Correctness
>> - Availability: not the same as high availability (6 9s etc.), i.e. no SPOFs
>> - API compatibility
>> - Scalability
>> - Operability
>> - Performance
>> - Innovation
>> * Cloudera
>> - Test coverage, API coverage
>> - APL-licensed codec (lzo replacement)
>> - Security
>> - Wire compatibility
>> - Cluster-wide resource availability
>> - New APIs (FileContext, MR Context Objs.), documentation of their advantages
>> - HDFS to better support non-MR use cases
>> - Cluster metrics hooks
>> - MR modularity (package)
>> * Facebook
>> - Correctness
>> - Availability, High Availability, Failover, Continuous Availability
>> - Scalability
>> # The bar for patches/features keeps rising as the project matures
>> - Build consensus (e.g. Python Enhancement Proposals, JSRs, etc.)
>> - Run/test on your own to prove the concept/feature, or branch and finish
>> - Early versions of libraries should be started outside of the project (github etc.), e.g. input formats, compression codecs
>> - github for all the above
>> - Prune contrib
>> # Maven for packaging
>> # Tom: hadoop-0.21 (Tom - can you please post your slides? Thanks!)
>> # Owen: Release Manager (see slides)
>> # Agenda for next meeting
>> - Eli: Hadoop Enhancement Process (modelled on PEP?)
>> - Branching strategies: development models
>>
>> Arun
[jira] Created: (HDFS-1138) Modification times are being overwritten when FSImage loads
Modification times are being overwritten when FSImage loads
-----------------------------------------------------------

                Key: HDFS-1138
                URL: https://issues.apache.org/jira/browse/HDFS-1138
            Project: Hadoop HDFS
         Issue Type: Bug
           Reporter: Dmytro Molkov


A very easy way to spot the bug is to do a second restart in TestRestartDFS and check that the modification time on root is the same as it was before the second restart.

The problem is in addToParent, which modifies the parent's time whenever the modification time of the child is greater than the parent's. So if you have /DIR/File, then on creation of the file the modification time of DIR is set correctly; but on cluster restart, or when the secondary is checkpointing and reading the image, loading adds DIR to "/" and writes a new modification time for "/", namely the modification time of DIR. This is clearly a bug. I will attach a patch with one more parameter being passed from loadFSImage that says to not propagate the time.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
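The fix the reporter describes - an extra flag so image loading does not propagate child modification times upward - can be sketched roughly as below. The class and method names here are illustrative stand-ins, not the actual FSImage/INode code.

```java
// Hypothetical sketch of the HDFS-1138 fix: addChild takes a flag so that
// FSImage loading does not overwrite the parent's persisted mtime.
import java.util.ArrayList;
import java.util.List;

public class MtimeSketch {
    static class Dir {
        long modTime;
        final List<Dir> children = new ArrayList<>();

        // propagateMtime is true for live namespace operations,
        // false when reconstructing the tree from a saved image.
        void addChild(Dir child, boolean propagateMtime) {
            children.add(child);
            if (propagateMtime && child.modTime > this.modTime) {
                this.modTime = child.modTime;  // only for live operations
            }
        }
    }

    public static long loadThenCheck() {
        Dir root = new Dir();
        root.modTime = 100;        // persisted mtime of "/"
        Dir dir = new Dir();
        dir.modTime = 200;         // child newer than parent in the image
        root.addChild(dir, false); // image load: keep root's persisted mtime
        return root.modTime;       // stays 100 instead of becoming 200
    }
}
```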
[jira] Created: (HDFS-1139) Forward-port TestFileAppend4 tests from 20-append branch
Forward-port TestFileAppend4 tests from 20-append branch
--------------------------------------------------------

                Key: HDFS-1139
                URL: https://issues.apache.org/jira/browse/HDFS-1139
            Project: Hadoop HDFS
         Issue Type: Test
         Components: data-node, hdfs client, name-node, test
   Affects Versions: 0.21.0
           Reporter: Todd Lipcon
           Assignee: Todd Lipcon
            Fix For: 0.21.0


In working on the append fixes for branch 20 we've added a number of tests, several of which expose bugs in trunk's append as well. This issue is to forward-port all of the tests - it's easier to do that in one patch and then open separate JIRAs to fix each one that represents a trunk bug.
[jira] Created: (HDFS-1140) Speedup INode.getPathComponents
Speedup INode.getPathComponents
-------------------------------

                Key: HDFS-1140
                URL: https://issues.apache.org/jira/browse/HDFS-1140
            Project: Hadoop HDFS
         Issue Type: Bug
           Reporter: Dmytro Molkov


When the namenode is loading the image, a significant amount of time is spent in DFSUtil.string2Bytes. We have a very specific workload here: the path the namenode calls getPathComponents for shares its first N - 1 components with the previous path this method was called for (assuming the current path has N components). Hence we can improve the image load time by caching the result of the previous conversion.

We thought of using a simple LRU cache for components, but in practice String.getBytes gets optimized at runtime and an LRU cache doesn't perform as well; keeping just the latest path's components and their byte translations in two arrays gives quite a performance boost. I could get another 20% off the time to load the image on our cluster (30 seconds vs. 24), and I wrote a simple benchmark that tests performance with and without caching.
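The "two arrays" idea described above - remember only the previous path's components and their byte conversions, and reuse any shared prefix - might look like this. The class name and the path-splitting details are assumptions for illustration; the real patch lives in DFSUtil/INode.

```java
// Illustrative one-entry cache for path-component byte conversion.
// Consecutive paths during image loading typically share all but the
// last component, so most conversions become cache hits.
import java.nio.charset.StandardCharsets;

public class PathCache {
    private String[] prevComponents = new String[0];
    private byte[][] prevBytes = new byte[0][];

    // Convert each component of an absolute path (e.g. "/a/b/c") to bytes,
    // reusing the previous call's result for any shared leading components.
    public byte[][] getPathComponents(String path) {
        String[] components = path.substring(1).split("/");
        byte[][] result = new byte[components.length][];
        for (int i = 0; i < components.length; i++) {
            if (i < prevComponents.length
                    && prevComponents[i].equals(components[i])) {
                result[i] = prevBytes[i];  // cache hit: reuse earlier bytes
            } else {
                result[i] = components[i].getBytes(StandardCharsets.UTF_8);
            }
        }
        prevComponents = components;
        prevBytes = result;
        return result;
    }
}
```

A second lookup for a sibling file reuses the parent component's byte array by identity, which is exactly where the string2Bytes cost is saved.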
[jira] Created: (HDFS-1141) completeFile does not check lease ownership
completeFile does not check lease ownership
-------------------------------------------

                Key: HDFS-1141
                URL: https://issues.apache.org/jira/browse/HDFS-1141
            Project: Hadoop HDFS
         Issue Type: Bug
         Components: name-node
   Affects Versions: 0.21.0
           Reporter: Todd Lipcon
           Assignee: Todd Lipcon
           Priority: Blocker


completeFile should check that the caller still owns the lease of the file that it's completing. This is for the 'testCompleteOtherLeaseHoldersFile' case in HDFS-1139.
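A minimal sketch of the ownership check the issue asks for, with a toy lease table standing in for the NameNode's lease manager (all names here are made up for illustration):

```java
// Sketch of the HDFS-1141 check: completeFile must verify the caller
// still holds the lease before closing the file.
import java.util.HashMap;
import java.util.Map;

public class LeaseCheck {
    private final Map<String, String> leaseHolderByPath = new HashMap<>();

    public void assignLease(String path, String holder) {
        leaseHolderByPath.put(path, holder);
    }

    // Returns true only if the caller still owns the lease on the file.
    public boolean completeFile(String path, String caller) {
        String holder = leaseHolderByPath.get(path);
        if (holder == null || !holder.equals(caller)) {
            return false;  // caller lost (or never had) the lease: reject
        }
        leaseHolderByPath.remove(path);  // file closed, lease released
        return true;
    }
}
```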
[jira] Created: (HDFS-1142) Lease recovery doesn't reassign lease when triggered by append()
Lease recovery doesn't reassign lease when triggered by append()
----------------------------------------------------------------

                Key: HDFS-1142
                URL: https://issues.apache.org/jira/browse/HDFS-1142
            Project: Hadoop HDFS
         Issue Type: Bug
         Components: name-node
   Affects Versions: 0.21.0
           Reporter: Todd Lipcon
           Assignee: Todd Lipcon
           Priority: Blocker


If a soft lease has expired and another writer calls append(), it triggers lease recovery but doesn't reassign the lease to a new owner. Therefore, the old writer can continue to allocate new blocks, try to steal back the lease, etc. This is for the testRecoveryOnBlockBoundary case of HDFS-1139.
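The intended behavior can be sketched as follows: when recovery is triggered by another client's append() after soft-lease expiry, the lease should move to the new caller so the old writer's subsequent calls fail. All names and the 60-second soft limit below are illustrative, not the actual NameNode code.

```java
// Sketch of the HDFS-1142 fix: on append() past soft-lease expiry,
// reassign the lease instead of leaving it with the old writer.
public class LeaseReassign {
    private String holder;
    private long lastRenewalMillis;
    private static final long SOFT_LIMIT_MILLIS = 60_000;

    public LeaseReassign(String holder, long nowMillis) {
        this.holder = holder;
        this.lastRenewalMillis = nowMillis;
    }

    // If the current holder's soft lease has expired, a new caller's
    // append() takes the lease over; the old writer is then rejected.
    public boolean append(String caller, long nowMillis) {
        boolean softExpired = nowMillis - lastRenewalMillis > SOFT_LIMIT_MILLIS;
        if (!caller.equals(holder)) {
            if (!softExpired) {
                return false;  // lease still validly held by someone else
            }
            holder = caller;   // the fix: reassign, don't keep the old owner
        }
        lastRenewalMillis = nowMillis;
        return true;
    }

    public String holder() { return holder; }
}
```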