help needed to test a patch for Java7 on OSX
There's now a patch (and instructions for a manual stage) to get hadoop to build on java7 on OSX.

Before I commit it, I need to make sure it doesn't break on any other platforms.

Can anyone with Linux, Windows, PowerPC, Arm, whatever (and free time, obviously) and an openjdk7 or closed-jdk7, apply the patch and make sure the build still works?

https://issues.apache.org/jira/browse/HADOOP-9350
[jira] [Created] (HADOOP-9949) Adding token credentials to UGI via JAAS login module
Kai Zheng created HADOOP-9949:
----------------------------------

Summary: Adding token credentials to UGI via JAAS login module
Key: HADOOP-9949
URL: https://issues.apache.org/jira/browse/HADOOP-9949
Project: Hadoop Common
Issue Type: Improvement
Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng

This proposes adding token credentials to UGI via a JAAS login module, which would be pluggable, flexible, secure, and more consistent with the JAAS approach.
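For illustration, here is a minimal sketch of what a JAAS login module that contributes a token credential to the Subject might look like. The class name, the HADOOP_TOKEN environment variable, and the wiring are made up for this example and are not taken from the HADOOP-9949 patch.

{code}
import java.util.Map;

import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Hypothetical example: injects a delegation token into the JAAS Subject so
// that UGI can later surface it as a credential.
public class TokenLoginModule implements LoginModule {
  private Subject subject;
  private Token<TokenIdentifier> token;

  @Override
  public void initialize(Subject subject, CallbackHandler handler,
      Map<String, ?> sharedState, Map<String, ?> options) {
    this.subject = subject;
  }

  @Override
  public boolean login() throws LoginException {
    // Illustrative source only: read an encoded token from the environment;
    // a real module could use JAAS options or a callback instead.
    String encoded = System.getenv("HADOOP_TOKEN");
    if (encoded == null) {
      return false; // nothing to contribute to this login
    }
    try {
      token = new Token<TokenIdentifier>();
      token.decodeFromUrlString(encoded); // Token's URL-safe encoded form
    } catch (Exception e) {
      throw new LoginException("Cannot decode token: " + e);
    }
    return true;
  }

  @Override
  public boolean commit() {
    if (token != null) {
      subject.getPrivateCredentials().add(token); // visible to UGI
      return true;
    }
    return false;
  }

  @Override
  public boolean abort() {
    token = null;
    return true;
  }

  @Override
  public boolean logout() {
    if (token != null) {
      subject.getPrivateCredentials().remove(token);
      token = null;
    }
    return true;
  }
}
{code}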
[jira] [Created] (HADOOP-9952) Create secured MiniClusters
Kai Zheng created HADOOP-9952:
----------------------------------

Summary: Create secured MiniClusters
Key: HADOOP-9952
URL: https://issues.apache.org/jira/browse/HADOOP-9952
Project: Hadoop Common
Issue Type: New Feature
Components: security
Reporter: Kai Zheng

This proposes to create secured Mini*Clusters by incorporating MiniKdc so that tests can run in Kerberos mode. It would be good to provide a common wrapper around MiniKdc with useful facilities for the various Mini*Clusters such as MiniDFSCluster, MiniYARNCluster, etc., and such a wrapper should be very easy to incorporate into the target Mini*Cluster.
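As a rough sketch of the kind of wrapper described above (assuming the MiniKdc class from the hadoop-minikdc module; the principal names and keytab file name are made up):

{code}
import java.io.File;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;

// Hypothetical harness a Mini*Cluster test could reuse to get a running KDC,
// a test keytab, and the realm to plug into its security configuration.
public class MiniKdcHarness {
  private MiniKdc kdc;
  private File keytab;

  public void start(File workDir) throws Exception {
    Properties conf = MiniKdc.createConf(); // default realm, port, debug flags
    kdc = new MiniKdc(conf, workDir);
    kdc.start();

    // Export keys for the test principals to a keytab the secured
    // Mini*Cluster daemons can be pointed at.
    keytab = new File(workDir, "test.keytab");
    kdc.createPrincipal(keytab, "hdfs/localhost", "HTTP/localhost");
  }

  public String getRealm() {
    return kdc.getRealm();
  }

  public File getKeytab() {
    return keytab;
  }

  public void stop() {
    if (kdc != null) {
      kdc.stop();
    }
  }
}
{code}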
[jira] [Created] (HADOOP-9951) Add SASL mechanism for TokenAuthn method
Kai Zheng created HADOOP-9951:
----------------------------------

Summary: Add SASL mechanism for TokenAuthn method
Key: HADOOP-9951
URL: https://issues.apache.org/jira/browse/HADOOP-9951
Project: Hadoop Common
Issue Type: New Feature
Reporter: Kai Zheng
Assignee: Kai Zheng

As part of HADOOP-9804, this adds a new SASL mechanism for the TokenAuthn method, including the necessary SASL client and SASL server with corresponding callbacks.
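For orientation, this is roughly how a client would be wired to a custom SASL mechanism through the standard javax.security.sasl API. The mechanism name "TOKEN", the protocol/server names, and the callback handling are illustrative only, not the HADOOP-9951 implementation, and a custom mechanism must be registered via a SaslClientFactory provider before createSaslClient can return it.

{code}
import java.util.Collections;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

public class TokenSaslClientExample {
  public static SaslClient create(final String identifier, final char[] secret)
      throws Exception {
    CallbackHandler handler = new CallbackHandler() {
      @Override
      public void handle(Callback[] callbacks) {
        for (Callback cb : callbacks) {
          if (cb instanceof NameCallback) {
            ((NameCallback) cb).setName(identifier);     // token identifier
          } else if (cb instanceof PasswordCallback) {
            ((PasswordCallback) cb).setPassword(secret); // token password
          }
        }
      }
    };
    return Sasl.createSaslClient(new String[] { "TOKEN" }, null, "hadoop",
        "localhost", Collections.<String, Object>emptyMap(), handler);
  }
}
{code}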
[jira] [Created] (HADOOP-9950) Add TokenAuthn authentication method in UGI
Kai Zheng created HADOOP-9950:
----------------------------------

Summary: Add TokenAuthn authentication method in UGI
Key: HADOOP-9950
URL: https://issues.apache.org/jira/browse/HADOOP-9950
Project: Hadoop Common
Issue Type: New Feature
Reporter: Kai Zheng
Assignee: Kai Zheng

As part of HADOOP-9804, this will add the TokenAuthn method to UGI and allow the method to be configured for Hadoop and the ecosystem.
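Today the authentication method is selected through hadoop.security.authentication (currently "simple" or "kerberos"). A sketch of how a new method would presumably be enabled once this change lands; the "token" value is hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenAuthnConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical value, assuming HADOOP-9950 adds it alongside
    // "simple" and "kerberos".
    conf.set("hadoop.security.authentication", "token");
    UserGroupInformation.setConfiguration(conf);

    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    System.out.println("auth method: " + ugi.getAuthenticationMethod());
  }
}
{code}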
Re: help needed to test a patch for Java7 on OSX
Patch built fine for me on Windows Server 2008 + JDK7.

On Wed, Sep 11, 2013 at 1:51 AM, Steve Loughran wrote:
> There's now a patch (and instructions for a manual stage) to get hadoop to
> build on java7 on OSX.
>
> Before I commit it, I need to make sure it doesn't break on any other
> platforms.
>
> Can anyone with Linux, Windows, PowerPC, Arm, whatever (and free time,
> obviously) and an openjdk7 or closed-jdk7, apply the patch and make sure
> the build still works?
>
> https://issues.apache.org/jira/browse/HADOOP-9350
[jira] [Created] (HADOOP-9953) Improve RPC server throughput
Daryn Sharp created HADOOP-9953:
----------------------------------

Summary: Improve RPC server throughput
Key: HADOOP-9953
URL: https://issues.apache.org/jira/browse/HADOOP-9953
Project: Hadoop Common
Issue Type: Improvement
Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

Bottlenecks in the RPC layer are in part holding back the performance of the NN. Even under very heavy load, the NN usually can't saturate more than a few cores, even with load patterns dominated by read ops. This will be an umbrella for issues discovered.
[jira] [Created] (HADOOP-9955) RPC idle connection closing is extremely inefficient
Daryn Sharp created HADOOP-9955:
----------------------------------

Summary: RPC idle connection closing is extremely inefficient
Key: HADOOP-9955
URL: https://issues.apache.org/jira/browse/HADOOP-9955
Project: Hadoop Common
Issue Type: Sub-task
Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

The RPC server listener loops accepting connections and distributing the new connections to socket readers, and then conditionally and periodically performs a scan for idle connections. The idle scan chooses a _random index range_ to scan in a _synchronized linked list_. With 20k+ connections, walking the range of indices in the linked list is extremely expensive. During the sweep, other threads (socket responder and readers) that want to close connections are blocked, and no new connections are being accepted.
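The cost is easy to see from the data structure involved: every positional lookup in a java.util.LinkedList walks from the head, and the walk happens while holding the lock the other threads need. A rough model of the pattern described above (not the actual Server code; names are made up):

{code}
import java.util.LinkedList;
import java.util.List;

// Illustration only: scanning an index range of a synchronized LinkedList.
// Each get(i) and remove(i) is O(i), so the sweep is roughly O(range * size)
// while the lock blocks readers and the responder from closing connections.
class IdleScanSketch {
  private final List<Connection> connections = new LinkedList<Connection>();

  void closeIdle(int start, int end, long now, long maxIdleTime) {
    synchronized (connections) {
      for (int i = start; i < end && i < connections.size(); i++) {
        Connection c = connections.get(i); // walks the list from the head
        if (now - c.lastContact > maxIdleTime) {
          c.close();
          connections.remove(i--);         // another walk from the head
        }
      }
    }
  }

  static class Connection {
    volatile long lastContact;

    void close() {
      // close the channel, release buffers, etc.
    }
  }
}
{code}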
[jira] [Created] (HADOOP-9954) Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
Paul Han created HADOOP-9954:
----------------------------------

Summary: Hadoop 2.0.5 doc build failure - OutOfMemoryError exception
Key: HADOOP-9954
URL: https://issues.apache.org/jira/browse/HADOOP-9954
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 2.0.5-alpha
Reporter: Paul Han

When running the Hadoop build with the command-line options:
{code}
mvn package -Pdist,native,docs -DskipTests -Dtar
{code}
the build failed and an OutOfMemoryError exception was thrown:
{code}
[INFO] --- maven-source-plugin:2.1.2:test-jar (default) @ hadoop-hdfs ---
[INFO]
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default) @ hadoop-hdfs ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/findbugsTemp.xml
[INFO] Fork Value is true
     [java] Out of memory
     [java] Total memory: 477M
     [java] free memory: 68M
     [java] Analyzed: /var/lib/jenkins/workspace/Hadoop-Client-2.0.5-T-RPM/rpms/hadoop-devel.x86_64/BUILD/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/classes
     [java] Aux: /home/henkins-service/.m2/repository/org/codehaus/mojo/findbugs-maven-plugin/2.3.2/findbugs-maven-plugin-2.3.2.jar
     [java] Aux: /home/henkins-service/.m2/repository/com/google/code/findbugs/bcel/1.3.9/bcel-1.3.9.jar
     ...
     [java] Aux: /home/henkins-service/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar
     [java] Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
     [java]     at java.util.HashMap.<init>(HashMap.java:226)
     [java]     at edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefSet.<init>(UnconditionalValueDerefSet.java:68)
     [java]     at edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:650)
     [java]     at edu.umd.cs.findbugs.ba.deref.UnconditionalValueDerefAnalysis.createFact(UnconditionalValueDerefAnalysis.java:82)
     [java]     at edu.umd.cs.findbugs.ba.BasicAbstractDataflowAnalysis.getFactOnEdge(BasicAbstractDataflowAnalysis.java:119)
     [java]     at edu.umd.cs.findbugs.ba.AbstractDataflow.getFactOnEdge(AbstractDataflow.java:54)
     [java]     at edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:297)
     [java]     at edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:150)
     [java]     at edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
     [java]     at edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:205)
     [java]     at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68)
     [java]     at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:979)
     [java]     at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:230)
     [java]     at edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:348)
     [java]     at edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1057)
     [java] Java Result: 1
[INFO] No bugs found
{code}
Re: Need help for building hadoop source on windows 7
Hi Ranjan,

I just noticed this thread. Please use 'trunk' or 'branch-2' to build Hadoop 2.0 on Windows. Instructions are in BUILDING.txt. Let us know if you get stuck on any step.

Arpit

On Tue, Sep 10, 2013 at 8:23 AM, Vinayakumar B wrote:
> Hi,
> As Kai said, you can build trunk or branch-2.1 on Windows with native
> support.
>
> Please check the README.
>
> You may need to install:
> 1. Windows 7 SDK
> 2. .NET Framework 4
>
> Regards,
> Vinayakumar B
>
> On Sep 5, 2013 3:45 PM, "Zheng, Kai" wrote:
>
> > Just updated by Harsh J. Thanks.
> >
> > The trunk-win branch has long since been merged into active trunk, please
> > don't use it anymore. We have Windows instructions in the trunk and
> > branch-2 README. Sorry for any inconvenience.
> >
> > Regards,
> > Kai
> >
> > -Original Message-
> > From: Zheng, Kai
> > Sent: Thursday, September 05, 2013 4:02 PM
> > To: common-dev@hadoop.apache.org
> > Subject: RE: Need help for building hadoop source on windows 7
> >
> > Perhaps you could try the 'branch-trunk-win' branch and look for
> > build instructions in it. Not sure whether this works or not, though.
> >
> > -Original Message-
> > From: Ranjan Dutta [mailto:rdbmsdata.ran...@gmail.com]
> > Sent: Thursday, September 05, 2013 3:43 PM
> > To: common-dev@hadoop.apache.org
> > Subject: Need help for building hadoop source on windows 7
> >
> > Hi,
> >
> > I want to build Hadoop source on Windows 7. Can anybody share a document
> > related to the source build?
> >
> > Thanks
> > Ranjan
[jira] [Created] (HADOOP-9956) RPC listener inefficiently assigns connections to readers
Daryn Sharp created HADOOP-9956:
----------------------------------

Summary: RPC listener inefficiently assigns connections to readers
Key: HADOOP-9956
URL: https://issues.apache.org/jira/browse/HADOOP-9956
Project: Hadoop Common
Issue Type: Sub-task
Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

The socket listener and readers use a complex synchronization to update the reader's NIO {{Selector}}. Updating active selectors is not thread-safe, so precautions are required. However, the current locking choreography results in a serialized distribution of new connections to the parallel socket readers. A slower/busier reader can stall the listener and throttle performance. The problem manifests as unexpectedly low CPU utilization by the listener and readers (~20-30%) under heavy load. The call queue is shallow when it should be overflowing.
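One common way to avoid the serialized hand-off is to have the listener drop each accepted channel onto a per-reader queue and wake that reader's selector, so the reader registers the channel itself. A minimal sketch of that idea (illustrative only, not the eventual HADOOP-9956 patch):

{code}
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class ReaderSketch implements Runnable {
  private final Selector selector;
  private final BlockingQueue<SocketChannel> pending =
      new LinkedBlockingQueue<SocketChannel>();

  ReaderSketch() throws IOException {
    selector = Selector.open();
  }

  // Called from the listener thread; never touches the reader's selector
  // directly, so a slow reader cannot stall the accept loop.
  void addConnection(SocketChannel channel) {
    pending.add(channel);
    selector.wakeup(); // break out of select() to pick up the new channel
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        // Register any handed-off channels on this reader's own selector.
        SocketChannel ch;
        while ((ch = pending.poll()) != null) {
          ch.configureBlocking(false);
          ch.register(selector, SelectionKey.OP_READ);
        }
        selector.select(); // then wait for readable channels
        // ... read and dispatch RPC requests for the selected keys ...
        selector.selectedKeys().clear();
      }
    } catch (IOException e) {
      // illustration only: a real reader would log and clean up here
    }
  }
}
{code}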
[jira] [Created] (HADOOP-9957) UserGroupInformation.checkTGTAndReloginFromKeytab() does the same thing as reloginFromKeytab()
Aimee Cheng created HADOOP-9957:
----------------------------------

Summary: UserGroupInformation.checkTGTAndReloginFromKeytab() does the same thing as reloginFromKeytab()
Key: HADOOP-9957
URL: https://issues.apache.org/jira/browse/HADOOP-9957
Project: Hadoop Common
Issue Type: Wish
Components: security
Reporter: Aimee Cheng

The methods checkTGTAndReloginFromKeytab() and reloginFromKeytab() in UserGroupInformation now do the same thing. reloginFromKeytab() checks the TGT expiry time and, if the ticket is still fresh, does not re-login, which is exactly what checkTGTAndReloginFromKeytab() does. I suggest we either let reloginFromKeytab() skip the TGT check and give developers a way to control when to re-login, or simply remove the checkTGTAndReloginFromKeytab() method.
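For context, this is how the two methods are typically used by a long-running service today (usage sketch only; the principal and keytab path are made up):

{code}
import org.apache.hadoop.security.UserGroupInformation;

public class ReloginExample {
  public static void main(String[] args) throws Exception {
    // Log in from a keytab once at startup.
    UserGroupInformation.loginUserFromKeytab(
        "service/host@EXAMPLE.COM", "/etc/security/keytabs/service.keytab");
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();

    // Intended as the cheap, periodic call: re-login only if the TGT is
    // close to expiring.
    ugi.checkTGTAndReloginFromKeytab();

    // Per HADOOP-9957, this now performs the same freshness check before
    // re-logging in, so the two calls behave alike.
    ugi.reloginFromKeytab();
  }
}
{code}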