[jira] [Created] (HDFS-10920) TestStorageMover#testNoSpaceDisk is failing intermittently
Rakesh R created HDFS-10920:
-------------------------------

             Summary: TestStorageMover#testNoSpaceDisk is failing intermittently
                 Key: HDFS-10920
                 URL: https://issues.apache.org/jira/browse/HDFS-10920
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Rakesh R
            Assignee: Rakesh R

TestStorageMover#testNoSpaceDisk test case is failing frequently in the build.

References: [HDFS-Build_16890|https://builds.apache.org/job/PreCommit-HDFS-Build/16890], [HDFS-Build_16895|https://builds.apache.org/job/PreCommit-HDFS-Build/16895]
[jira] [Created] (HDFS-10921) TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
Eric Badger created HDFS-10921:
-------------------------------

             Summary: TestDiskspaceQuotaUpdate doesn't wait for NN to get out of safe mode
                 Key: HDFS-10921
                 URL: https://issues.apache.org/jira/browse/HDFS-10921
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Eric Badger
            Assignee: Eric Badger

Test fails intermittently because the NN is still in safe mode.
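The usual guard for this kind of flakiness in MiniDFSCluster tests is to poll until the NameNode has left safe mode before asserting anything. The sketch below shows that general pattern only, not the actual HDFS-10921 patch; the class/helper names and the poll interval and timeout values are illustrative.

import java.io.IOException;

import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.test.GenericTestUtils;

class SafeModeWait {
  // Block until the NameNode reports it has left safe mode, checking
  // every 100 ms and giving up after 30 s (illustrative values).
  static void waitForSafeModeExit(final MiniDFSCluster cluster)
      throws Exception {
    GenericTestUtils.waitFor(() -> {
      try {
        return !cluster.getFileSystem().isInSafeMode();
      } catch (IOException e) {
        // Treat transient RPC failures as "not ready yet".
        return false;
      }
    }, 100, 30000);
  }
}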
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/

[Sep 27, 2016 11:07:59 AM] (naganarasimha_gr) YARN-5660. Wrong audit constants are used in Get/Put of priority in
[Sep 27, 2016 2:03:10 PM] (brahma) HDFS-10889. Remove outdated Fault Injection Framework documentaion.
[Sep 27, 2016 4:29:24 PM] (iwasakims) HDFS-10426. TestPendingInvalidateBlock failed in trunk. Contributed by
[Sep 27, 2016 5:02:15 PM] (arp) HDFS-10828. Fix usage of FsDatasetImpl object lock in ReplicaMap. (Arpit
[Sep 27, 2016 6:26:45 PM] (wangda) HADOOP-13544. JDiff reports unncessarily show unannotated APIs and cause
[Sep 27, 2016 6:54:55 PM] (wangda) YARN-3142. Improve locks in AppSchedulingInfo. (Varun Saxena via wangda)
[Sep 27, 2016 9:55:28 PM] (yzhang) HDFS-10376. Enhance setOwner testing. (John Zhuge via Yongjun Zhang)
[Sep 28, 2016 12:36:53 AM] (liuml07) HADOOP-13658. Replace config key literal strings with names I: hadoop
[Sep 29, 2016 1:18:27 AM] (kai.zheng) Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This
[Sep 28, 2016 2:28:41 AM] (aengineer) HDFS-10900. DiskBalancer: Complete the documents for the report command.
[Sep 28, 2016 3:40:17 AM] (liuml07) HDFS-10915. Fix time measurement bug in TestDatanodeRestart. Contributed
[Sep 28, 2016 4:35:06 AM] (aengineer) HDFS-9850. DiskBalancer: Explore removing references to FsVolumeSpi.

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.TestDFSShell
        hadoop.hdfs.TestRenameWhileOpen
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
        hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-compile-javac-root.txt [168K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-checkstyle-root.txt [16M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-pylint.txt [16K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-patch-shelldocs.txt [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/whitespace-eol.txt [11M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/whitespace-tabs.txt [1.3M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [148K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [56K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [268K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]

    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/178/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT
http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/

[Sep 27, 2016 2:03:10 PM] (brahma) HDFS-10889. Remove outdated Fault Injection Framework documentaion.
[Sep 27, 2016 4:29:24 PM] (iwasakims) HDFS-10426. TestPendingInvalidateBlock failed in trunk. Contributed by
[Sep 27, 2016 5:02:15 PM] (arp) HDFS-10828. Fix usage of FsDatasetImpl object lock in ReplicaMap. (Arpit
[Sep 27, 2016 6:26:45 PM] (wangda) HADOOP-13544. JDiff reports unncessarily show unannotated APIs and cause
[Sep 27, 2016 6:54:55 PM] (wangda) YARN-3142. Improve locks in AppSchedulingInfo. (Varun Saxena via wangda)
[Sep 27, 2016 9:55:28 PM] (yzhang) HDFS-10376. Enhance setOwner testing. (John Zhuge via Yongjun Zhang)
[Sep 28, 2016 12:36:53 AM] (liuml07) HADOOP-13658. Replace config key literal strings with names I: hadoop
[Sep 29, 2016 1:18:27 AM] (kai.zheng) Revert "HADOOP-13584. hdoop-aliyun: merge HADOOP-12756 branch back" This
[Sep 28, 2016 2:28:41 AM] (aengineer) HDFS-10900. DiskBalancer: Complete the documents for the report command.
[Sep 28, 2016 3:40:17 AM] (liuml07) HDFS-10915. Fix time measurement bug in TestDatanodeRestart. Contributed
[Sep 28, 2016 4:35:06 AM] (aengineer) HDFS-9850. DiskBalancer: Explore removing references to FsVolumeSpi.
[Sep 28, 2016 9:48:18 AM] (vvasudev) YARN-5662. Provide an option to enable ContainerMonitor. Contributed by
[Sep 28, 2016 10:40:10 AM] (varunsaxena) YARN-5599. Publish AM launch command to ATS (Rohith Sharma K S via Varun

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
        hadoop.hdfs.TestFileChecksum
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
        hadoop.hdfs.tools.TestDFSAdminWithHA
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
        hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
        hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
        hadoop.yarn.server.timelineservice.storage.common.TestSeparator
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.resourcemanager.TestResourceTrackerService
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestNMClient
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
        hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers

    Timed out junit tests:
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
        org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
        org.apache.hadoop.mapred.TestMROpportunisticMaps

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-compile-root.txt [308K]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/108/artifact/out/patch-unit-hadoop-hdfs-project
[jira] [Reopened] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity
[ https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou reopened HDFS-10824:
----------------------------------

> MiniDFSCluster#storageCapacities has no effects on real capacity
> -----------------------------------------------------------------
>
>                 Key: HDFS-10824
>                 URL: https://issues.apache.org/jira/browse/HDFS-10824
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.8.0
>            Reporter: Xiaobing Zhou
>            Assignee: Xiaobing Zhou
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-10824-branch-2.006.patch, HDFS-10824.000.patch, HDFS-10824.001.patch, HDFS-10824.002.patch, HDFS-10824.003.patch, HDFS-10824.004.patch, HDFS-10824.005.patch, HDFS-10824.006.patch
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on real capacity. This can be reproduced by explicitly setting storageCapacities and then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to compare the results. The following is the storage report for one node with two volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not changed.
>
> adminState|DatanodeInfo$AdminStates (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
>
> [0] StorageReport (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage (id=6869)|
>
> [1] StorageReport (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage (id=6872)|
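For reference, the reproduction described above looks roughly like the following. The builder calls mirror MiniDFSCluster's test API and the 300 * 1024 capacity mirrors the report, but treat this as an illustrative sketch rather than the exact code attached to the issue.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class StorageCapacityRepro {
  public static void main(String[] args) throws Exception {
    // Ask for one datanode with two volumes of 300 KB each.
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .storagesPerDatanode(2)
        .storageCapacities(new long[] { 300 * 1024, 300 * 1024 })
        .build();
    try {
      cluster.waitActive();
      DatanodeStorageReport[] reports = cluster.getNameNodeRpc()
          .getDatanodeStorageReport(DatanodeReportType.LIVE);
      for (StorageReport r : reports[0].getStorageReports()) {
        // Per the report above, this prints the physical disk capacity
        // (~499 GB) instead of the configured 300 KB.
        System.out.println("capacity=" + r.getCapacity());
      }
    } finally {
      cluster.shutdown();
    }
  }
}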
[jira] [Created] (HDFS-10922) Adding additional unit tests for Trash
Xiaoyu Yao created HDFS-10922:
------------------------------

             Summary: Adding additional unit tests for Trash
                 Key: HDFS-10922
                 URL: https://issues.apache.org/jira/browse/HDFS-10922
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: test
            Reporter: Xiaoyu Yao

This ticket is opened to track adding unit tests for Trash.
Is anyone seeing this during trunk build?
I just noticed this during a trunk build. I was doing "mvn clean install -DskipTests". The build succeeds. Is anyone seeing this? I am using openjdk8u102.

===
[WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class in JarAnalyzer File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 18
    at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
    at org.apache.bcel.classfile.ConstantPool.<init>(ConstantPool.java:67)
    at org.apache.bcel.classfile.ClassParser.readConstantPool(ClassParser.java:222)
    at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
    at org.apache.maven.shared.jar.classes.JarClassesAnalysis.analyze(JarClassesAnalysis.java:92)
    at org.apache.maven.report.projectinfo.dependencies.Dependencies.getJarDependencyDetails(Dependencies.java:255)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderSectionDependencyFileDetails(DependenciesRenderer.java:536)
    at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
    at org.apache.maven.reporting.AbstractMavenReportRenderer.render(AbstractMavenReportRenderer.java:79)
    at org.apache.maven.report.projectinfo.DependenciesReport.executeReport(DependenciesReport.java:186)
    at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:190)
    at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.execute(AbstractProjectInfoReport.java:202)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:414)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:357)
===
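For context on the exception itself: constant-pool tag 18 is CONSTANT_InvokeDynamic, the entry javac emits for lambdas and method references, and the BCEL release used by the JarAnalyzer here evidently predates invokedynamic, so it fails on any class file containing such entries (StripeReader appears to be one). The class below is a hypothetical demonstration: compile it with JDK 8 and inspect it with javap to see the tag-18 entries.

// Demo.java (hypothetical). After "javac Demo.java", running
// "javap -v Demo | grep InvokeDynamic" lists the CONSTANT_InvokeDynamic
// (tag 18) constant-pool entries that old BCEL versions cannot parse.
import java.util.function.Supplier;

public class Demo {
  public static void main(String[] args) {
    Supplier<String> s = () -> "hello";  // lambda compiles to invokedynamic
    System.out.println(s.get());
  }
}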
Re: Is anyone seeing this during trunk build?
I used the same command but didn't see the error you saw. Here is my environment:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /Users/tyu/apache-maven-3.3.9
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"

FYI

On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee wrote:
> I just noticed this during a trunk build. I was doing "mvn clean install
> -DskipTests". The build succeeds.
> Is anyone seeing this? I am using openjdk8u102.
How to set up a local environment to run Kerberos test cases.
Hi developers,

I'd like to run the Kerberos test cases, such as "TestSecureNameNode", on my local machine, but I can't get them to work. Can anybody tell me how to set up a local environment so that those test cases run successfully? Any help would be appreciated; thanks in advance.
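One pattern worth knowing here: many of Hadoop's secure tests boot an embedded KDC through org.apache.hadoop.minikdc.MiniKdc instead of relying on a Kerberos server installed on the machine; depending on the branch, TestSecureNameNode either does this itself or expects externally supplied Kerberos configuration. A minimal MiniKdc sketch follows; the work directory, keytab name, and principals are placeholders.

import java.io.File;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcExample {
  public static void main(String[] args) throws Exception {
    // Start an embedded KDC in a scratch directory (placeholder path).
    File workDir = new File("target/minikdc");
    workDir.mkdirs();
    Properties kdcConf = MiniKdc.createConf();
    MiniKdc kdc = new MiniKdc(kdcConf, workDir);
    kdc.start();

    // Create a keytab with the principals a secure NameNode test
    // typically needs (principal names here are illustrative).
    File keytab = new File(workDir, "test.keytab");
    kdc.createPrincipal(keytab, "hdfs/localhost", "HTTP/localhost");

    System.out.println("KDC realm: " + kdc.getRealm());
    kdc.stop();
  }
}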