[GitHub] [hadoop-thirdparty] aajisaka commented on issue #4: HADOOP-16754. port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-22 Thread GitBox
aajisaka commented on issue #4: HADOOP-16754. port HADOOP-16754 (Fix docker 
failed to build yetus/hadoop) to thirdparty Dockerfile
URL: https://github.com/apache/hadoop-thirdparty/pull/4#issuecomment-577077965
 
 
   +1





[GitHub] [hadoop-thirdparty] vinayakumarb merged pull request #4: HADOOP-16824. port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-22 Thread GitBox
vinayakumarb merged pull request #4: HADOOP-16824. port HADOOP-16754 (Fix 
docker failed to build yetus/hadoop) to thirdparty Dockerfile
URL: https://github.com/apache/hadoop-thirdparty/pull/4
 
 
   





[GitHub] [hadoop-thirdparty] vinayakumarb commented on issue #4: HADOOP-16824. port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-22 Thread GitBox
vinayakumarb commented on issue #4: HADOOP-16824. port HADOOP-16754 (Fix docker 
failed to build yetus/hadoop) to thirdparty Dockerfile
URL: https://github.com/apache/hadoop-thirdparty/pull/4#issuecomment-577108133
 
 
   Thanks @aajisaka for review.





[jira] [Resolved] (HADOOP-16824) [thirdparty] port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to thirdparty Dockerfile

2020-01-22 Thread Vinayakumar B (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinayakumar B resolved HADOOP-16824.

Fix Version/s: thirdparty-1.0.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged PR.
Thanks [~aajisaka] for review.

> [thirdparty] port HADOOP-16754 (Fix docker failed to build yetus/hadoop) to 
> thirdparty Dockerfile
> -
>
> Key: HADOOP-16824
> URL: https://issues.apache.org/jira/browse/HADOOP-16824
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: thirdparty-1.0.0
>
>
> port HADOOP-16754 to avoid Docker build failure






Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/

[Jan 21, 2020 9:05:46 AM] (aajisaka) HADOOP-16808. Use forkCount and reuseForks 
parameters instead of
[Jan 21, 2020 9:56:41 AM] (iwasakims) HADOOP-16793. Redefine log level when ipc 
connection interrupted in
[Jan 21, 2020 3:59:14 PM] (kihwal) HDFS-15125. Pull back HDFS-11353, 
HDFS-13993, HDFS-13945, and HDFS-14324




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):

   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in
   org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
   byte[], byte[], KeyConverter, ValueConverter, boolean)
   At ColumnRWHelper.java:[line 335]

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.lib.service.hadoop.TestFileSystemAccessService 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/574/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-bra

This week's Hadoop storage community online meetup

2020-01-22 Thread Wei-Chiu Chuang
Hi! After a bit of a hiatus, I'd like to revive the regular community sync!

As usual, this call is scheduled for 10am US Pacific time on 1/22/2020
(Wednesday), which is 6pm GMT, 11:30pm in India, and 2am in Beijing on 1/23
(Thursday).

Agenda for this week:
(1) Hadoop 3.3.0 release plan
(2) Use RocksDB to keep the NameNode's partial namespace in memory. There's a
great discussion between Baolong and Anu in the HDFS Slack channel, and I'd
like to use this opportunity to discuss it.

Please join via Zoom:
https://cloudera.zoom.us/j/880548968

Past meeting minutes
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit

Feedback is a gift, and I truly value your feedback. Please let me know if
you see any opportunity to improve the communication in the community.

Weichiu


[jira] [Created] (HADOOP-16825) ITestAzureBlobFileSystemCheckAccess failing

2020-01-22 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16825:
---

 Summary: ITestAzureBlobFileSystemCheckAccess failing
 Key: HADOOP-16825
 URL: https://issues.apache.org/jira/browse/HADOOP-16825
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Bilahari T H


Tests added in HADOOP-16455 are failing.

java.lang.IllegalArgumentException: The value of property
fs.azure.account.oauth2.client.id must not be null

It looks to me like there are new configuration options which are undocumented:

1. these need documentation in the testing markdown file
2. tests MUST downgrade to skip if not set (see the sketch below)
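
A minimal sketch of that "downgrade to skip" behaviour, assuming JUnit 4 and a
plain Hadoop Configuration; the class and method names below are illustrative,
not the actual ITestAzureBlobFileSystemCheckAccess code:

    import org.apache.hadoop.conf.Configuration;
    import org.junit.Assume;
    import org.junit.Before;

    public class ExampleCheckAccessTest {
      // Key taken from the exception above; the value has to be supplied
      // by whoever runs the test.
      private static final String CLIENT_ID_KEY =
          "fs.azure.account.oauth2.client.id";

      private final Configuration conf = new Configuration();

      @Before
      public void skipIfNotConfigured() {
        // Skip (rather than fail with IllegalArgumentException) when the
        // OAuth client id is not present in the test configuration.
        Assume.assumeTrue(CLIENT_ID_KEY + " is not set, skipping",
            conf.get(CLIENT_ID_KEY) != null);
      }
    }

In JUnit 4 a failed assumption in a @Before method marks the test as skipped
rather than failed, which is the downgrade item 2 asks for.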






Re: Hadoop 3.3 Release Plan Proposal

2020-01-22 Thread Brahma Reddy Battula
The wiki was updated for 3.3:
https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-3.3.0.


>I'll move out anything that isn't needed.

Thanks, Steve.

> We need to fix the shaded protobuf in
> Token issue to even get spark to compile.

Looks like this is done: https://issues.apache.org/jira/browse/HADOOP-16621

On Wed, Jan 8, 2020 at 7:41 PM Steve Loughran wrote:

> >
> > 2. Features close to finish:
> >
> >
> >   * HADOOP-15620: Über-jira: S3A phase VI: Hadoop 3.3 features.
> >     (owner: Steve Loughran)
> >   * HADOOP-15763: Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes.
> >     (owner: Steve Loughran)
> >   * HADOOP-15619: Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features.
> >     (owner: Steve Loughran)
> >
> > I'll move out anything that isn't needed.
>
> FWIW, most of these are in CDP 1.x, so there's been reasonable testing and
> I've got some provisional tuning to do. That is -if things didn't work in
> the test/production deployments, I'd know about the regressions (e.g.
> HADOOP-16751).
>
> This is S3A and ABFS code -no idea about the rest, and inevitably the big
> JAR changes will have surprises. We need to fix the shaded protobuf in
> Token issue to even get spark to compile.
>
> -Steve
>
> >
> >
>


-- 
--Brahma Reddy Battula


Re: Needs support to add more entropy to improve cryptographic randomness on Linux VMs

2020-01-22 Thread Ahmed Hussein
It turned out that INFRA was not the right place to file.
I created another one, HADOOP-16810, suggesting a quick fix for Docker.
Basically, it requires passing "-v /dev/urandom:/dev/random" to the docker run
command that launches the pre-commit tests. Any idea how to do that?
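
For what it's worth, a quick way to tell whether a given VM or container is
starved of entropy is to time a read from the blocking source. A minimal
sketch in plain Java (a hypothetical probe, not part of Hadoop or the
pre-commit scripts):

    import java.security.SecureRandom;

    public class EntropyProbe {
      public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // getInstanceStrong() resolves to the blocking provider
        // (/dev/random on Linux by default), so it stalls while the
        // kernel entropy pool is empty.
        SecureRandom strong = SecureRandom.getInstanceStrong();
        byte[] seed = strong.generateSeed(16);
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Read " + seed.length + " seed bytes in "
            + millis + " ms");
      }
    }

With the "-v /dev/urandom:/dev/random" mount in place this should return in
milliseconds; on an entropy-starved VM without it, it can block for a very
long time.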


On Wed, Jan 15, 2020 at 11:17 AM Ahmed Hussein  wrote:

> I filed a new Jira, INFRA-19730.
>
> On Wed, Jan 15, 2020 at 10:58 AM Ahmed Hussein  wrote:
>
>>
>>> For testing, it is important to be able to use Random instead of
>>> SecureRandom. We should never be using the OS entropy for unit tests.
>>>
>>
>> Is this feasible to do? I assume that going that road implies scanning
>> through all objects and classes and overriding the methods to consume
>> "java.util.Random" instead of "SecureRandom". For example, the JVM may get
>> stuck initializing "KeyGenerator", and changing that to Random seems a
>> painful task.
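
A middle ground that avoids rewriting every caller, sketched below as a
hypothetical helper (plain Java, not Hadoop code), is to hand tests an
explicitly seeded SecureRandom; with the Sun SHA1PRNG provider, seeding before
the first output means the generator derives everything from the supplied seed
and never touches /dev/random (treat that provider behaviour as an assumption
to verify on the JDK in use):

    import java.nio.charset.StandardCharsets;
    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;

    public final class NonBlockingTestRandom {
      private NonBlockingTestRandom() {
      }

      // SHA1PRNG seeded explicitly before any output is requested does not
      // self-seed from the OS entropy pool (Sun provider behaviour, assumed).
      public static SecureRandom create(String seed)
          throws NoSuchAlgorithmException {
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        sr.setSeed(seed.getBytes(StandardCharsets.UTF_8));
        return sr;
      }
    }

Tests that already accept a SecureRandom through configuration or injection
could then run deterministically without being rewritten to use
java.util.Random.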
>>
>>> urandom is a perfectly acceptable option. I am not sure who maintains the
>>> pre-commit machines these days.
>>>
>>
>> Another thing to put into consideration is that changing the Java config
>> will only fix the problem for Java, but the Docker image may get stuck with
>> a similar issue in native or SSL.
>> This makes installing the rng service the reliable way to make the docker
>> image stable.
>>
>>
>>
>>> On Tue, Jan 14, 2020 at 1:05 PM Ahmed Hussein  wrote:
>>>
 I tried setting JVM option  "-Djava.security.egd=file:///dev/./urandom"
 from the command line but it did not work for me.
 In my case, I assume the JVM ignores  "java.security.egd" because it is
 considered a security thing.

 On Tue, Jan 14, 2020 at 1:27 PM István Fajth  wrote:

 > Hi,
 >
 > based on this article, we might not need infra to do this, but we can
 > specify /dev/urandom as a random source directly to the JVM for test runs
 > with the option: -Djava.security.egd=file:///dev/urandom.
 >
 > https://security.stackexchange.com/questions/14386/what-do-i-need-to-configure-to-make-sure-my-software-uses-dev-urandom
 >
 > Pifta
 >
 > Sean Busbey wrote (on Tue, Jan 14, 2020 at 20:14):
 >
 >> You should file an INFRA jira asking about this. They can get in touch
 >> with the folks who maintain the Hadoop labeled nodes.
 >>
 >> On Tue, Jan 14, 2020 at 12:42 PM Ahmed Hussein wrote:
 >> >
 >> > Hi,
 >> >
 >> > I was investigating a JUnit test (MAPREDUCE-7079:
 >> > TestMRIntermediateDataEncryption is failing in precommit builds) that
 >> > was consistently hanging on Linux VMs and failing Mapreduce pre-builds.
 >> > I found that the test slows down or hangs indefinitely whenever Java
 >> > reads the random file.
 >> >
 >> > I explored two different ways to get that test case to work properly
 >> > on my local Linux VM running rel7:
 >> >
 >> >1. Install "haveged" and "rng-tools" on the virtual machine running
 >> >   Rel7, then start the rngd service: {{sudo service rngd start}}
 >> >2. Change the Java configuration to load urandom:
 >> >   {{sudo vim $JAVA_HOME/jre/lib/security/java.security}}
 >> >   ## Change the line "securerandom.source=file:/dev/random" to read:
 >> >   securerandom.source=file:/dev/./urandom
 >> >
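
A minimal, hypothetical check (plain Java, not part of Hadoop) to confirm
which entropy source the JVM configuration in item 2 above actually resolves
to:

    import java.security.Security;

    public class ShowSecureRandomSource {
      public static void main(String[] args) {
        // Prints the securerandom.source entry from the JVM's java.security
        // file; note that -Djava.security.egd, when set, takes precedence
        // over this value.
        System.out.println("securerandom.source = "
            + Security.getProperty("securerandom.source"));
      }
    }
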
 >> >
 >> > Is it possible to apply any of the above solutions to the VM that
 >> > runs the precommit builds?
 >>
 >>
 >>
 >> --
 >> busbey
 >>
 >>
 >>
 >
 > --
 > Pifta
 >

>>>


[jira] [Created] (HADOOP-16826) ABFS: update abfs.md to include config keys for identity transformation

2020-01-22 Thread Da Zhou (Jira)
Da Zhou created HADOOP-16826:


 Summary: ABFS: update abfs.md to include config keys for identity 
transformation
 Key: HADOOP-16826
 URL: https://issues.apache.org/jira/browse/HADOOP-16826
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2
Reporter: Da Zhou


Update the abfs.md to include the config keys for identity transformation.


