Re: Hadoop QA fails with "Docker failed to build yetus/hadoop:a9ad5d6"

2017-04-14 Thread Arun Suresh
Thanks Allen.

On Apr 14, 2017 3:29 PM, "Allen Wittenauer" wrote:
> > On Apr 13, 2017, at 11:13 PM, Arun Suresh wrote:
> >
> > Yup, YARN Pre-Commit tests are having the same problem as well.
> > Is there anything that can be done to fix this? Ping Yetus folks (Allen / Sean)

Re: Hadoop QA fails with "Docker failed to build yetus/hadoop:a9ad5d6"

2017-04-14 Thread Allen Wittenauer
> On Apr 13, 2017, at 11:13 PM, Arun Suresh wrote:
>
> Yup, YARN Pre-Commit tests are having the same problem as well.
> Is there anything that can be done to fix this? Ping Yetus folks (Allen / Sean)

https://issues.apache.org/jira/browse/HADOOP-14311

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-04-14 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/376/

-1 overall

The following subsystems voted -1: asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shell

[jira] [Created] (HDFS-11656) RetryInvocationHandler may report ANN as SNN in messages.

2017-04-14 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-11656:

Summary: RetryInvocationHandler may report ANN as SNN in messages.
Key: HDFS-11656
URL: https://issues.apache.org/jira/browse/HDFS-11656
Project: Hadoop HDFS

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-04-14 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/288/

-1 overall

The following subsystems voted -1: compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored: cc javac

The following subsystems are con

Hadoop s3 integration for Spark

2017-04-14 Thread Afshin, Bardia
Hello community. I’m considering consuming S3 objects in Hadoop via the s3a protocol. The main purpose of this is to use Spark to access S3, and it seems that the only formal protocol/integration for doing so is Hadoop. The process I am implementing is rather formal and straightforward
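For reference, the usual way to wire Spark to s3a is through the hadoop-aws module: Spark forwards any `spark.hadoop.*` property into the Hadoop `Configuration`, so the s3a filesystem and credentials can be set in `spark-defaults.conf`. A minimal sketch (the credential values and bucket name below are placeholders, and the exact hadoop-aws version must match your Hadoop distribution):

```
# spark-defaults.conf — illustrative s3a settings (values are placeholders)
spark.hadoop.fs.s3a.impl          org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key    YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key    YOUR_SECRET_KEY
```

With that in place, a Spark job can read objects directly with an s3a URI, e.g. `spark.read.text("s3a://my-bucket/path/")`, where `my-bucket` stands in for your actual bucket.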