Upgraded Tez's Hadoop dependency to 3.0.3 and hit this issue. Is anyone else
seeing it?
[ERROR] Failed to execute goal on project hadoop-shim: Could not resolve
dependencies for project org.apache.tez:hadoop-shim:jar:0.10.0-SNAPSHOT:
Failed to collect dependencies at
org.apache.hadoop:hadoop-
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/812/
[Jun 14, 2018 4:58:50 PM] (inigoiri) HDFS-13563. TestDFSAdminWithHA times out
on Windows. Contributed by
[Jun 14, 2018 7:54:21 PM] (eyang) YARN-8410. Fixed a bug in A record lookup by
CNAME record.
Steve Loughran created HADOOP-15546:
---
Summary: ABFS: tune imports & javadocs
Key: HADOOP-15546
URL: https://issues.apache.org/jira/browse/HADOOP-15546
Project: Hadoop Common
Issue Type: Sub-task
Steve Loughran created HADOOP-15545:
---
Summary: ABFS initialize() throws string out of bounds exception
if the URI isn't fully qualified
Key: HADOOP-15545
URL: https://issues.apache.org/jira/browse/HADOOP-15545
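Not from the JIRA itself, but a rough Java sketch of the kind of call that hits this,
assuming hadoop-azure with the ABFS connector is on the classpath. The container and
account names below are made up; the only point is that the abfs URI is not fully
qualified (no ".dfs.core.windows.net" host suffix):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AbfsShortUriSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder container/account names; the URI deliberately omits the
    // fully qualified account host. Per the JIRA summary, initialize() then
    // surfaces a string out-of-bounds exception instead of a clean
    // "invalid URI" error.
    FileSystem fs = FileSystem.get(URI.create("abfs://container@myaccount"), conf);
    System.out.println("Filesystem URI: " + fs.getUri());
  }
}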
yep!
I'll walk through how to find it; skip to "tl;dr:" if you just want the answer.
Start with the "Console output" line in the footer of the QABot post
Console output
https://builds.apache.org/job/PreCommit-HADOOP-Build/14777/console
Search the output for "Checking client artifacts". There'll
For more details, see https://builds.apache.org/job/hadoop-trunk-win/498/
[Jun 13, 2018 4:50:10 PM] (aengineer) HDDS-109. Add reconnect logic for
XceiverClientGrpc. Contributed by
[Jun 13, 2018 6:43:18 PM] (xyao) HDDS-159. RestClient: Implement list
operations for volume, bucket and
[Jun 13, 201
Steve Loughran created HADOOP-15544:
---
Summary: ABFS: validate packing, transient classpath, hadoop fs CLI
Key: HADOOP-15544
URL: https://issues.apache.org/jira/browse/HADOOP-15544
Project: Hadoop Common
There's a patch for https://issues.apache.org/jira/browse/HADOOP-15407 which is
being rejected due to unrelated test failures (probably) and a failure in the shading
Is there a way to get the output of that specific log?
-steve
> S3 bucket. We have the key
> 's3://mybucket/d1/d2/d3/d4/d5/d6/d7' in S3 (d7 being a text file). We also
> have keys
> 's3://mybucket/d1/d2/d3/d4/d5/d6/d7/d8/d9/part_dt=20180615/a.parquet'
> (a.parquet being a file)
> When we run a Spark job to write b.parquet
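The quoted message is cut off there, so the actual failure isn't visible; purely to
make the setup concrete, here's a hedged Java Spark sketch of the kind of write being
described. The destination path (under the same part_dt=20180615/ prefix) and the
append mode are my assumptions, not something stated in the post:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class S3ParquetWriteSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("s3-parquet-write").getOrCreate();
    // Toy dataframe standing in for the real job's output.
    Dataset<Row> df = spark.range(10).toDF("id");
    // Assumed destination: the same partition directory that already holds
    // a.parquet, while d7 higher up the key space also exists as a plain file.
    // The "s3://" scheme is copied from the post; on plain Apache Hadoop this
    // would normally be "s3a://".
    df.write()
      .mode("append")
      .parquet("s3://mybucket/d1/d2/d3/d4/d5/d6/d7/d8/d9/part_dt=20180615/");
    spark.stop();
  }
}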
Sebastian Nagel created HADOOP-15543:
Summary: IndexOutOfBoundsException when reading bzip2-compressed
SequenceFile
Key: HADOOP-15543
URL: https://issues.apache.org/jira/browse/HADOOP-15543
Project: Hadoop Common
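For anyone who wants to poke at this locally, a hedged repro sketch: it writes a
block-compressed SequenceFile with BZip2Codec and reads it straight back. Whether it
actually trips the IndexOutOfBoundsException will depend on the data and sizes
involved, so treat it only as a starting point; the path and record contents are
made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.BZip2Codec;

public class BZip2SequenceFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("/tmp/bzip2-test.seq");   // made-up local path

    BZip2Codec codec = new BZip2Codec();
    codec.setConf(conf);

    // Write a block-compressed SequenceFile using bzip2.
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(path),
        SequenceFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class),
        SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK, codec))) {
      for (int i = 0; i < 100000; i++) {
        writer.append(new IntWritable(i), new Text("record-" + i));
      }
    }

    // Read it back; the read path is where the JIRA reports the exception.
    try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
        SequenceFile.Reader.file(path))) {
      IntWritable key = new IntWritable();
      Text value = new Text();
      long count = 0;
      while (reader.next(key, value)) {
        count++;
      }
      System.out.println("Read " + count + " records");
    }
  }
}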