+1
* Signature: ok
* Checksum : ok
* Rat check (1.8.0_191): ok
- mvn clean apache-rat:check
* Built from source (1.8.0_191): ok
- mvn clean install -DskipTests
Ran a ten node cluster w/ hbase on top running its verification loadings w/
(ge
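The checksum step in the checklist above can be sketched as a small, self-contained shell demo (the file names here are invented stand-ins; a real check runs against the RC tarball and its published .sha512):

```shell
# Create a stand-in artifact, record its SHA-512, then verify it the way
# a release checker would. Names are placeholders, not real RC files.
printf 'hadoop release bits' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
sha512sum -c artifact.tar.gz.sha512 && echo "checksum: ok"
```

A real verification also runs `gpg --verify` against the signer's published key, plus the two mvn commands listed above.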
+1 (binding)
* Signature: ok
* Checksum : ok
* Rat check (1.8.0_191): ok
- mvn clean apache-rat:check
* Built from source (1.8.0_191): ok
- mvn clean install -DskipTests
Poking around in the binary, it looks good. Unpacked site. Looks right.
Chec
+1
Verified checksums, signatures, and rat-check are good.
Built (RC4) locally from source and ran a small hdfs cluster with hbase on
top. Ran an hbase upload w/ chaos and verification and hdfs seemed to do
the right thing.
S
On Mon, Feb 21, 2022 at 9:17 PM Chao Sun wrote:
> Hi all,
>
> Here'
+1 (binding)
* Signature: ok
* Checksum : passed
* Rat check (1.8.0_191): passed
- mvn clean apache-rat:check
* Built from source (1.8.0_191): failed
- mvn clean install -DskipTests
- mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
-Pnative -Drequire.openssl
+1 (binding)
* Signature: ok
* Checksum : ok
* Rat check (10.0.2): ok
- mvn clean apache-rat:check
* Built from source (10.0.2): ok
- mvn clean install -DskipTests
* Unit tests pass (10.0.2): ok
- mvn package -P runAllTests -Dsur
on a local rig here:
[image: image.png]
Stack
On Fri, Jul 29, 2022 at 11:48 AM Steve Loughran
wrote:
> I have put together a release candidate (RC1) for Hadoop 3.3.4
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.4-RC1/
>
> The git tag is re
Thanks for bringing up the topic Wei-Chiu. +1 on a 3.3.1 soon.
Was going to spend time testing
Yours,
S
On Wed, Jan 27, 2021 at 5:28 PM Wei-Chiu Chuang wrote:
> Hi all,
>
> Hadoop 3.3.0 was released half a year ago, and as of now we've accumulated
> more than 400 changes in the branch-3.3.
On Wed, Feb 3, 2021 at 6:41 AM Steve Loughran
wrote:
>
> Regarding blockers &c: how about we have a little hackathon where we try
> and get things in. This means a promise of review time from the people with
> commit rights and other people who understand the code (Stack?)
+1
* I verified src tgz is signed with the key from
https://people.apache.org/keys/committer/weichiu.asc
* Verified hash.
* Built from src w/ -Prelease profile
* Checked CHANGES against git log.
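The last step above can be sketched as a self-contained demo: build a throwaway git repo, dump its log, and diff it against a changelog (the repo contents, JIRA number, and file names are invented for illustration):

```shell
# Throwaway repo standing in for a release branch; one commit standing in
# for a release change. A real check diffs the RC's CHANGES against
# `git log` over the release tag range.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "HDFS-0001. Example fix"
git log --format='%s' > from-git.txt
printf 'HDFS-0001. Example fix\n' > CHANGES.txt
diff from-git.txt CHANGES.txt && echo "CHANGES matches git log"
```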
S
On Thu, May 13, 2021 at 12:55 PM Wei-Chiu Chuang wrote:
> Hello my fellow Hadoop developers,
>
On Tue, Jan 29, 2013 at 12:56 PM, Arun C Murthy wrote:
> Folks,
>
> There has been some discussions about incompatible changes in the
> hadoop-2.x.x-alpha releases on HADOOP-9070, HADOOP-9151, HADOOP-9192 and
> few other jiras. Frankly, I'm surprised about some of them since the
> 'alpha' moniker
es things better since
> no amount of numbering lipstick will make the software better or viable for
> the long-term for both users and other projects. Worse, it will force HBase
> and other projects to deal with *even more* major Hadoop releases... which
> seems like a royal pita.
>
On Fri, Feb 1, 2013 at 3:03 AM, Tom White wrote:
> On Wed, Jan 30, 2013 at 11:32 PM, Vinod Kumar Vavilapalli
> wrote:
> > I still have a list of pending API/protocol cleanup in YARN that need to
> be
> > in before we even attempt supporting compatibility further down the road.
>
>
YARN requires
On Mon, Feb 4, 2013 at 10:46 AM, Arun C Murthy wrote:
> Would it better to have 2.0.3-alpha, 2.0.4-beta and then make 2.1 as a
> stable release? This way we just have one series (2.0.x) which is not
> suitable for general consumption.
>
>
That contains the versioning damage to the 2.0.x set. Th
On Mon, Feb 4, 2013 at 2:14 PM, Suresh Srinivas wrote:
> On Mon, Feb 4, 2013 at 1:07 PM, Owen O'Malley wrote:
>
> > I think that using "-(alpha,beta)" tags on the release versions is a
> really
> > bad idea.
>
>
> Why? Can you please share some reasons?
>
>
We already had a means for denoting 'al
+1
On Sun, Feb 17, 2013 at 1:48 PM, Colin McCabe wrote:
> Hi all,
>
> I would like to merge the HDFS-347 branch back to trunk. It's been
> under intensive review and testing for several months. The branch
> adds a lot of new unit tests, and passes Jenkins as of 2/15 [1]
>
> We have tested HDFS
Folks over at HBase would be interested in helping out.
What does a mentor have to do? I poked around the icfoss link but didn't
see a list of duties (I've been known to be certified blind on occasion).
I am not up on the malleability of hdfs RPC; is it just a matter of adding
the trace info to a p
On Thu, Aug 15, 2013 at 2:15 PM, Arun C Murthy wrote:
> Folks,
>
> I've created a release candidate (rc2) for hadoop-2.1.0-beta that I would
> like to get released - this fixes the bugs we saw since the last go-around
> (rc1).
>
> The RC is available at:
> http://people.apache.org/~acmurthy/hadoo
On Wed, Aug 21, 2013 at 1:25 PM, Colin McCabe wrote:
> St.Ack wrote:
>
> > + Once I figured where the logs were, found that JAVA_HOME was not being
> > exported (don't need this in hadoop-2.0.5 for instance). Adding an
> > exported JAVA_HOME to my running shell which don't seem right but it took
What do others think? See here if you do not have access:
http://goo.gl/j05wkf
It might be a shade darker but I can't tell for sure. It looks way too
close to me.
I'd not think we'd intentionally go out of our way to put a vendor's
signature color on our Apache software.
Asking here before I file
On Wed, Jan 29, 2014 at 3:01 PM, Stack wrote:
> What do others think? See here if you do not have access:
> http://goo.gl/j05wkf
>
> It might be a shade darker but I can't tell for sure. It looks way too
> close to me.
>
> I'd think we'd intentionally
I filed https://issues.apache.org/jira/browse/HDFS-5852 as a blocker. See
what ye all think.
Thanks,
St.Ack
On Wed, Jan 29, 2014 at 3:52 PM, Aaron T. Myers wrote:
> I just filed this JIRA as a blocker for 2.3:
> https://issues.apache.org/jira/browse/HADOOP-10310
>
> The tl;dr is that JNs will
On Wed, Jan 29, 2014 at 4:09 PM, Alejandro Abdelnur wrote:
> IMO we shouldn't be distributing binaries. And if we do so, they should be
> built by a Jenkins job.
>
That would address the second item above.
I filed https://issues.apache.org/jira/browse/HDFS-5852 for the color issue
with a suggested
ems raised here.
On Wed, Jan 29, 2014 at 5:38 PM, Suresh Srinivas wrote:
> Stack,
>
> This seems to me like coloring the good work someone has done with an
> unneeded controversy. Color is a matter of choice. The person who did the
> fine work had all the rights to choose what
On Wed, Jan 29, 2014 at 5:44 PM, Vinod Kumar Vavilapalli wrote:
>
> This is unbelievable.
>
Bad opener Vinod.
> This was not deliberate for all I know.
>
Didn't think so (Didn't say it was).
> It is one of the user-names on a machine and it can be anything.
>
Good. Then it would be ea
On Wed, Jan 29, 2014 at 8:48 PM, Suresh Srinivas wrote:
>
> > Please be more civil in your communique. Your attack dog 'flair' has
> > likely ruined my little survey. No one is going to comment afraid that
> > they'll get their heads cut off.
> >
>
> Right next to imploring civil communique, you
On Wed, Jan 29, 2014 at 9:07 PM, Joe Bounour wrote:
> Hello
>
> I find it fascinating how all the HWX folks jumped on Stack (Not taking any
> side, I am Switzerland/french), many against one
> As a developer, it seems not a relevant topic, true but to be fair,
> Hortonwork, Cl
On Wed, Jan 29, 2014 at 7:31 PM, Arun C Murthy wrote:
>
> Stack,
>
> Apologies for the late response, I just saw this.
>
> On Jan 29, 2014, at 3:33 PM, Stack wrote:
>
> Slightly related, I just ran into this looking back at my 2.2.0 download:
>
> [stack@c20
d
> through in-person discussion of the related parties and move forward.
> After that meeting, a synopsis email could be sent
> if that would help and fit the bigger community.
>
> Regards,
> Mohammad
>
>
>
> On Thursday, January 30, 2014 11:32 AM, Stack wrote:
Sorry for the delay.
On Wed, Jan 29, 2014 at 10:05 PM, Vinod Kumar Vavilapalli wrote:
>
> My response was to your direct association of the green color to HWX green
> as if it were deliberately done. Nobody here intentionally left a vendor's
> signature like you claimed.
I did not do what yo
On Mon, Feb 3, 2014 at 9:26 PM, Chris Douglas wrote:
>
> ...
> Please take this offline. -C
>
>
No problem.
St.Ack
On Sat, Oct 28, 2017 at 2:00 PM, Konstantin Shvachko
wrote:
> Hey guys,
>
> It is an interesting question whether Ozone should be a part of Hadoop.
>
I don't see a direct answer to this question. Is there one? Pardon me if
I've not seen it but I'm interested in the response.
I ask because IMO
Ok with you lot if a few of us open a branch to work on a non-blocking HDFS
client?
Intent is to finish up the old issue "HDFS-9924 [umbrella] Nonblocking HDFS
Access". On the foot of this umbrella JIRA is a proposal by the
heavy-lifter, Duo Zhang. Over in HBase, we have a limited async DFS client
inable. That piece of code
> should be
> maintained in HDFS. I am +1 as a participant in both communities.
>
> On Thu, May 3, 2018 at 9:14 AM, Stack wrote:
>
> > Ok with you lot if a few of us open a branch to work on a
> non-blocking HDFS
> &
On Fri, May 4, 2018 at 5:47 AM, Anu Engineer
wrote:
> Hi Stack,
>
>
>
> Why don’t we look at the design of what is being proposed? Let us post
> the design to HDFS-9924 and then if needed, by all means let us open a new
> Jira.
>
> That will make it easy to understan
Just to close the loop, I just made a branch named HDFS-13572 to match the
new non-blocking issue (after some nice encouragement posted up on the
JIRA).
Thanks,
S
On Tue, May 15, 2018 at 9:30 PM, Stack wrote:
> On Fri, May 4, 2018 at 5:47 AM, Anu Engineer
> wrote:
>
&g
On Tue, Sep 25, 2012 at 11:21 PM, Konstantin Shvachko
wrote:
> I think this is a great work, Todd.
> And I think we should not merge it into trunk or other branches.
> As I suggested earlier on this list I think this should be spun off
> as a separate project or a subproject.
>
I'd be -1 on th
On Wed, Sep 26, 2012 at 4:21 PM, Konstantin Shvachko
wrote:
> Don't understand your argument. Else where?
You suggest users should download HDFS and then go to another project
(or subproject) -- i.e. 'elsewhere' -- to get a fundamental, a fix for
the SPOF. IMO, the SPOF-fix belongs in HDFS core.
On Thu, Sep 27, 2012 at 2:06 AM, Konstantin Shvachko
wrote:
> The SPOF is in HDFS. This project is about shared storage
> implementation, that could be replaced by NFS or BookKeeper or
> something else.
You cannot equate QJM to a solution that requires an NFS filer. A
filer is just not possible
alternatives. Since we can't require that our
> > >> applications compile (or link) against our updated schema, this
> creates
> > a
> > >> problem that PB was supposed to solve.
> > >
> > >
> > > This is scary, and it potentially
file: pb_drops_two.proto. Please use 'syntax =
"proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted
to proto2 syntax.)
input:2:1: Expected identifier, got: 2
Proto 2.5 does same:
$ ~/bin/protobuf-2.5.0/src/protoc --encode=Test pb_drop
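The warning above comes from protoc defaulting to proto2 when a .proto file omits a syntax declaration. A hypothetical sketch of what pb_drops_two.proto could declare to silence it (the field layout is invented; the thread only names the `Test` message type):

```proto
// Hypothetical sketch: declaring the syntax explicitly silences the
// "Defaulted to proto2 syntax" warning quoted above.
syntax = "proto2";

message Test {
  // Field here is invented for illustration only.
  optional int64 id = 1;
}
```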
On Wed, Mar 29, 2017 at 3:12 PM, Chris Douglas
wrote:
> On Wed, Mar 29, 2017 at 1:13 PM, Stack wrote:
> > Is the below evidence enough that pb3 in proto2 syntax mode does not drop
> > 'unknown' fields? (Maybe you want evidence that java tooling behaves the
> &g
On Thu, Mar 30, 2017 at 9:16 AM, Chris Douglas
wrote:
> On Wed, Mar 29, 2017 at 4:59 PM, Stack wrote:
> >> The former; an intermediate handler decoding, [modifying,] and
> >> encoding the record without losing unknown fields.
> >>
> >
> > I did not tr
+1
Downloaded, deployed to small cluster, and then ran an hbase loading on top
of it. Looks good.
Packaging wise, is it intentional that some jars show up a few times? I
can understand webapps bundling a copy but doesn't mapreduce depend on
commons?
share/hadoop/mapreduce/lib/hadoop-annotation
On Thu, Sep 18, 2014 at 12:48 AM, Vinayakumar B
wrote:
> Hi all,
>
> Currently *DFSInputStream* doesn't allow reading a write-in-progress file,
> once all written bytes, by the time of opening an input stream, are read.
>
> To read further update on the same file, needs to be read by opening
> anot
In general +1 on 3.0.0. Its time. If we start now, it might make it out by
2016. If we start now, downstreamers can start aligning themselves to land
versions that suit at about the same time.
While two big items have been called out as possible incompatible changes,
and there is ongoing discussio
I added you fellows to hadoop common and to hadoop hdfs. Shout if it don't
work Zheng Hu.
S
On Mon, Feb 18, 2019 at 7:08 PM OpenInx wrote:
> Dear hdfs-dev:
>
>stakiar has been working on this issue:
> https://issues.apache.org/jira/browse/HDFS-3246, but he
>has no permission to attach hi
+1
On Mon, Oct 19, 2009 at 2:34 PM, Tsz Wo (Nicholas), Sze <
s29752-hadoop...@yahoo.com> wrote:
> DFSClient has a retry mechanism on block acquiring for read. If the number
> of retries reaches a certain limit (defined by
> dfs.client.max.block.acquire.failures), DFSClient will throw a
> Bloc
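The retry limit mentioned above is a client-side HDFS setting; a sketch of tuning it in hdfs-site.xml (the value 5 is illustrative, not a recommendation):

```xml
<!-- Sketch: raising the per-read block-acquire retry limit
     discussed above. -->
<property>
  <name>dfs.client.max.block.acquire.failures</name>
  <value>5</value>
</property>
```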
HDFS-630 is kinda critical to us over in hbase. We'd like to get it into
0.21 (Its been committed to TRUNK). Its probably hard to argue its a
blocker for 0.21. We could run a vote. Or should we just file it against
0.21.1 hdfs and commit it after 0.21 goes out? What would folks suggest?
Witho
I'd like to propose a vote on having hdfs-630 committed to 0.21 (Its already
been committed to TRUNK).
hdfs-630 adds having the dfsclient pass the namenode the names of datanodes
it's determined dead because it got a failed connection when it tried to
contact it, etc. This is useful in the interval
have the improved patch applied to 0.21.
Thanks to all who voted.
St.Ack
On Mon, Dec 14, 2009 at 9:56 PM, stack wrote:
> I'd like to propose a vote on having hdfs-630 committed to 0.21 (Its
> already been committed to TRUNK).
>
> hdfs-630 adds having the dfsclient pass the nam
After chatting with Nicholas, TRUNK was cleaned of
the previous versions of hdfs-630 and we'll likely apply
0001-Fix-HDFS-630-trunk-svn-4.patch, a version of
0001-Fix-HDFS-630-0.21-svn-2.patch that works for TRUNK that includes
the Nicholas suggestions.
On Mon, Dec 14, 2009 at 9:56 PM, stack
Thanks all for voting (and discussing). The ayes have it. I'll go
commit HDFS-630 to 0.21.
St.Ack
On Mon, Jan 25, 2010 at 5:36 AM, Steve Loughran wrote:
> Cosmin Lehene wrote:
>>
>> Steve,
>> A DoS could not be done using excludedNodes.
>>
>> The blacklisting takes place only at DFSClientLevel.
I'd like to open a vote on committing HDFS-927 to both hadoop branch
0.20 and to 0.21.
HDFS-927 "DFSInputStream retries too many times for new block
location" has an odd summary but in short, its a better HDFS-127
"DFSClient block read failures cause open DFSInputStream to become
unusable". HDFS-
ht? We have two HDFS committer
> +1s (Stack and Nicholas) and nonbinding +1s from several others.
>
> Thanks
> -Todd
>
> On Thu, Feb 4, 2010 at 1:30 PM, Tsz Wo (Nicholas), Sze
> wrote:
>>
>> This is a friendly reminder for voting on committing HDFD-927 to 0.20
Please vote on committing HDFS-1024 to the hadoop 0.20 branch.
Background:
HDFS-1024 fixes possible trashing of fsimage because of failed copy
from 2NN and NN. Ordinarily, possible corruption of this proportion
would merit commit w/o need of a vote only Dhruba correctly notes that
UNLESS both NN and
Thanks to all who participated in the vote.
I'll commit in a minute.
St.Ack
On Fri, Apr 2, 2010 at 10:38 AM, Stack wrote:
> Please vote on committing HDFS-1024 to the hadoop 0.20 branch.
>
> Background:
>
> HDFS-1024 fixes possible trashing of fsimage because of failed co
Congrats lads.
St.Ack
On Wed, Jan 5, 2011 at 7:40 PM, Ian Holsman wrote:
> On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome to
> the following people,
> who have all chosen to accept the role of committers on Hadoop.
>
> In no alphabetical order:
>
> - Aaron Kimball
> -
stack created HDFS-9187:
---
Summary: Check if tracer is null before using it
Key: HDFS-9187
URL: https://issues.apache.org/jira/browse/HDFS-9187
Project: Hadoop HDFS
Issue Type: Bug
Components
stack created HDFS-4580:
---
Summary: 0.95 site build failing with
'maven-project-info-reports-plugin: Could not find goal 'dependency-info''
Key: HDFS-4580
URL: https://issues.apache.org/j
stack created HDFS-5852:
---
Summary: Change the colors on the hdfs UI
Key: HDFS-5852
URL: https://issues.apache.org/jira/browse/HDFS-5852
Project: Hadoop HDFS
Issue Type: Bug
Reporter: stack
[
https://issues.apache.org/jira/browse/HDFS-5852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-5852.
-
Resolution: Later
> Change the colors on the hdfs UI
>
>
>
stack created HDFS-13565:
Summary: [um
Key: HDFS-13565
URL: https://issues.apache.org/jira/browse/HDFS-13565
Project: Hadoop HDFS
Issue Type: New Feature
Reporter: stack
--
This
stack created HDFS-13572:
Summary: [umbrella] Non-blocking HDFS Access for H3
Key: HDFS-13572
URL: https://issues.apache.org/jira/browse/HDFS-13572
Project: Hadoop HDFS
Issue Type: New Feature
[
https://issues.apache.org/jira/browse/HDFS-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-13565.
--
Resolution: Invalid
Smile [~ebadger]
Yeah, sorry about that lads. Bad wifi. Resolving as invalid.
>
[
https://issues.apache.org/jira/browse/HDFS-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-4184.
-
Resolution: Invalid
Resolving invalid as not enough detail.
The JIRA subject and description do not seem to
[
https://issues.apache.org/jira/browse/HDFS-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack reopened HDFS-4184:
-
Here, I reopened it for you (in case you can't)
> Add the ability for Client to provide m
stack created HDFS-4203:
---
Summary: After recoverFileLease, datanode gets stuck complaining
block '...has out of data GS may already be committed'
Key: HDFS-4203
URL: https://issues.apache.org/jira/browse
stack created HDFS-4239:
---
Summary: Means of telling the datanode to stop using a sick disk
Key: HDFS-4239
URL: https://issues.apache.org/jira/browse/HDFS-4239
Project: Hadoop HDFS
Issue Type
stack created HDFS-11368:
Summary: LocalFS does not allow setting storage policy so spew
running in local mode
Key: HDFS-11368
URL: https://issues.apache.org/jira/browse/HDFS-11368
Project: Hadoop HDFS
stack created HDFS-6047:
---
Summary: TestPread NPE inside in DFSInputStream
hedgedFetchBlockByteRange
Key: HDFS-6047
URL: https://issues.apache.org/jira/browse/HDFS-6047
Project: Hadoop HDFS
Issue Type
stack created HDFS-6803:
---
Summary: Documenting DFSClient#DFSInputStream expectations reading
and preading in concurrent context
Key: HDFS-6803
URL: https://issues.apache.org/jira/browse/HDFS-6803
Project
[
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack reopened HDFS-14585:
--
Reopening. Commit message was missing the JIRA # so revert and reapply with
fixed commit message.
> Backp
[
https://issues.apache.org/jira/browse/HDFS-14585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-14585.
--
Resolution: Fixed
Reapplied w/ proper commit message. Re-resolving.
> Backport HDFS-8901 Use ByteBuffer
: 827883
Node Kind: directory
Schedule: normal
Last Changed Author: szetszwo
Last Changed Rev: 826906
Last Changed Date: 2009-10-20 00:16:25 +0000 (Tue, 20 Oct 2009)
Reporter: stack
Running some loadings on hdfs I had one of these on the DN XX.XX.XX.139:51010:
{code}
2009-10-21 04:57
Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68
Revision: 827883
Node Kind: directory
Schedule: normal
Last Changed Author: szetszwo
Last Changed Rev: 826906
Last Changed Date: 2009-10-20 00:16:25 +0000 (Tue, 20 Oct 2009)
Reporter: stack
Running some loading tests against hdfs branch-0.21
[
https://issues.apache.org/jira/browse/HDFS-721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-721.
Resolution: Invalid
Working as designed. Closing.
> ERROR Block blk_XXX_1030 already exists in state RBW and t
[
https://issues.apache.org/jira/browse/HDFS-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack resolved HDFS-720.
Resolution: Fixed
Fix Version/s: 0.21.0
Resolving as fixed by HDFS-690. I just ran my tests with hdfs-690 in
[
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack reopened HDFS-630:
Reopening so can submit improved patch.
> In DFSOutputStream.nextBlockOutputStream(), the client can exclude speci
ct: Hadoop HDFS
Issue Type: Task
Reporter: stack
This issue is about forward porting from branch-0.20-append the little namenode
api that facilitates stealing of a file's lease. The forward port would be an
adaption of hdfs-1520 and its companion patches, hdfs-1555 an
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
Affects Versions: 0.20-append, 0.22.0, 0.23.0
Reporter: stack
Priority: Critical
We are seeing the following issue around recoverLease over in hbaselandia.
DFSClient calls recoverLease to
[
https://issues.apache.org/jira/browse/HDFS-16540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Stack resolved HDFS-16540.
--
Hadoop Flags: Reviewed
Resolution: Fixed
Merged to branch-3.3 and to trunk.
> D
Michael Stack created HDFS-16586:
Summary: Purge FsDatasetAsyncDiskService threadgroup; it causes
BPServiceActor$CommandProcessingThread IllegalThreadStateException 'fatal
exception and exit'
Key:
[
https://issues.apache.org/jira/browse/HDFS-16586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Stack resolved HDFS-16586.
--
Fix Version/s: 3.4.0
3.2.4
3.3.4
Hadoop Flags
[
https://issues.apache.org/jira/browse/HDFS-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Stack resolved HDFS-16684.
--
Hadoop Flags: Reviewed
Resolution: Fixed
Merged to trunk and branch-3.3. Resolving
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
Affects Versions: 0.20.205.1
Reporter: stack
Assignee: stack
The below commit broke hdfs-826 for hbase in 205 rc1. It changes the
accessibility from public to package private on