+1 (binding)
On Thu, Jun 3, 2021 at 1:14 AM Akira Ajisaka wrote:
> Dear Hadoop developers,
>
> Given the feedback from the discussion thread [1], I'd like to start
> an official vote
> thread for the community to vote and start the 3.1 EOL process.
>
> What this entails:
>
> (1) an official anno
When Datanode was initially designed, Linux AIO was still early in its
adoption. Kernel support was there and the libraries were almost there. No
java support, of course. We would have to write a lot of native code for
it and use JNI. Also, AIO means bypassing kernel page cache since you are
doing
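For context, this is roughly what asynchronous file reads look like with the API Java eventually added (AsynchronousFileChannel, Java 7+). It is a minimal sketch with a placeholder path; note that this API is typically backed by a thread pool rather than kernel AIO and does not bypass the page cache, so it does not change the trade-off described above.

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncReadSketch {
  public static void main(String[] args) throws Exception {
    // Async read without JNI; on Linux this is usually a thread pool under the
    // hood, not io_submit/io_getevents, and reads still go through the page cache.
    try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
        Paths.get("/tmp/example.dat"), StandardOpenOption.READ)) {  // placeholder path
      ByteBuffer buf = ByteBuffer.allocate(4096);
      Future<Integer> pending = ch.read(buf, 0);       // read 4 KB at offset 0
      System.out.println("bytes read: " + pending.get());
    }
  }
}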
Successfully running on 1,000 clusters over 5 years proves the feature is
stable. It does not, however, give me assurance that it will perform well
in our env.
It would be nice if there were some data on its performance. One obvious
concern is running into grossly unbalanced I/O load among drives. S
to me.
>
> Also worth noting the implementation doesn't try to achieve very fine-
> grained balance in space. As long as all volumes have the available space
> within a threshold (10GB by default),
> it falls back to round-robin policy.
>
> On Tue, May 5, 202
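The quoted policy is typically enabled through datanode configuration. A minimal sketch, assuming the usual key names (dfs.datanode.fsdataset.volume.choosing.policy and the balanced-space threshold key); verify the exact names and defaults against hdfs-default.xml for the release in use.

import org.apache.hadoop.conf.Configuration;

public class VolumePolicySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Switch from the default round-robin policy to the available-space policy
    // (key and class names assumed; check hdfs-default.xml for your release).
    conf.set("dfs.datanode.fsdataset.volume.choosing.policy",
        "org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy");
    // Volumes whose free space is within this threshold (bytes) of each other are
    // considered balanced and picked round-robin; 10GB matches the default above.
    conf.setLong(
        "dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold",
        10L * 1024 * 1024 * 1024);
    System.out.println(conf.get("dfs.datanode.fsdataset.volume.choosing.policy"));
  }
}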
Gabor,
If you want to release ASAP, you can simply revert HDFS-14941 in the
release branch for now. It is causing the issue and was committed after
3.1.3. It breaks the automated upgrade process and causes a namenode
memory leak.
Kihwal
On Tue, Jun 23, 2020 at 8:47 AM Akira Ajisaka wrote:
Which layout change are you referring to? The only layout change I know of
was done in 2.7, IIRC. We backported that to 2.6 and did not see any
adverse effects at that time.
Is datanode using more heap all the time? Or is it running into trouble
when generating full block reports?
Kihwal
On Mon,
To be clear, we are running 2.8 and 2.10. Although we don't see any issues,
I am curious whether the change in heap usage is amplified on dense
datanodes.
Kihwal
On Tue, Oct 6, 2020 at 5:00 PM Kihwal Lee wrote:
> Which layout change are you referring to? The only layout change I know
tell which commit increases heap usage worse during upgrade.
>
>
>
> On Tue, Oct 6, 2020 at 3:01 PM Kihwal Lee wrote:
>
>> Which layout change are you referring to? The only layout change I know
>> of was done in 2.7, IIRC. We backported that to 2.6 and did not see any
>
+1 for the 100 char limit.
But I would have liked 132 columns more. :)
Kihwal
On Mon, May 24, 2021 at 1:46 PM Sean Busbey
wrote:
> Hi folks!
>
> The consensus seems to be pretty strongly in favor of increasing the line length
> limit. Do folks still want to see a formal VOTE thread?
>
>
> > On May 1
sun.security.krb5.KrbApReq was creating a static MD5 digest object and not
synchronizing access.
This has been fixed in jdk8u60.
http://hg.openjdk.java.net/jdk8u/jdk8u60/jdk/rev/02d6b1096e89
One of the visible symptoms is the RPC reader thread getting
ArrayIndexOutOfBoundsException from
sun.securi
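To illustrate the class of bug (this is not the JDK code itself, just a minimal sketch with made-up names): a single MessageDigest shared across threads without synchronization corrupts its internal state, which is exactly how it shows up as stray exceptions in other threads.

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SharedDigestSketch {
  // Anti-pattern: one MD5 instance shared by every thread, as in pre-8u60 KrbApReq.
  private static final MessageDigest SHARED_MD5 = newMd5();

  private static MessageDigest newMd5() {
    try {
      return MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
      throw new AssertionError(e);
    }
  }

  // Unsafe: concurrent update()/digest() calls interleave and can corrupt the
  // digest's internal buffer, producing wrong hashes or runtime exceptions.
  static byte[] unsafeHash(byte[] data) {
    SHARED_MD5.reset();
    SHARED_MD5.update(data);
    return SHARED_MD5.digest();
  }

  // One fix: serialize access (a per-thread or per-call MessageDigest also works).
  static synchronized byte[] safeHash(byte[] data) {
    SHARED_MD5.reset();
    SHARED_MD5.update(data);
    return SHARED_MD5.digest();
  }
}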
I am not sure whether it was mentioned by anyone before, but I noticed that
client-only changes do not trigger running any test in hdfs-precommit. This is
because hadoop-hdfs-client does not have any tests.
Kihwal
From: Colin P. McCabe
To: "hdfs-dev@hadoop.apache.org"
Cc: "common-...@hadoo
I think a lot of "client-side" tests use MiniDFSCluster. I know mechanical
division is possible, but what about test coverage?
Kihwal
From: Haohui Mai
To: hdfs-dev@hadoop.apache.org; Kihwal Lee
Cc: "common-...@hadoop.apache.org"
Sent: Friday, October 23, 2015
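A sketch of why the division is hard: even a test that only exercises the client API needs MiniDFSCluster, which lives with the server code. Class and path names below are illustrative.

import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestClientAgainstMiniCluster {
  @Test
  public void testCreateAndExists() throws Exception {
    Configuration conf = new Configuration();
    // The in-process namenode + datanode come from the server modules, so a
    // "client-side" test like this cannot live in hadoop-hdfs-client alone.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/test/file");
      fs.create(p).close();     // the client code path under test
      assertTrue(fs.exists(p));
    } finally {
      cluster.shutdown();
    }
  }
}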
I think we need HDFS-8950 and HDFS-7725 in 2.7.2. It should be easy to
backport/cherry-pick HDFS-7725. For HDFS-8950, it will be nice if Ming can
chime in.
Kihwal
From: Tsuyoshi Ozawa
To: "common-...@hadoop.apache.org"
Cc: Chris Nauroth ; "yarn-...@hadoop.apache.org"
; "hdfs-dev@hadoop
I will try to get them in or bug Daryn. HDFS-8498 doesn't seem to be a new bug, so I
kicked it out to 2.7.3.
Kihwal
From: Vinod Vavilapalli
To: "common-...@hadoop.apache.org" ; Kihwal Lee
Cc: "hdfs-dev@hadoop.apache.org" ; Chris Nauroth
; "yarn-...@ha
We found HDFS-9426. The rolling upgrade finalization is not backward
compatible, i.e., 2.7.1 or 2.6.x datanodes will ignore finalization.
So -1.
Kihwal
From: Vinod Kumar Vavilapalli
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; mapreduce-...@hado
+1 (binding)
- Verified the signature/digest.
- I've built the dist tree with the native support from the source.
- Brought up a single node cluster and ran a set of basic tests.
From: Vinod Kumar Vavilapalli
To: Hadoop Common ; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; ma
+1
Verified checksums. Built from source, deployed and ran a couple of jobs.
Kihwal
On 2/6/13 9:59 PM, "Arun C Murthy" wrote:
>Folks,
>
>I've created a release candidate (rc0) for hadoop-2.0.3-alpha that I
>would like to release.
>
>This release contains several major enhancements such as QJM
+1 I've checked out, built and deployed release-0.23.7-rc0 to a single
node cluster. Ran a couple of sample jobs.
Kihwal
On 4/11/13 2:55 PM, "Thomas Graves" wrote:
>I've created a release candidate (RC0) for hadoop-0.23.7 that I would like
>to release.
>
>This release is a sustaining release wi
+1 I've downloaded & built the RC and ran several tests on a single node
cluster.
Kihwal
On 5/28/13 11:00 AM, "Thomas Graves" wrote:
>
>I've created a release candidate (RC0) for hadoop-0.23.8 that I would like
>to release.
>
>This release is a sustaining release with several important bug fixe
+1
Built from source and ran a couple of jobs in a pseudo-distributed cluster.
Kihwal
On 6/3/13 2:51 PM, "Konstantin Boudnik" wrote:
>I have rolled out release candidate (rc2) for hadoop-2.0.5-alpha.
>
>The difference between rc1 and rc2 is the "optimistic release date" is
>set for
>06/06/2013
It seems the transition within hadoop needs to happen before deprecating
FileSystem. I think createNonRecursive() was marked deprecated from the
beginning under the assumption that this transition would happen soon. We
know that hasn't been the case. So the question is, do we transition the
portion
For the ext3 bug Colin mentioned, see
https://bugzilla.redhat.com/show_bug.cgi?id=592961. This was fixed in
2.6.32 and backported to RHEL 5.4 (or CentOS). This has more to do with
file data and affects NN more. Since NN preallocates blocks for edits,
almost all data writes are done without modifyin
+1 Downloaded it and ran several sample tests in a pseudo-distributed
cluster.
Kihwal
On 7/1/13 12:20 PM, "Thomas Graves" wrote:
>I've created a release candidate (RC0) for hadoop-0.23.9 that I would like
>to release.
>
>The RC is available at:
>http://people.apache.org/~tgraves/hadoop-0.23.9-
Another blocker, HADOOP-9850, has been committed.
Kihwal
From: Arun C Murthy
To: Daryn Sharp
Cc: "" ;
"mapreduce-...@hadoop.apache.org" ;
"yarn-...@hadoop.apache.org" ;
"common-...@hadoop.apache.org"
Sent: Thursday, August 1, 2013 1:30 PM
Subject: Re: [
Sorry to hijack the thread but, I also wanted to mention Avro. See HADOOP-9672.
The version we are using has memory leak and inefficiency issues. We've seen
users running into it.
Kihwal
From: Tsuyoshi OZAWA
To: "common-...@hadoop.apache.org"
Cc: "hdfs-dev@h
We have found HADOOP-9880, which prevents Namenode HA from running with
security.
Kihwal
From: Arun C Murthy
To: "common-...@hadoop.apache.org" ;
"hdfs-dev@hadoop.apache.org" ;
"mapreduce-...@hadoop.apache.org" ;
"yarn-...@hadoop.apache.org"
Sent: Thursda
It's your call, Arun. I.e. as long as you believe rc2 meets the expectations and
objectives of 2.1.0-beta.
Kihwal
From: Arun Murthy
To: "common-...@hadoop.apache.org"
Cc: Kihwal Lee ; "mapreduce-...@hadoop.apache.org"
; "hdfs-d
I've changed the target version of HADOOP-9880 to 2.1.1. Please change it
back, if you feel that it needs to be in 2.1.0-beta.
Kihwal
From: Kihwal Lee
To: Arun Murthy ; "common-...@hadoop.apache.org"
Cc: "mapreduce-...@hadoop.apac
+1 Ran a set of tests against a single node cluster.
Kihwal
On Monday, October 7, 2013 2:02 AM, Arun C Murthy wrote:
Folks,
I've created a release candidate (rc0) for hadoop-2.2.0 that I would like to
get released - this release fixes a small number of bugs and some protocol/api
issues w
Missing blocks are the blocks with no valid replicas. I.e. data is not
available. Corrupt blocks are the ones with at least one corrupt replica. They
may have healthy live replicas that are not corrupt, which can be read by users
or replicated by the system to replace the corrupt replicas. "Mis
I've built the tag and ran some basic tests on a single node cluster.
+1 (binding)
Kihwal
On Tuesday, December 3, 2013 12:24 AM, Thomas Graves
wrote:
Hey Everyone,
There have been lots of improvements and bug fixes that have gone into
branch-0.23 since the 0.23.9 release. We think it's ti
+1 (binding)
- checked out the rc1 tag and built the source (-Pdist -Pnative)
- brought up a pseudo distributed cluster
- ran sample MR jobs
- verified web UIs working.
On Thu, Dec 7, 2017 at 9:22 PM, Konstantin Shvachko
wrote:
> Hi everybody,
>
> I updated CHANGES.txt and fixed documentation li
HADOOP-14060 is a blocker. Daryn will add more detail to the jira or to
this thread.
On Thu, Feb 8, 2018 at 7:01 AM, Brahma Reddy Battula
wrote:
> Hi Eddy,
>
> HDFS-12990 got committed to 3.0.1,can we have RC for 3.0.1 (only YARN-5742
> blocker is open ) ?
>
>
> On Sat, Feb 3, 2018 at 12:40 A
Simple commit builds are failing often
https://builds.apache.org/job/Hadoop-trunk-Commit/
Many trunk builds are failing on H19.
"protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' "
On H4, a cmake version problem was seen.
The commit builds don't seem to be running in a docker con
+1 (binding)
- Built from source
- Brought up a single node cluster
- Checked basic HDFS functions and checked the UIs
- Ran several simple jobs.
On Mon, Sep 10, 2018 at 7:01 AM 俊平堵 wrote:
> Hi all,
>
> I've created the first release candidate (RC0) for Apache
> Hadoop 2.8.5. This is our
Are you using CMS? How big is young gen?
How often does the NN do young gen collection when it is slow?
On Tue, Sep 25, 2018 at 4:04 AM Lin,Yiqun(vip.com)
wrote:
> Hi hdfs developers:
>
> We meet a bad problem after rolling upgrade our hadoop version from
> 2.5.0-cdh5.3.2 to 2.6.0-cdh5.13.1. The
+1 (binding)
Checked out the source and built. Ran basic hdfs and mapred tests on a
single node cluster.
Kihwal
From: Vinod Kumar Vavilapalli
To: Hadoop Common ; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Sent: Thursday, January 14, 2016 10:
Sergey,
The release 2.7.2 is being voted on. It should get released very soon. The patch
applies to 2.7.1 cleanly, so you could try it if you can do the build and dist on
your own.
Thanks,
Kihwal
From: Сергей Казаков
To: hdfs-dev@hadoop.apache.org
Sent: Monday, January 25, 2016 12:35 AM
It's still here:
https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/fields
But somehow not showing up on pages.
Kihwal
From: Arpit Agarwal
To: "common-...@hadoop.apache.org" ;
"hdfs-dev@hadoop.apache.org"
Sent: Friday, February 12, 2016 1:52 PM
Subject: 'Target Versi
Moving Hadoop 3 forward sounds fine. If EC is one of the main motivations, are
we getting rid of branch-2.8?
Kihwal
From: Andrew Wang
To: "common-...@hadoop.apache.org"
Cc: "yarn-...@hadoop.apache.org" ;
"mapreduce-...@hadoop.apache.org" ; hdfs-dev
Sent: Thursday, February 18, 201
Just reverted HDFS-8791 from branch-2.7.
Eulogy: Although it has ascended to a better version, it did catch an upgrade
bug while in branch-2.7.
Kihwal
From: Vinod Kumar Vavilapalli
To: yarn-...@hadoop.apache.org
Cc: Hadoop Common ; hdfs-dev@hadoop.apache.org;
mapreduce-...@hadoop.apach
Kun,
(1) The client-facing counterpart of federation is ViewFileSystem, aka the
client-side mount table.
(2) Federation is supported in 2.7. There are test cases bringing up a federated
mini cluster, so I assume setting up a pseudo-distributed cluster is possible. I
am not sure whether all support sc
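A minimal sketch of the client-side mount table mentioned in (1), using the fs.viewfs.mounttable link keys; the cluster name, namenode addresses and paths are placeholders.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsMountSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Map client-visible paths onto the namespaces of a federated cluster.
    // "ClusterX" and the nn1/nn2 addresses are hypothetical.
    conf.set("fs.viewfs.mounttable.ClusterX.link./user",
        "hdfs://nn1.example.com:8020/user");
    conf.set("fs.viewfs.mounttable.ClusterX.link./data",
        "hdfs://nn2.example.com:8020/data");
    FileSystem viewFs = FileSystem.get(URI.create("viewfs://ClusterX/"), conf);
    // Paths resolve through the mount table to the backing namespace.
    System.out.println(viewFs.exists(new Path("/user/someone")));
  }
}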
For name node, start from NameNodeRpcServer.
From: Kun Ren
To: hdfs-dev@hadoop.apache.org
Sent: Thursday, April 28, 2016 4:51 PM
Subject: handlerCount
Hi Genius,
I have a quick question:
I remembered I saw the default value for HandlerCount is 10 (the number of
Handler threads), but
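For orientation, a hedged sketch of where that default comes from: the namenode RPC server sizes its handler pool from configuration, and the key has historically defaulted to 10 (key name assumed; confirm against hdfs-default.xml and NameNodeRpcServer in your source tree).

import org.apache.hadoop.conf.Configuration;

public class HandlerCountSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // NameNodeRpcServer reads a handler-count key like this when building its
    // RPC server; 10 is the long-standing default.
    int handlerCount = conf.getInt("dfs.namenode.handler.count", 10);
    System.out.println("RPC handler threads: " + handlerCount);
  }
}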
You might be issuing the refresh command against the dataXfer port, not the rpc
port of the datanode.
-Kihwal
From: Ajith shetty
To: "hdfs-dev@hadoop.apache.org"
Sent: Wednesday, April 1, 2015 1:43 AM
Subject: [Federation setup] Adding a new name node to federated cluster
Hi all,
+1 (binding)
Built the source from the tag and ran basic tests on a pseudo-distributed
cluster.
Kihwal
From: Vinod Kumar Vavilapalli
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Cc: vino...@apache.org
Sent: Frida
0.21 is not actively maintained.
The bug was fixed in other active branches (0.20-security, 0.22, 0.23, etc.)
and trunk.
Kihwal
On 10/12/11 7:50 PM, "Beckham007" wrote:
Hi,
Today, I use org.apache.hadoop.net.NetUtils Class to change IP address and
hostname.
normalizeHostName
but I think the
#3073 is stuck. It has been saying it's running TestHASafeMode for more than
three hours. It's not timing out.
Can somebody with Jenkins access please bounce this?
al on the master or slave would have made a difference.
Kihwal
On 8/22/12 11:49 PM, "Kihwal Lee" wrote:
>#3073 is stuck. It has been saying it's running TestHASafeMode for more
>than three hours. It's not timing out.
>
>Can somebody with Jenkins access please bounce this?
>
>
+1
From: Brahma Reddy Battula
To: "Gangumalla, Uma" ; Rakesh Radhakrishnan
; Sijie Guo
Cc: "d...@bookkeeper.apache.org" ; Uma gangumalla
; Vinayakumar B ;
"hdfs-dev@hadoop.apache.org" ;
"u...@hadoop.apache.org" ; "u...@bookkeeper.apache.org"
Sent: Thursday, July 28, 2016 4:21 AM
You might want the snapshot bug fix done in HDFS-7056. This bug creates
snapshot filediffs even if you never use snapshots. For 2.6, we will have to do
it in a separate jira to pick up the fix only. Related to this, HDFS-9696 might
be of interest too.
Kihwal
From: Chris Trezzo
To: Karthi
Done
From: Sean Busbey
To: "hdfs-dev@hadoop.apache.org"
Sent: Tuesday, August 16, 2016 1:05 PM
Subject: can someone add me to contributors for the HDFS jira?
Hi!
I'm already in the contributor role on the HADOOP jira, could someone
add me as one on the HDFS jira? I'd like the abil
Is the system busy with I/O when it happens? Any other I/O activities preceding
the event? In your case DistCp could have generated extra edits and also
namenode daemon and audit log entries. Depending on configuration, dirty pages
can pile up quite a bit on Linux systems with a large memory an
I just noticed this during a trunk build. I was doing "mvn clean install
-DskipTests". The build succeeds.
Is anyone seeing this? I am using openjdk8u102.
===
[WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class in
JarAnalyzer File
/home1/kihwal/devel/apache/hadoo
fault locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"
FYI
On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee
wrote:
> I just noticed this during a trunk build. I was doing "mvn clean install
&g
on did the workaround.
I will file a jira to update maven-project-info-reports-plugin.
Kihwal
From: Arun Suresh
To: Kihwal Lee
Cc: Ted Yu ; Hdfs-dev ; Hadoop
Common
Sent: Thursday, September 29, 2016 6:58 PM
Subject: Re: Is anyone seeing this during trunk build?
It looks l
It looks like the NN is having trouble after reading in the VERSION file, which
contains the new layout version. We can continue the discussion in the jira.
Kihwal
From: Ravi Prakash
To: Dinesh Kumar Prabakaran
Cc: user ; hdfs-dev
Sent: Thursday, October 13, 2016 3:29 PM
Subject: R
Hi Vinay,
If I rephrase the question,
Does a RU rollback snapshot provide a consistent snapshot of the distributed
file system?
I don't think we aimed for it to be a completely consistent snapshot. It is meant
to be a safe place to go back with the old version of software. This is
normally used as
Kihwal
From: Vinayakumar B
To: Kihwal Lee ; "hdfs-dev@hadoop.apache.org"
Sent: Friday, May 26, 2017 5:05 AM
Subject: RE: RollingUpgrade Rollback openfiles issue
Thanks Kihwal,
Found another case, which looks more dangerous.
1. File written and closed. FINALIZ
Thanks for driving the next 2.8 release, Junping. While I was committing a
blocker for 2.7.4, I noticed some of the jiras are back-ported to 2.7, but
missing in branch-2.8.2. Perhaps it is safer and easier to simply rebranch
2.8.2.
Thanks,
Kihwal
On Thursday, July 20, 2017, 3:32:16 PM CDT, Junp
> The other remaining works are:
> - Revise the design doc
> - Post a test plan (Haohui is working on it.)
> - Execute the manual tests (Haohui and Fengdong will work on it.)
>
> The work was a collective effort of Nathan Roberts, Sanjay Radia, Suresh
> Srinivas, Kihwal Lee, Jing
If we ever respin 2.4.1, I strongly suggest HDFS-6527 be included.
Kihwal
On 6/19/14, 4:56 PM, "Akira AJISAKA" wrote:
>I think we should include this issue in 2.4.1, so I uploaded a patch to
>fix it. I'll appreciate your review.
>
>Thanks,
>Akira
>
>(2014/06/18 12:13), Vinod Kumar Vavilapalli
Checked out the source, built and started a single node cluster.
Ran a couple of sample jobs.
+1 (binding)
Kihwal
On 6/19/14, 10:14 AM, "Thomas Graves"
wrote:
>Hey Everyone,
>
>There have been various bug fixes that have gone into
>branch-0.23 since the 0.23.10 release. We think it's time to d
>> Thanks for the feedback Vinod, Akira & Kihwal.
>>
>> I'll re-spin rc1 with MAPREDUCE-5830 & HDFS-6527.
>>
>> @Kihwal - Can you, please, merge HDFS-6527 to branch-2.4 and
>>branch-2.4.1?
>>
>> thanks,
>> Arun
>>
>> On Ju
+1 (binding)
Kihwal
On 6/24/14, 3:53 AM, "Arun C Murthy" wrote:
>Folks,
>
> As discussed, I'd like to call a vote on changing our by-laws to change
>release votes from 7 days to 5.
>
> I've attached the change to by-laws I'm proposing.
>
> Please vote, the vote will run the usual period of 7 days.
Now builds are failing because of this. Please make sure the build works with
"-Pnative".
[exec] CMake Error at
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108
(message):
[exec] Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
[exec] Call Stack (most recent
+1 (binding)
Kihwal
On 8/8/14, 9:57 PM, "Karthik Kambatla" wrote:
>I have put together this proposal based on recent discussion on this
>topic.
>
>Please vote on the proposal. The vote runs for 7 days.
>
> 1. Migrate from subversion to git for version control.
> 2. Force-push to be disabled
I am sure many of us have seen strange Jenkins behavior out of the precommit
builds:
- build artifacts missing
- serving a build artifact belonging to another build, which also causes wrong
precommit results to be posted on the bug
- etc.
The latest one I saw is the disappearance of the unit test st
I suggest testing without federation first. That is, running two separate
yarn/hdfs instances. Once that works properly, you can introduce federation in
your config.
Kihwal
From: xeonmailinglist
To: hdfs-dev@hadoop.apache.org
Sent: Tuesday, March 3, 2015 4:32 AM
Subject: Questions abo
Federation is for sharing the data node storage spaces, not for sharing the
data among multiple clusters. Check out viewfs instead.
From: xeonmailinglist-yahoo
To: hdfs-dev@hadoop.apache.org
Sent: Tuesday, March 3, 2015 12:31 PM
Subject: Re: Questions about HDFS federation
I would
Hi Wei-Chiu,
We have experience with 5,000 - 6,000 node clusters. Although it ran/runs
fine, any heavy-hitter activities such as decommissioning needed to be
carefully planned. In terms of files and blocks, we have multiple
clusters running stably with over 500M files and blocks. Some at over
It just means that hash collisions will be more frequent above the
capacity, causing more walks on hash chains. It is unclear at what level
you will see meaningful impact. In any case, it is not really a limit.
Kihwal
On Tue, Jun 25, 2019 at 2:17 AM Lars Francke wrote:
> Hi,
>
> I stumbled upo
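A back-of-the-envelope illustration of the point, assuming uniform hashing: the average chain length is simply entries divided by buckets, so exceeding the configured capacity degrades lookups gradually rather than imposing a hard limit. The numbers below are arbitrary.

public class ChainLengthSketch {
  public static void main(String[] args) {
    long buckets = 64L * 1024 * 1024;   // hypothetical hash table capacity
    long[] entryCounts = {32_000_000L, 64_000_000L, 128_000_000L, 256_000_000L};
    for (long entries : entryCounts) {
      // Expected chain length equals the load factor; going past "capacity"
      // just means the average lookup walks a longer chain.
      double avgChain = (double) entries / buckets;
      System.out.printf("%,d entries -> average chain length %.2f%n", entries, avgChain);
    }
  }
}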
+1
Kihwal
On Tue, Aug 20, 2019 at 10:03 PM Wangda Tan wrote:
> Hi all,
>
> This is a vote thread to mark any versions smaller than 2.7 (inclusive),
> and 3.0 EOL. This is based on discussions of [1]
>
> This discussion runs for 7 days and will conclude on Aug 28 Wed.
>
> Please feel free to sha
That's not supposed to happen. What version of Hadoop are you using? It
Please file a jira with details including how the namenodes are configured.
For the recovery:
First and foremost, do not shut down the active namenode. Put it into safe
mode and issue a saveNamespace command to create a checkp
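A hedged sketch of the programmatic equivalents of that recovery step ("hdfs dfsadmin -safemode enter" followed by "hdfs dfsadmin -saveNamespace"); the exact class and enum locations vary by release, and this assumes fs.defaultFS already points at the cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class CheckpointSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    // Enter safe mode so the namespace stops changing, then write a fresh
    // fsimage checkpoint without shutting the active namenode down.
    dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
    dfs.saveNamespace();
  }
}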
Doing CRC32 on a huge data block also reduces its error detection
capability.
If you need more information on this topic, this paper will be a good
starting point:
http://www.ece.cmu.edu/~koopman/networks/dsn02/dsn02_koopman.pdf
Kihwal
On 6/24/11 9:50 AM, "Doug Cutting" wrote:
> A smaller c
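To make the point concrete, a small sketch using plain java.util.zip.CRC32: checksumming fixed-size chunks (512 bytes here, mirroring HDFS's default bytes-per-checksum) keeps the data covered by each 32-bit CRC small, which preserves its error-detection strength compared to one CRC over a whole multi-megabyte block.

import java.util.zip.CRC32;

public class ChunkedCrcSketch {
  // One CRC per small chunk instead of one CRC over the entire block.
  static long[] crcPerChunk(byte[] block, int bytesPerChecksum) {
    int chunks = (block.length + bytesPerChecksum - 1) / bytesPerChecksum;
    long[] crcs = new long[chunks];
    CRC32 crc = new CRC32();
    for (int i = 0; i < chunks; i++) {
      int off = i * bytesPerChecksum;
      int len = Math.min(bytesPerChecksum, block.length - off);
      crc.reset();
      crc.update(block, off, len);
      crcs[i] = crc.getValue();
    }
    return crcs;
  }

  public static void main(String[] args) {
    byte[] block = new byte[4096];            // stand-in for block data
    long[] crcs = crcPerChunk(block, 512);    // 512 mirrors dfs.bytes-per-checksum
    System.out.println("chunks checksummed: " + crcs.length);
  }
}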
Does the backup process include syncing? On-drive write cache can also trick
you.
For absolutely critical data, it is a good idea to use a controller with
battery-backed write cache or a service/product that guarantees durability.
Kihwal
On 9/22/11 3:48 AM, "Gabi Kazav" wrote:
Hi,
I had Powe
[
https://issues.apache.org/jira/browse/HDFS-16123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-16123.
---
Resolution: Invalid
> NameNode standby checkpoint is abnormal, because the data disk is full, e
Kihwal Lee created HDFS-16127:
-
Summary: Improper pipeline close recovery causes a permanent write
failure or data loss.
Key: HDFS-16127
URL: https://issues.apache.org/jira/browse/HDFS-16127
Project
[
https://issues.apache.org/jira/browse/HDFS-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-5441.
--
Resolution: Won't Fix
It moved on to Jetty. No longer relevant.
> Wrong use of catalina
Kihwal Lee created HDFS-14948:
-
Summary: Improve HttpFS Server
Key: HDFS-14948
URL: https://issues.apache.org/jira/browse/HDFS-14948
Project: Hadoop HDFS
Issue Type: Improvement
Kihwal Lee created HDFS-14949:
-
Summary: HttpFS does not support getServerDefaults()
Key: HDFS-14949
URL: https://issues.apache.org/jira/browse/HDFS-14949
Project: Hadoop HDFS
Issue Type
[
https://issues.apache.org/jira/browse/HDFS-4935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-4935.
--
Resolution: Won't Fix
Symlink is currently disabled due to security reasons and bugs.
> add
Kihwal Lee created HDFS-15181:
-
Summary: Webhdfs getTrashRoot() caused internal
AccessControlException
Key: HDFS-15181
URL: https://issues.apache.org/jira/browse/HDFS-15181
Project: Hadoop HDFS
[
https://issues.apache.org/jira/browse/HDFS-15147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15147.
---
Hadoop Flags: Reviewed
Resolution: Fixed
> LazyPersistTestCase wait logic is error-pr
Kihwal Lee created HDFS-15203:
-
Summary: A bug in ViewFileSystemBaseTest
Key: HDFS-15203
URL: https://issues.apache.org/jira/browse/HDFS-15203
Project: Hadoop HDFS
Issue Type: Bug
Kihwal Lee created HDFS-15287:
-
Summary: HDFS rollingupgrade prepare never finishes
Key: HDFS-15287
URL: https://issues.apache.org/jira/browse/HDFS-15287
Project: Hadoop HDFS
Issue Type: Bug
[
https://issues.apache.org/jira/browse/HDFS-15350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15350.
---
Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
>
[
https://issues.apache.org/jira/browse/HDFS-15348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15348.
---
Resolution: Duplicate
> [SBN Read] IllegalStateException happened when doing failo
Kihwal Lee created HDFS-15357:
-
Summary: Do not trust bad block reports from clients
Key: HDFS-15357
URL: https://issues.apache.org/jira/browse/HDFS-15357
Project: Hadoop HDFS
Issue Type: Bug
[
https://issues.apache.org/jira/browse/HDFS-15287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15287.
---
Resolution: Duplicate
> HDFS rollingupgrade prepare never finis
Kihwal Lee created HDFS-15421:
-
Summary: IBR leak causes standby NN to be stuck in safe mode
Key: HDFS-15421
URL: https://issues.apache.org/jira/browse/HDFS-15421
Project: Hadoop HDFS
Issue Type
Kihwal Lee created HDFS-15422:
-
Summary: Reported IBR is partially replaced with stored info when
queuing.
Key: HDFS-15422
URL: https://issues.apache.org/jira/browse/HDFS-15422
Project: Hadoop HDFS
Kihwal Lee created HDFS-15726:
-
Summary: Client should only complete a file if the last block is
finalized
Key: HDFS-15726
URL: https://issues.apache.org/jira/browse/HDFS-15726
Project: Hadoop HDFS
[
https://issues.apache.org/jira/browse/HDFS-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15824.
---
Resolution: Invalid
> Update to enable TLS >=1.2 as default secure pro
[
https://issues.apache.org/jira/browse/HDFS-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-15825.
---
Resolution: Invalid
> Using a cryptographically weak Pseudo Random Number Generator (P
Kihwal Lee created HDFS-16034:
-
Summary: Disk failures exceeding the DFIP threshold do not
shutdown datanode
Key: HDFS-16034
URL: https://issues.apache.org/jira/browse/HDFS-16034
Project: Hadoop HDFS
[
https://issues.apache.org/jira/browse/HDFS-16034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-16034.
---
Resolution: Incomplete
> Disk failures exceeding the DFIP threshold do not shutdown datan
Kihwal Lee created HDFS-9178:
Summary: Slow datanode I/O can cause a wrong node to be marked bad
Key: HDFS-9178
URL: https://issues.apache.org/jira/browse/HDFS-9178
Project: Hadoop HDFS
Issue
Kihwal Lee created HDFS-9208:
Summary: Disabling atime may fail clients like distCp
Key: HDFS-9208
URL: https://issues.apache.org/jira/browse/HDFS-9208
Project: Hadoop HDFS
Issue Type: Bug
[
https://issues.apache.org/jira/browse/HDFS-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kihwal Lee resolved HDFS-5032.
--
Resolution: Fixed
> Write pipeline failures caused by slow or busy disk may not be handled
> pr