Another approach would be to ask which tar to build:
mapred-tar (mapred and common)
hdfs-tar (hdfs and common)
hadoop-tar (all)
In this case, hbase can just use hdfs-tar.
-Bharath
From: Ravi Teja
To: mapreduce-...@hadoop.apache.org; common-...@hadoop.apache.org
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
As part of removing references to conf in DFSClient, I am proposing to replace
FsPermission.getUMask(conf) everywhere in the DFSClient class with
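A hedged sketch of the general pattern this change implies: read the needed value out of the configuration once and cache it, instead of retaining the conf object. The names here (ClientSettings, the umask key, the plain Map standing in for Hadoop's Configuration) are illustrative, not the actual HDFS API.

```java
// Hedged sketch, not the actual DFSClient change: parse the umask out of
// the configuration once at construction time and cache the primitive,
// so the (potentially large) conf object is not retained. A plain Map
// stands in for Hadoop's Configuration; the key name is made up.
import java.util.Map;

public class ClientSettings {
    private final int umask; // cached primitive instead of a conf reference

    public ClientSettings(Map<String, String> conf) {
        // Parse once; after construction, conf can be garbage collected.
        this.umask = Integer.parseInt(conf.getOrDefault("fs.umask", "022"), 8);
    }

    public int getUmask() {
        return umask;
    }
}
```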
Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
This is an umbrella jira to track removing the references to the conf object in
the DFSClient library.
--
This message is automatically generated by JIRA.
[ https://issues.apache.org/jira/browse/HDFS-2103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Bharath Mundlapudi resolved HDFS-2103.
--
Resolution: Not A Problem
Didn't notice the finally block, where the read lock is released.
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
In FSNamesystem.getBlockLocationsUpdateTimes function, we have the following
code:
{code}
for (int attempt = 0; attempt < 2; attempt++) {
  if (attempt == 0) { // fi
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
The write pipeline can fail for various reasons, like rpc connection issues,
disk problems, etc. I am proposing to add metrics to detect write pipeline issues.
--
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
At present, DFSClient stores a reference to the configuration object. Since
these configuration objects are pretty big, at times they can bloat the processes which has
[ https://issues.apache.org/jira/browse/HDFS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Bharath Mundlapudi resolved HDFS-2072.
--
Resolution: Duplicate
> Remove StringUtils.stringifyException(ie) in logger functions
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
The Apache logger API has an overloaded function which can take both the message
and the exception. I am proposing to clean up the logging code with this API,
i.e.:
Change the code
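As a hedged illustration of the proposed cleanup (java.util.logging is used here as a stand-in for the Apache logging API the issue refers to), the idea is to hand the exception to the logger as its own argument instead of stringifying it into the message:

```java
// Illustrative sketch only; java.util.logging stands in for the Apache
// logging API mentioned in the issue. The point is the call shape: pass
// the Throwable as a separate argument and let the logger format the
// stack trace, rather than concatenating a stringified exception.
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingExample {
    private static final Logger LOG = Logger.getLogger(LoggingExample.class.getName());

    public static void main(String[] args) {
        InterruptedException ie = new InterruptedException("wait interrupted");
        // Before (the pattern being removed):
        //   LOG.warning("interrupted: " + StringUtils.stringifyException(ie));
        // After (overload that takes the message and the exception):
        LOG.log(Level.WARNING, "interrupted while waiting", ie);
    }
}
```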
Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.23.0
The following code can throw an NPE if callGetBlockLocations returns null,
i.e. if the server returns null:
{code}
List<LocatedBlock> locatedblocks
    = callGetBlockLocations(namenode, src, 0, Long.MAX_VALUE).getLocatedBlocks();
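A minimal sketch of the guard this calls for, using stand-in types (NamenodeStub and LocatedBlocks are hypothetical, not the real HDFS interfaces): check the intermediate result of the chained call before dereferencing it.

```java
// Hedged sketch: NamenodeStub and LocatedBlocks are stand-ins for the
// real HDFS types. The fix is to guard the intermediate result before
// calling getLocatedBlocks() on it, surfacing a clear error instead of
// a NullPointerException deep in the client.
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

public class NullCheckSketch {
    interface LocatedBlocks {
        List<String> getLocatedBlocks();
    }

    interface NamenodeStub {
        LocatedBlocks getBlockLocations(String src, long start, long length) throws IOException;
    }

    static List<String> fetchBlocks(NamenodeStub namenode, String src) throws IOException {
        LocatedBlocks blocks = namenode.getBlockLocations(src, 0, Long.MAX_VALUE);
        if (blocks == null) { // the server may legitimately return null
            throw new FileNotFoundException("File does not exist: " + src);
        }
        return blocks.getLocatedBlocks();
    }
}
```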
Issue Type: Bug
Components: data-node
Affects Versions: 0.20.204.0, 0.20.205.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.20.205.0
As a part of fixing the datanode process hang, this part of the code was
introduced in 0.20.204 to clean up
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor
Fix For: 0.23.0
Fixing the Namenode upgrade option along the same lines as the Namenode format
option. If a clusterid is not given, then one will be automatically generated
for the
Affects Versions: 0.20.205.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.20.205.0
Since we have multiple Jiras in trunk for common and hdfs, I am creating
another jira for this issue.
This patch addresses the following:
1. Pro
Issue Type: Bug
Components: data-node
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor
Fix For: 0.23.0
This new method, FileUtil.list, will throw an exception when the disk is bad rather
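A minimal sketch of that behavior, assuming the alternative being avoided is File.list()'s silent null return on an unreadable directory (the wrapper's name and message here are illustrative, not the actual Hadoop code):

```java
// Hedged sketch of the described FileUtil.list behavior, not the actual
// Hadoop implementation: File.list() returns null when the directory
// cannot be read (e.g. a bad disk), so convert that null into an
// explicit IOException at the point where the failure happened.
import java.io.File;
import java.io.IOException;

public class SafeList {
    static String[] list(File dir) throws IOException {
        String[] entries = dir.list();
        if (entries == null) {
            throw new IOException("Could not list the contents of " + dir);
        }
        return entries;
    }
}
```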
Components: data-node
Affects Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Currently, if all block pool service threads exit, the Datanode continues to run.
This should be fixed.
--
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor
Currently, namenode -genclusterid is a helper utility to generate a unique
clusterid. This option becomes unnecessary once namenode -format automatically
generates the clusterid.
--
Versions: 0.20.205.0, 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.20.205.0, 0.23.0
While testing Disk Fail Inplace, we encountered an NPE from this part of the
code:
File[] files = dir.listFiles();
for (File f : files) {
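A null-safe version of that loop, as a minimal sketch (the method and counting logic are illustrative): listFiles() returns null on an unreadable directory, so check before iterating.

```java
// Minimal sketch of the null check the fix implies: File.listFiles()
// returns null when the directory cannot be read (e.g. a failed disk),
// so guard before the for-each rather than hitting an NPE inside it.
import java.io.File;

public class ListFilesCheck {
    static int countEntries(File dir) {
        File[] files = dir.listFiles();
        if (files == null) { // bad disk, or not a directory
            return 0;
        }
        int count = 0;
        for (File f : files) {
            count++;
        }
        return count;
    }
}
```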
The correct way to format a namenode:
/bin/hdfs namenode -format -clusterid
PS: Set up your environment correctly (common home, etc.).
Only the first time does it require the cluster id; from the second time onwards
it will remember the cluster id and prompt you to format that particular cluster id.
I have filed a Jira
Versions: 0.23.0
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor
Fix For: 0.23.0
While setting up a 0.23-based cluster, I ran into this issue. When I issue a
format namenode command, which got changed in 23, it should let
Hello Hiba,
The file fileHDFS will be stored under the user's home directory. Let's say you
run this command as the 'hdfsclient' user. Then it should be located at
/user/hdfsclient/fileHDFS. You could also specify the actual path where you
want to store the file.
Like,
$PATH/bin/hadoop dfs -copyFromLocal
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Priority: Minor
Fix For: 0.23.0
For the append call, discover the file-not-found exception early and avoid an
extra server call.
--
Issue Type: Bug
Components: data-node
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
In secure mode, when more disks fail than the volumes tolerated, the datanode
process doesn't exit properly; it just hangs even though the shutdown method is called.
--
Affects Versions: 0.20.1
Reporter: Bharath Mundlapudi
Assignee: Bharath Mundlapudi
Fix For: 0.20.4
Datanode startup doesn't honor volumes.tolerated for the hadoop 20 version.
--
Components: name-node
Affects Versions: 0.22.0
Reporter: Bharath Mundlapudi
Priority: Minor
Fix For: 0.22.0
I am proposing a footprint optimization to merge the blockReplication and
preferredBlockSize fields into one 'long header' field in the INodeFile class. Thi
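The kind of bit-packing such a merge implies can be sketched as follows; the split chosen here (high 16 bits for replication, low 48 for block size) is an assumption for illustration, not necessarily the committed layout.

```java
// Hedged sketch of the proposed merge: pack a replication factor and a
// preferred block size into a single 64-bit header field. The field
// widths (16 bits / 48 bits) are illustrative assumptions.
public class HeaderPacking {
    static final long BLOCK_SIZE_MASK = (1L << 48) - 1;

    static long pack(short replication, long preferredBlockSize) {
        return ((long) replication << 48) | (preferredBlockSize & BLOCK_SIZE_MASK);
    }

    static short getReplication(long header) {
        return (short) (header >>> 48);
    }

    static long getPreferredBlockSize(long header) {
        return header & BLOCK_SIZE_MASK;
    }
}
```

One long replaces a short plus a long per inode, which adds up when the namenode holds millions of file inodes in memory.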