Denny Ye created HDFS-3471:
--
Summary: NullPointerException when NameNode is closing
Key: HDFS-3471
URL: https://issues.apache.org/jira/browse/HDFS-3471
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 0.21.0, 0.20.2
Reporter: Denny Ye
I hit an exception when seeking to the latest size of a file that another
client was still writing. The message is "Cannot seek after EOF". I took the
seek target from the previous input stream and am now trying to read the file
incrementally. It
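A minimal sketch of one way such an incremental read could be retried; the path, starting offset, and configuration below are illustrative assumptions, not the reporter's actual code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: re-open a file that another client is still writing and continue
// reading from a previously saved offset. The length reported by the NameNode
// is checked first, because seeking beyond the length visible to this reader
// is what raises "Cannot seek after EOF".
public class IncrementalReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path("/logs/app.log"); // hypothetical file being appended to
        long lastOffset = 0L;                  // offset saved from the previous read (assumption)

        FileSystem fs = FileSystem.get(conf);
        long visibleLength = fs.getFileStatus(file).getLen();
        if (lastOffset >= visibleLength) {
            // Nothing new is visible to this reader yet; retry later instead of seeking past EOF.
            System.out.println("no new data visible");
            return;
        }
        FSDataInputStream in = fs.open(file);
        try {
            in.seek(lastOffset);
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                System.out.write(buf, 0, n);
                lastOffset += n;
            }
        } finally {
            in.close();
        }
        System.out.flush();
    }
}
{code}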
Affects Versions: 0.20.205.0
Reporter: Denny Ye
Priority: Minor
The NameNode provides RPC service not only for the DFS client but also for
user-defined programs. A common case we often meet is that a user passes a
file path prefixed with the HDFS protocol ("hdfs://{namenode}:{port}/{f
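Purely as an illustration of the path form in question, here is a tiny sketch (hypothetical host, port, and path) of how a scheme-qualified string reduces to the absolute path component that the NameNode itself works with:

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

// Illustration only: reduce a scheme-qualified HDFS URI supplied by a user
// program to its absolute path component. Host, port, and path are made up.
public class StripScheme {
    public static void main(String[] args) {
        String userInput = "hdfs://namenode:9000/user/denny/data.txt";
        URI uri = new Path(userInput).toUri();
        String absolutePath = uri.getPath(); // "/user/denny/data.txt"
        System.out.println(absolutePath);
    }
}
{code}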
HDFS notification
--
Key: HDFS-2760
URL: https://issues.apache.org/jira/browse/HDFS-2760
Project: Hadoop HDFS
Issue Type: New Feature
Affects Versions: 0.20.2
Reporter: Denny Ye
Priority: Minor
Reporter: Denny Ye
Priority: Critical
Hadoop goes into a recovery mode and saves the namespace to disk before the
system starts serving. However, there are many situations that will cause
Hadoop to enter recovery mode, such as a missing VERSION file, or a ckpt file
that exists due to a previous failure of
Reporter: Denny Ye
1.1 First shutdown, then restart: the fsimage was loaded and saved to disk,
and the editlog was cleared.
1.2 Shutdown again while in safe mode, to make sure there was no change in the
editlog, then restart: the fsimage was loaded and saved to disk again, but the
editlog was
[ https://issues.apache.org/jira/browse/HDFS-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Denny Ye resolved HDFS-2176.
Resolution: Not A Problem
Breaking out of the loop when removing can avoid the
ConcurrentModificationException. It's my
Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Denny Ye
Priority: Minor
The code below may cause a ConcurrentModificationException when an fsimage
directory is the same as an editlog directory:
Method: FSImage.setStorageDirectories(Collection fsNameDirs,
Collection fsEditsDirs)
Code
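As an illustrative sketch of the failure pattern and the two usual fixes (not the actual FSImage.setStorageDirectories body; the directory names are made up):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;

public class RemoveWhileIterating {
    public static void main(String[] args) {
        Collection<String> dirs = new ArrayList<String>();
        dirs.add("/data/name");
        dirs.add("/data/shared"); // hypothetical dir listed for both fsimage and edits
        dirs.add("/data/edits");

        // Removing from 'dirs' inside a for-each loop and then continuing to
        // iterate throws ConcurrentModificationException on the next step.

        // Fix 1: break right after the remove, so the iterator is never advanced again.
        for (String d : dirs) {
            if (d.equals("/data/shared")) {
                dirs.remove(d);
                break;
            }
        }

        // Fix 2: remove through the iterator itself, which stays valid.
        for (Iterator<String> it = dirs.iterator(); it.hasNext();) {
            if (it.next().equals("/data/edits")) {
                it.remove();
            }
        }
        System.out.println(dirs); // [/data/name]
    }
}
{code}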
Affects Versions: 0.21.0
Reporter: Denny Ye
Priority: Minor
A client created a file at the NameNode and then crashed. It did not write any
bytes to HDFS. After one hour, the NameNode became aware that it needed to
close the file and reclaim the lease. There are two steps for removing
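As an aside, a client does not always have to wait out the one-hour hard limit; on versions that expose it, lease recovery can be requested explicitly. A minimal sketch, with a hypothetical path:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: ask the NameNode to recover the lease on a file left open by a
// crashed client, instead of waiting for the hard lease limit to expire.
public class ForceLeaseRecovery {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path abandoned = new Path("/tmp/half-open.txt"); // hypothetical half-open file
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            boolean closed = dfs.recoverLease(abandoned);
            System.out.println("file closed after recovery attempt: " + closed);
        }
    }
}
{code}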
*Root cause*: wrong FSImage format after the user killed the HDFS process. The
loader may read an invalid block number, perhaps 1 billion or more, so an
OutOfMemoryError happens before the EOFException.
How can we verify the validity of the FSImage file?
--regards
Denny Ye
On Tue, Jun 28, 2011 at 4:44 PM, mac fang wrote:
>
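One hedged answer to that question is to record a digest of the image when it is written and verify it before loading, so a truncated or corrupt image is rejected before the loader trusts a bogus block count. The layout and file names below are assumptions for illustration, not HDFS's built-in mechanism:

{code}
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Sketch: compute an MD5 digest of an fsimage file so it can be compared
// against a digest recorded at save time before the image is loaded.
public class FsImageChecksum {
    public static String md5Of(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new FileInputStream(path);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        } finally {
            in.close();
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String computed = md5Of("current/fsimage"); // hypothetical image location
        System.out.println("fsimage MD5 = " + computed);
        // Refuse to load the image if this does not match the stored digest.
    }
}
{code}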
Components: hdfs client
Affects Versions: 0.21.0
Reporter: Denny Ye
:abc:root > bin/hadoop fs -count -q /ABC
11/06/07 18:05:54 INFO security.Groups: Group mapping
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=30
11/06/07 18:05:54 WARN conf.Configurat