Use "dfsadmin -setQuota ..." without faults reported
Key: HDFS-2113
URL: https://issues.apache.org/jira/browse/HDFS-2113
Project: Hadoop HDFS
Issue Type: Bug
Move ReplicationMonitor to block management
---
Key: HDFS-2112
URL: https://issues.apache.org/jira/browse/HDFS-2112
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Hi Todd,
We use the 0.21 version. I think we used 'kill -9'. The likely timing
is during startup or a checkpoint.
Regards,
macf
On Tue, Jun 28, 2011 at 11:03 PM, Todd Lipcon wrote:
> Hi Denny,
>
> Which version of Hadoop are you using, and when are you killing the
> NameNode? Are you using a unix signal (eg kill -9) or killing power to the
> whole machine?
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/764/
###
## LAST 60 LINES OF THE CONSOLE
###
[...truncated 1197 lines...]
[javac] NumberReplicas num =
I have updated the meetup page:
http://www.meetup.com/Hadoop-Contributors/events/23890191/
sanjay
On Jun 28, 2011, at 4:27 PM, Sanjay Radia wrote:
On Jun 28, 2011, at 11:17 AM, Sanjay Radia wrote:
We have a room confirmed from 10 am to 3 pm on Friday, July 1st, at Yahoo's
Sunnyvale campus.
Will update the
[ https://issues.apache.org/jira/browse/HDFS-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Daryn Sharp resolved HDFS-1901.
---
Resolution: Fixed
> Fix TestHDFSCLI and TestDFSShell
Add tests for ensuring that the DN will start with a few bad data directories
(Part 1 of testing DiskChecker)
---
Key: HDFS-2111
URL: https://issues.apache.org/jira/browse/HDFS-2111
Hi Denny,
Which version of Hadoop are you using, and when are you killing the
NameNode? Are you using a unix signal (eg kill -9) or killing power to the
whole machine?
Thanks
-Todd
On Tue, Jun 28, 2011 at 2:11 AM, Denny Ye wrote:
> *Root cause*: Wrong FSImage format when user killed hdfs process.
*Root cause*: wrong FSImage format when the user killed the hdfs process. The
loader may read an invalid block count, possibly 1 billion or more, and the
OutOfMemoryError happens before any EOFException.
How can we verify the validity of the FSImage file?
--regards
Denny Ye
On Tue, Jun 28, 2011 at 4:44 PM, mac fang wrote:
> Hi,
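A minimal sketch of the kind of guard Denny is asking about, assuming the
loader reads a block count from a DataInputStream (names here are illustrative,
not the actual FSImage loader): validate the count before allocating, so a
truncated image fails fast with an IOException instead of an OutOfMemoryError.

    // Illustrative sketch, not the real FSImage loader: reject an
    // implausible block count read from a possibly truncated image.
    import java.io.DataInputStream;
    import java.io.IOException;

    class ImageSanityCheck {
        // This bound is an assumption; a real check could instead bound
        // the count by the number of bytes remaining in the file.
        static final long MAX_PLAUSIBLE_BLOCKS = 100000000L;

        static long readBlockCount(DataInputStream in) throws IOException {
            long numBlocks = in.readLong();
            if (numBlocks < 0 || numBlocks > MAX_PLAUSIBLE_BLOCKS) {
                throw new IOException(
                    "Corrupt FSImage: implausible block count " + numBlocks);
            }
            return numBlocks;
        }
    }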
Hi Team,
What we found when we use Hadoop is that the FSImage often corrupts when we
start/stop the Hadoop cluster. We think the reason is around the write to the
output stream: the NameNode may be killed while it is in saveNamespace, and
then the FSImage file doesn't complete writing. Currently i s
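One common pattern for making such a save crash-safe, sketched below with
illustrative names (the real saveNamespace code path differs): write the image
to a temporary file, sync it to disk, then atomically rename it over the old
image, so a kill at any point leaves either the old or the new complete image.

    // Illustrative write-temp-then-rename sketch; not the actual
    // FSImage.saveNamespace implementation.
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    class AtomicImageSave {
        static void save(byte[] imageBytes, File imageFile) throws IOException {
            File tmp = new File(imageFile.getParent(),
                                imageFile.getName() + ".tmp");
            FileOutputStream out = new FileOutputStream(tmp);
            try {
                out.write(imageBytes);
                out.getFD().sync();  // force the bytes to disk before the rename
            } finally {
                out.close();
            }
            // On POSIX file systems the rename is atomic, so readers see
            // either the old complete image or the new complete image.
            if (!tmp.renameTo(imageFile)) {
                throw new IOException("rename failed: " + tmp + " -> " + imageFile);
            }
        }
    }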