Re: (HDFS-1943) fail to start datanode while start-dfs.sh is executed by root user

2011-05-25 Thread Wei Yongjun
Is nobody taking care of this fix?

This bug was introduced by HDFS-1150 (Verify datanodes' identities to clients
in secure clusters).

> fail to start datanode while start-dfs.sh is executed by root user
> --
>
>  Key: HDFS-1943
>  URL: https://issues.apache.org/jira/browse/HDFS-1943
>  Project: Hadoop HDFS
>   Issue Type: Bug
>   Components: scripts
> Affects Versions: 0.23.0
> Reporter: Wei Yongjun
> Priority: Blocker
>  Fix For: 0.23.0
>
>
> When start-dfs.sh is run by root user, we got the following error message:
> # start-dfs.sh
> Starting namenodes on [localhost ]
> localhost: namenode running as process 2556. Stop it first.
> localhost: starting datanode, logging to 
> /usr/hadoop/hadoop-common-0.23.0-SNAPSHOT/bin/../logs/hadoop-root-datanode-cspf01.out
> localhost: Unrecognized option: -jvm
> localhost: Could not create the Java virtual machine.
>
> The -jvm option should be passed to jsvc when starting a secure
> datanode, but it is still passed to java when start-dfs.sh is run by root
> while the secure datanode is disabled. This is a bug in bin/hdfs.
>
>
> --
> This message is automatically generated by JIRA.
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
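The failure mode described above comes down to one guard: the jsvc-only -jvm flag must depend on whether a secure datanode is actually configured (e.g. via HADOOP_SECURE_DN_USER), not merely on the script running as root. A minimal sketch of that decision, not the actual bin/hdfs patch; the function name and arguments are illustrative:

```shell
#!/bin/sh
# Hedged sketch of the check bin/hdfs needs (not the actual patch):
# jsvc understands the -jvm flag, the plain java launcher does not,
# so running as root alone must not be enough to add it.
want_jsvc_jvm_flag() {
  cmd="$1"; euid="$2"; secure_dn_user="$3"
  [ "$cmd" = "datanode" ] && [ "$euid" -eq 0 ] && [ -n "$secure_dn_user" ]
}

# Secure datanode configured and running as root: jsvc path, -jvm is fine.
if want_jsvc_jvm_flag datanode 0 hdfs; then echo "jsvc -jvm"; else echo "plain java"; fi
# Root but no secure datanode (the HDFS-1943 case): must stay on plain java.
if want_jsvc_jvm_flag datanode 0 ""; then echo "jsvc -jvm"; else echo "plain java"; fi
```

With such a guard, a root-invoked start-dfs.sh on a non-secure cluster would launch the datanode under plain java without the unrecognized -jvm option.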


HDFS-903 - Backupnode always downloading image from Namenode after this change

2011-05-25 Thread Sreehari G
Hi all,

In HDFS-903 (MD5 verification of fsimage), after this change the
Backupnode downloads the image & edit files from the namenode every time,
since a difference in checkpoint time is always maintained between Namenode and
Backupnode. This happens because the Namenode resets its checkpoint time
every time: we are ignoring renewCheckpointTime and passing true
explicitly to rollFsimage during endCheckpoint.

Also, though a proposal to use the MD5 digest to decide whether to download the
image is also mentioned, it doesn't seem to be implemented.

Isn't this downloading of the image every time a problem, or am I missing
something?
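The loop described above can be simulated in a few lines of shell (a hedged illustration; `nn`, `bn`, and `checkpoint_round` stand in for the Namenode's and Backupnode's checkpoint times and one checkpoint cycle, not for real Hadoop code): once the Namenode renews its checkpoint time unconditionally at endCheckpoint, the two times never match again and every subsequent checkpoint triggers a full image download.

```shell
#!/bin/sh
# Hedged simulation of the reported behavior: nn/bn model the Namenode's
# and Backupnode's checkpoint times. Passing true to rollFsimage on every
# endCheckpoint is modeled by unconditionally bumping nn after each round.
nn=100; bn=100

checkpoint_round() {
  if [ "$nn" -ne "$bn" ]; then
    echo "download full image"   # times differ -> Backupnode re-downloads
    bn=$nn
  else
    echo "skip download"
  fi
  nn=$((nn + 1))                 # Namenode always renews its checkpoint time
}

checkpoint_round   # first round: times still equal, download skipped
checkpoint_round   # from now on the skew never heals
checkpoint_round
```

If renewCheckpointTime were honored instead of hard-coding true, the bump would happen only when a renewal is actually requested, and the two times could stay in sync between checkpoints.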

HUAWEI TECHNOLOGIES CO.,LTD.


Solitaire
Domlur
Bangalore
www.huawei.com

-
This e-mail and its attachments contain confidential information from
HUAWEI, which 
is intended only for the person or entity whose address is listed above. Any
use of the 
information contained herein in any way (including, but not limited to,
total or partial 
disclosure, reproduction, or dissemination) by persons other than the
intended 
recipient(s) is prohibited. If you receive this e-mail in error, please
notify the sender by 
phone or email immediately and delete it!

 


Re: HDFS-903 - Backupnode always downloading image from Namenode after this change

2011-05-25 Thread Todd Lipcon
Hi Sreehari,

The BackupNode isn't really production-ized. I don't think anyone uses it,
so it's not surprising that there are a lot of broken parts.

If you're interested in contributing in this area, I would encourage you to
look at the HDFS-1073 branch. The BN and CheckpointNode will be undergoing
various surgery during the course of this project, so it's a good
opportunity to fix it up, add test coverage, etc.

-Todd



-- 
Todd Lipcon
Software Engineer, Cloudera


Re: HDFS-903 - Backupnode always downloading image from Namenode after this change

2011-05-25 Thread Konstantin Shvachko
Hi Sreehari,

I actually also see this problem. It would be good to fix it.
You can simply create a jira describing the issue and we can discuss
there how to fix it. If you have a patch - even better.
Once fixed in trunk it can be ported to other branches,
including 0.22 or HDFS-1073-branch if desired.

Thanks,
--Konstantin



[jira] [Resolved] (HDFS-1131) each DFSClient instance uses a daemon thread for lease checking, there should be a singleton daemon for all DFSClient instances

2011-05-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-1131.
--

Resolution: Duplicate

> each DFSClient instance uses a daemon thread for lease checking, there should 
> be a singleton daemon for all DFSClient instances
> ---
>
> Key: HDFS-1131
> URL: https://issues.apache.org/jira/browse/HDFS-1131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alejandro Abdelnur
>Priority: Critical
>
> When accessing HDFS from a server application that acts on behalf of several 
> users, each user has its own file system handle (DFSClient); this creates one 
> daemon thread per file system instance.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hadoop-Hdfs-trunk-Commit - Build # 684 - Failure

2011-05-25 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/684/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 2644 lines...]
[junit] Running org.apache.hadoop.hdfs.server.datanode.TestDiskError
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.062 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestInterDatanodeProtocol
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.232 sec
[junit] Running 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.774 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 18.609 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 31.399 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestComputeInvalidateWork
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.844 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestDatanodeDescriptor
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.17 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 13.215 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.563 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHeartbeatHandling
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.201 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestHost2NodesMap
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.086 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.888 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestOverReplicatedBlocks
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.639 sec
[junit] Running 
org.apache.hadoop.hdfs.server.namenode.TestPendingReplication
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.297 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestReplicationPolicy
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.069 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestSafeMode
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.646 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 10.695 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.461 sec
[junit] Running org.apache.hadoop.net.TestNetworkTopology
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.114 sec
[junit] Running org.apache.hadoop.security.TestPermission
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.225 sec

checkfailure:
[touch] Creating 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:712:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:669:
 The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/build.xml:737:
 Tests failed!

Total time: 9 minutes 6 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
7 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
junit.framework.AssertionFailedError: One of the tests failed. See the Detailed 
results to identify the command that failed
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:264)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:126)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:81)


REGRESSION:  org.apache.hadoop.hdfs.TestDFSShell.testErrOutPut

Error Message:
 -lsr should fail 

Stack Trace

Re: Hadoop-Hdfs-trunk-Commit - Build # 684 - Failure

2011-05-25 Thread Todd Lipcon
Sorry all, this just needs HDFS-1983. I will commit it momentarily.

-Todd


[jira] [Created] (HDFS-1996) ivy: hdfs test jar should be independent of common test jar

2011-05-25 Thread Tsz Wo (Nicholas), SZE (JIRA)
ivy: hdfs test jar should be independent of common test jar
---

 Key: HDFS-1996
 URL: https://issues.apache.org/jira/browse/HDFS-1996
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Eric Yang


hdfs tests and common tests may require different libraries, e.g., common tests 
need ftpserver for testing {{FTPFileSystem}} but hdfs tests do not.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1997) Image transfer process misreports client side exceptions

2011-05-25 Thread Todd Lipcon (JIRA)
Image transfer process misreports client side exceptions


 Key: HDFS-1997
 URL: https://issues.apache.org/jira/browse/HDFS-1997
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1998) make refresh-namenodes.sh refreshing all namenodes

2011-05-25 Thread Tanping Wang (JIRA)
make refresh-namenodes.sh refreshing all namenodes


 Key: HDFS-1998
 URL: https://issues.apache.org/jira/browse/HDFS-1998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.0
Reporter: Tanping Wang
Assignee: Tanping Wang
Priority: Minor
 Fix For: 0.23.0


refresh-namenodes.sh is used to refresh the name nodes in the cluster so they 
check for updates to the include/exclude lists.  It is used when decommissioning 
or adding a data node.  Currently it only refreshes the name node that serves 
the defaultFs, if a defaultFs is defined.  Fix it by refreshing all the name 
nodes in the cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2000) Missing deprecation for io.bytes.per.checksum

2011-05-25 Thread Aaron T. Myers (JIRA)
Missing deprecation for io.bytes.per.checksum
-

 Key: HDFS-2000
 URL: https://issues.apache.org/jira/browse/HDFS-2000
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.22.0, 0.23.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 0.22.0, 0.23.0


Hadoop long ago deprecated the configuration "io.bytes.per.checksum" in favor 
of "dfs.bytes-per-checksum", but when the programmatic deprecation support was 
added, we didn't add an entry for this pair.

This is causing some tests to fail on branch-0.22 since the inclusion of 
HADOOP-7287, since some tests which were inadvertently using default config 
values are now having their settings actually picked up.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-1999) Tests use deprecated configs

2011-05-25 Thread Aaron T. Myers (JIRA)
Tests use deprecated configs


 Key: HDFS-1999
 URL: https://issues.apache.org/jira/browse/HDFS-1999
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.22.0, 0.23.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 0.22.0, 0.23.0


A few of the HDFS tests (not intended to test deprecation) use config keys 
which are deprecated.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2001) HDFS-1073: Kill previous.checkpoint, lastcheckpoint.tmp directories

2011-05-25 Thread Todd Lipcon (JIRA)
HDFS-1073: Kill previous.checkpoint, lastcheckpoint.tmp directories
---

 Key: HDFS-2001
 URL: https://issues.apache.org/jira/browse/HDFS-2001
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Todd Lipcon
Assignee: Todd Lipcon




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-2002) Incorrect computation of needed blocks in getTurnOffTip()

2011-05-25 Thread Konstantin Shvachko (JIRA)
Incorrect computation of needed blocks in getTurnOffTip()
-

 Key: HDFS-2002
 URL: https://issues.apache.org/jira/browse/HDFS-2002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
 Fix For: 0.22.0


{{SafeModeInfo.getTurnOffTip()}} under-reports the number of blocks needed to 
reach the safemode threshold.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira