Re: HDFS start-up with safe mode?
Hi,

I guess this is something about the "threshold 0.9990". When HDFS starts up, it enters safe mode first, then checks some value of my cluster (I don't know exactly what value or percentage), finds that value is below 99.9%, and so safe mode will not turn off? But the conclusion of the log line is "Safe mode will be turned off automatically". I'm lost.

___
2011-04-08 11:58:21,036 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
The reported blocks 0 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 3. Safe mode will be turned off automatically.
___

----- Original Message -----
From: "springring"
To:
Sent: Friday, April 08, 2011 2:20 PM
Subject: Fw: start-up with safe mode?

>>> Hi,
>>>
>>> When I start up Hadoop, the namenode log shows "STATE* Safe mode ON" like that; how do I set it off?
>> I can set it off with the command "hadoop dfsadmin -safemode leave" after start-up, but how can I start HDFS out of safe mode in the first place?
>>> Thanks.
>>>
>>> Ring
>>>
>>> the startup log:
>>>
>>> 2011-04-08 11:58:20,655 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> 2011-04-08 11:58:20,657 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
>>> 2011-04-08 11:58:20,678 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
>>> 2011-04-08 11:58:20,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdfs
>>> 2011-04-08 11:58:20,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>> 2011-04-08 11:58:20,697 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>>> 2011-04-08 11:58:20,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>>> 2011-04-08 11:58:20,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> 2011-04-08 11:58:20,976 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-04-08 11:58:21,001 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 17
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 1529 loaded in 0 seconds.
>>> 2011-04-08 11:58:21,007 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hdfs/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
>>> 2011-04-08 11:58:21,009 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 1529 saved in 0 seconds.
>>> 2011-04-08 11:58:21,022 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 1529 saved in 0 seconds.
>>> 2011-04-08 11:58:21,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 339 msecs
>>> 2011-04-08 11:58:21,036 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
>>> The reported blocks 0 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 3. Safe mode will be turned off automatically.
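[Editor's note: the arithmetic behind that "needs additional 2 blocks" line can be sketched as below. This is a simplified model of the 0.20-era NameNode check (the block threshold is computed by truncating threshold * totalBlocks), not an actual Hadoop API; the function name is made up for illustration.]

```python
def blocks_needed(total_blocks, reported_blocks, threshold_pct=0.9990):
    """Blocks still required before the NameNode can leave safe mode.

    Models FSNamesystem's safe-mode check: the threshold is an integer
    truncation of threshold_pct * total_blocks, so with 3 total blocks
    and threshold 0.9990 the NameNode waits for int(2.997) = 2 blocks.
    """
    block_threshold = int(total_blocks * threshold_pct)  # truncation, not rounding
    return max(0, block_threshold - reported_blocks)

# The numbers from the log: 3 total blocks, 0 reported so far.
print(blocks_needed(3, 0))   # 2 more blocks needed, matching the log line
print(blocks_needed(3, 2))   # 0 -> safe mode can be left automatically
```

This also shows why the log's "will be turned off automatically" is not a typo: as soon as datanodes report 2 of the 3 blocks, the condition is met and safe mode exits on its own, with no `dfsadmin` command needed.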
Re: Re: HDFS start-up with safe mode?
I modified the value of "dfs.safemode.threshold.pct" to zero, and now everything is OK (log file below). But I still have three questions:

1. Can I restore the percentage of blocks that should satisfy the minimal replication requirement to 99.9%? With "hadoop balancer"? I feel that would be safer.
2. I set "dfs.safemode.threshold.pct" to "0" and to "0f"; both values work, but which one is better? I guess "0".
3. When HDFS starts up in safe mode, shouldn't the log file say "The reported blocks 0 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 3. Safe mode will 'not' be turned off automatically."? The word "not" is missing there, right?

Ring

/ SHUTDOWN_MSG: Shutting down NameNode at computeb-05.pcm/172.172.2.6 /
2011-04-08 16:33:37,312 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = computeb-05.pcm/172.172.2.6
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-CDH3B4
STARTUP_MSG:   build = -r 3aa7c91592ea1c53f3a913a581dbfcdfebe98bfe; compiled by 'root' on Mon Feb 21 17:31:12 EST 2011
2011-04-08 16:33:37,441 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-04-08 16:33:37,443 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-04-08 16:33:37,464 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2011-04-08 16:
2011-04-08 16:33:37,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 4
2011-04-08 16:33:37,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2011-04-08 16:33:37,832 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2011-04-08 16:33:37,832 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2011-04-08 16:33:37,832 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 4 blocks
2011-04-08 16:33:37,835 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2011-04-08 16:33:37,849 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9100
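[Editor's note: for question 1, the property springring changed lives in hdfs-site.xml, and restoring the shipped default is a configuration edit, not a `hadoop balancer` run. A minimal fragment, assuming the 0.20/CDH3 property name used in this thread (later Hadoop releases renamed it `dfs.namenode.safemode.threshold-pct`):]

```xml
<!-- hdfs-site.xml: fraction of blocks that must report to the NameNode
     before it leaves safe mode. 0.999f is the shipped default; removing
     the override entirely also restores it. -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999f</value>
</property>
```

For question 2, the default value is written "0.999f" in hdfs-default.xml, so "0f" matches the documented style, but both parse to the same float; any value of 0 or less means the NameNode does not wait for any block reports.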