Hadoop-Hdfs-trunk - Build # 1205 - Failure

2012-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1205/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12510 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.732 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.015 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.937 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.045 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.038 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.164 sec

Results :

Tests in error: 
  
  testSetrepIncWithUnderReplicatedBlocks(org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks): test timed out after 30 milliseconds
  testBalancer0(org.apache.hadoop.hdfs.server.balancer.TestBalancer): test timed out after 10 milliseconds
  org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes

Tests run: 1608, Failures: 0, Errors: 3, Skipped: 4

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:24:08.580s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:24:09.355s
[INFO] Finished at: Wed Oct 24 12:57:57 UTC 2012
[INFO] Final Memory: 17M/644M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4637
Updating HDFS-4090
Updating HADOOP-8811
Updating HDFS-2434
Updating YARN-181
Updating MAPREDUCE-4229
Updating YARN-140
Updating YARN-179
Updating HADOOP-8962
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.
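The two "test timed out after N milliseconds" entries in the console output above are JUnit-level timeouts rather than assertion failures: surefire counts them as errors, contributing two of the three entries under "Tests in error". As a minimal sketch of how such a per-test limit is declared (a hypothetical test, not the actual TestBalancer or TestUnderReplicatedBlocks code):

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class TimeoutSketch {
        // JUnit 4 aborts the test and reports
        // "test timed out after 30000 milliseconds"
        // if the body runs longer than the declared limit.
        @Test(timeout = 30000)
        public void finishesWithinLimit() throws Exception {
            Thread.sleep(10); // stand-in for the real work
            assertTrue(true);
        }
    }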

Build failed in Jenkins: Hadoop-Hdfs-trunk #1205

2012-10-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1205/

Changes:

[vinodkv] YARN-181. Fixed eclipse settings broken by capacity-scheduler.xml 
move via YARN-140. Contributed by Siddharth Seth.

[daryn] Updating credits for MAPREDUCE-4229.

[sseth] YARN-179. Fix some unit test failures. (Contributed by Vinod Kumar 
Vavilapalli)

[sseth] MAPREDUCE-4637. Handle TaskAttempt diagnostic updates while in the NEW 
and UNASSIGNED states. (Contributed by Mayank Bansal)

[daryn] MAPREDUCE-4229. Intern counter names in the JT (bobby via daryn)

[suresh] Moving HDFS-2434 from Release 2.0.3 section to trunk section.

[suresh] HDFS-2434. TestNameNodeMetrics.testCorruptBlock fails intermittently. 
Contributed by Jing Zhao.

[bobby] HADOOP-8962. RawLocalFileSystem.listStatus fails when a child filename 
contains a colon (jlowe via bobby)

[bobby] HADOOP-8811. Compile hadoop native library in FreeBSD (Radim Kolar via 
bobby)

[daryn] HDFS-4090. getFileChecksum() result incompatible when called against 
zero-byte files. (Kihwal Lee via daryn)

--
[...truncated 12317 lines...]
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.975 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.505 sec
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.814 sec
Running org.apache.hadoop.hdfs.TestLeaseRenewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.931 sec
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.124 sec
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.233 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.674 sec
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.609 sec
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.391 sec
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.498 sec <<< FAILURE!
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes  Time elapsed: 2953 sec  <<< ERROR!
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
at java.util.HashMap$EntryIterator.next(HashMap.java:834)
at java.util.HashMap$EntryIterator.next(HashMap.java:832)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.shutdown(FsVolumeImpl.java:209)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.shutdown(FsVolumeList.java:168)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.shutdown(FsDatasetImpl.java:1222)
at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1158)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1412)
at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1392)
at org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes.shutdownCluster(TestWebHdfsWithMultipleNameNodes.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.ut
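The trace above shows a java.util.HashMap being iterated inside FsVolumeImpl.shutdown() while another thread mutates it. A minimal sketch of that failure mode, plus one defensive pattern (hypothetical names; this is not the actual FsVolumeImpl code):

    import java.util.HashMap;
    import java.util.Map;

    public class CmeSketch {
        private final Map<String, Object> volumes = new HashMap<String, Object>();

        // Unsafe: if another thread adds or removes entries while this loop
        // runs, the iterator fails fast with ConcurrentModificationException,
        // exactly the exception in the trace above.
        void shutdownUnsafe() {
            for (Map.Entry<String, Object> e : volumes.entrySet()) {
                // ... release e.getValue() ...
            }
        }

        // Safer: iterate over a snapshot taken under a lock, so concurrent
        // mutation of the live map cannot invalidate the iterator.
        void shutdownSafe() {
            Map<String, Object> snapshot;
            synchronized (this) {
                snapshot = new HashMap<String, Object>(volumes);
            }
            for (Map.Entry<String, Object> e : snapshot.entrySet()) {
                // ... release e.getValue() ...
            }
        }
    }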

[jira] [Resolved] (HDFS-693) java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write exceptions were cast when trying to read file via StreamFile.

2012-10-24 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-693.
--

Resolution: Not A Problem

This issue has gone stale - resolving. The timeouts result from a variety of 
causes (e.g. genuine timeouts when a client disappears because its task was 
killed, transceiver limits being hit, and so on), not from a bug in HDFS - or 
at least, not anymore.

Resolving as Not A Problem. Please file a new ticket if you are seeing unwanted 
timeouts on healthy clients/environments/configurations.

> java.net.SocketTimeoutException: 480000 millis timeout while waiting for 
> channel to be ready for write exceptions were cast when trying to read file 
> via StreamFile.
> 
>
> Key: HDFS-693
> URL: https://issues.apache.org/jira/browse/HDFS-693
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.20.1
>Reporter: Yajun Dong
> Attachments: HDFS-693.log
>
>
> To exclude the case of a network problem, I found the count of dataXceiver is 
> about 30.  Also, I could see the output of netstat -a | grep 50075 has many 
> connections in TIME_WAIT status when this happened.
> Partial log in attachment. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Ask for help?

2012-10-24 Thread Xiaohan
In our production environment we have encountered a problem with the 
performance of the NameNode.
We configure the shared storage of the NameNode with BookKeeper. Our version of 
Hadoop is 2.0.1, and BK is 4.1.0.

The problem is: after the HDFS system has run for a while (2-3 days), we found 
that the performance decreased dramatically!
The benchmark with nnbench from 
hadoop-mapreduce-client-jobclient-2.0.1-tests.jar is like:

First use:
./yarn jar 
../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.0.1-tests.jar 
nnbench -operation create_write -numberOfFiles 10
We get:
12/10/20 20:05:43 INFO hdfs.NNBench:TPS: Create/Write/Close: 52

Two days later, we get:
12/10/23 18:34:42 INFO hdfs.NNBench:TPS: Create/Write/Close: 1
// The "Avg exec time (ms): Create/Write/Close:" figure is even larger, maybe 
// more than 1000 ms, so the true TPS here may be even smaller than shown.

And in the NameNode logs, we found this difference between the two runs:

2012-10-20 20:05:43,249 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
 Number of syncs: 1347 SyncTimes(ms): 14138 3677

2012-10-22 18:34:42,223 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
 Number of syncs: 51 SyncTimes(ms): 34553 312

We suspect that the problem is in BookKeeper. Has anyone encountered this, or 
does anyone have any clue? Thanks very much.
The environment is strictly controlled, and the logs can only be copied out by 
hand, so the logs are not very detailed.


Re: Ask for help?

2012-10-24 Thread Ivan Kelly
On Wed, Oct 24, 2012 at 02:44:45PM +, Xiaohan wrote:
> And in the NameNode logs, we found this difference between the two runs:
> 
> 2012-10-20 20:05:43,249 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog:  Number of syncs: 1347 
> SyncTimes(ms): 14138 3677
> 
> 2012-10-22 18:34:42,223 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog:  Number of syncs: 51 
> SyncTimes(ms): 34553 312
> 
> We suspect that the problem is in BookKeeper. Has anyone encountered this, 
> or does anyone have any clue? Thanks very much.
> The environment is strictly controlled, and the logs can only be copied out 
> by hand, so the logs are not very detailed.
How many bookies are you using? Are any of the bookies showing disk
errors? What does iostat say on the bookies and on the namenode?

It does look like the editlog is the culprit here. However, it's not
clear that it's BK. If BK is the shared edits, it should be second in
the list of journals. From the sync times, the second journal seems to
be performing fine.

-Ivan
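For readers following the thread: the "SyncTimes(ms)" figures that FSEditLog logs are cumulative per-journal sync times, printed in the order the journals are configured, which is how Ivan reads the second number as the second journal. A small sketch of that layout (hypothetical rendering, not the actual FSEditLog code):

    public class SyncTimesSketch {
        public static void main(String[] args) {
            // One cumulative figure per configured journal, in order; here
            // journal 0 might be the local edits dir and journal 1 the
            // shared edits (e.g. BookKeeper).
            long[] syncTimeMs = { 34553, 312 };
            StringBuilder sb = new StringBuilder("Number of syncs: 51 SyncTimes(ms):");
            for (long t : syncTimeMs) {
                sb.append(' ').append(t);
            }
            // Prints: Number of syncs: 51 SyncTimes(ms): 34553 312
            System.out.println(sb);
        }
    }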


[jira] [Created] (HDFS-4108) In a secure cluster, in the HDFS web UI, clicking on a datanode in the node list gives an error

2012-10-24 Thread Benoy Antony (JIRA)
Benoy Antony created HDFS-4108:
--

 Summary: In a secure cluster, in the HDFS web UI, clicking on a 
datanode in the node list gives an error
 Key: HDFS-4108
 URL: https://issues.apache.org/jira/browse/HDFS-4108
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security, webhdfs
Affects Versions: 1.1.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor


This issue happens in a secure cluster.

To reproduce:

Go to the NameNode web UI (dfshealth.jsp).
Click to bring up the list of live nodes (dfsnodelist.jsp).
Click on a datanode to bring up the filesystem web page (browsedirectory.jsp).

The page containing the directory listing does not come up.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4091) Add snapshot quota to limit the number of snapshots

2012-10-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4091.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

Suresh, thanks for the review.

I have committed this.

> Add snapshot quota to limit the number of snapshots
> ---
>
> Key: HDFS-4091
> URL: https://issues.apache.org/jira/browse/HDFS-4091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>  Labels: needs-test
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: h4091_20121021.patch
>
>
> Once a directory has been set to snapshottable, users could create snapshots 
> of the directories.  In this JIRA, we add a quota to snapshottable 
> directories.  The quota is set by admin in order to limit the number of 
> snapshots allowed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: need some help understanding NN quorum edit log

2012-10-24 Thread Sujee Maniyam
Hi Todd,
thanks!

So
FSEditLog.java ::  createJournal(URI uri)
is where a specific journal is chosen.
Config parameters are : dfs.namenode.shared.edits.dir  and
dfs.namenode.edits.journal-plugin

correct?

regards
Sujee

http://sujee.net


On Tue, Oct 23, 2012 at 4:49 PM, Todd Lipcon  wrote:
> Hi Sujee,
>
> QuorumJournalManager implements the JournalManager interface. The javadoc
> on JournalManager may help you understand. Also, I would recommend
> understanding the local-disk implementation (FileJournalManager) before
> digging too deep into the QuorumJournalManager, which is a bit more complex.
>
> -Todd
>
> On Tue, Oct 23, 2012 at 3:31 PM, Sujee Maniyam  wrote:
>
>> Hi devs,
>> I am reading through the HA code, trying to understand how the
>> editlogs propagate.
>>
>> I have looked at the following classes
>> EditLogTailer
>> JournalNode
>> QuorumJournalManager
>> NameNode
>> FSNameSystem
>>
>> I see FSNameSystem is using the FSImage editlog. Where is the tie-in
>> between the NN and the QuorumJournal?
>>
>> thanks
>> Sujee Maniyam
>>
>> http://sujee.net
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera


Re: need some help understanding NN quorum edit log

2012-10-24 Thread Todd Lipcon
Yep, that's right.

-Todd

On Wed, Oct 24, 2012 at 4:58 PM, Sujee Maniyam  wrote:

> Hi Todd,
> thanks!
>
> So
> FSEditLog.java ::  createJournal(URI uri)
> is where a specific journal is chosen.
> Config parameters are : dfs.namenode.shared.edits.dir  and
> dfs.namenode.edits.journal-plugin
>
> correct?
>
> regards
> Sujee
>
> http://sujee.net
>
>
> On Tue, Oct 23, 2012 at 4:49 PM, Todd Lipcon  wrote:
> > Hi Sujee,
> >
> > QuorumJournalManager implements the JournalManager interface. The javadoc
> > on JournalManager may help you understand. Also, I would recommend
> > understanding the local-disk implementation (FileJournalManager) before
> > digging too deep into the QuorumJournalManager, which is a bit more
> complex.
> >
> > -Todd
> >
> > On Tue, Oct 23, 2012 at 3:31 PM, Sujee Maniyam  wrote:
> >
> >> Hi devs,
> >> I am reading through the HA code, trying to understand how the
> >> editlogs propagate.
> >>
> >> I have looked at the following classes
> >> EditLogTailer
> >> JournalNode
> >> QuorumJournalManager
> >> NameNode
> >> FSNameSystem
> >>
> >> I see FSNameSystem is using the FSImage editlog. Where is the tie-in
> >> between the NN and the QuorumJournal?
> >>
> >> thanks
> >> Sujee Maniyam
> >>
> >> http://sujee.net
> >>
> >
> >
> >
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
>



-- 
Todd Lipcon
Software Engineer, Cloudera
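To make the tie-in concrete, here is a rough sketch of the lookup Sujee describes: FSEditLog.createJournal(URI) resolves a JournalManager implementation from the URI scheme of dfs.namenode.shared.edits.dir via the dfs.namenode.edits.journal-plugin.<scheme> key. This is simplified and paraphrased, not the actual FSEditLog code:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;

    public class JournalPluginSketch {
        // Map a shared-edits URI to the JournalManager class named by
        // dfs.namenode.edits.journal-plugin.<scheme>; e.g. a qjournal://
        // URI selects the quorum journal implementation.
        static Class<?> journalClassFor(Configuration conf, URI uri)
                throws ClassNotFoundException {
            String scheme = uri.getScheme(); // e.g. "qjournal"
            String key = "dfs.namenode.edits.journal-plugin." + scheme;
            String className = conf.get(key);
            if (className == null) {
                throw new IllegalArgumentException(
                    "No journal plugin configured for " + key);
            }
            return Class.forName(className);
        }
    }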


[jira] [Created] (HDFS-4109) ?Formatting HDFS running into errors :( - Many thanks

2012-10-24 Thread ajames (JIRA)
ajames created HDFS-4109:


 Summary: ?Formatting HDFS running into errors :( - Many thanks 
 Key: HDFS-4109
 URL: https://issues.apache.org/jira/browse/HDFS-4109
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.20.2
 Environment: Windows 7
Cygwin installed
downloaded hadoop-0.20.2.tar (apparently works best with Win 7?)
Reporter: ajames


Hi,

I am trying to format the Hadoop file system with:

bin/hadoop namenode -format

But I received this error in Cygwin:

/home/anjames/bin/../conf/hadoop-env.sh: line 8: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 14: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 17: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 25: $'\r': command not found
/bin/java; No  such file or directoryjre7
/bin/java; No  such file or directoryjre7
/bin/java; cannot execute: No such file or directory

I had previously modified the following conf files in the cygwin/home/anjames 
directory:
1. core-site.xml 
2. mapred-site.xml 
3. hdfs-site.xml 

4. hadoop-env.sh

-I updated this file using the instructions: "uncomment the JAVA_HOME export 
command, and set the path to your Java home (typically C:/Program 
Files/Java/{java-home})"

i.e. in the "hadoop-env.sh" file, I took out the "#" in front of the JAVA_HOME 
line and changed the path as follows:

export JAVA_HOME=C:\Progra~1\Java\jre7



The hadoop-env.sh file is now:



# Set Hadoop-specific environment variables here.


# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.


# The java implementation to use.  
export JAVA_HOME=C:\Progra~1\Java\jre7 ###<-uncommented and revised code

# Extra Java CLASSPATH elements.  Optional.

# export HADOOP_CLASSPATH=


# The maximum amount of heap to use, in MB. Default is 1000.

# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server


# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote 
$HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote 
$HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote 
$HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote 
$HADOOP_BALANCER_OPTS"
export JAVA_HOME=C:\Progra~1\Java\jre7
HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10


--

I'm trying to get back into the programming swing with a Big Data Analytics 
course, so any help is much appreciated - it's been a while. Many thanks. 



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4097) provide CLI support for createSnapshot

2012-10-24 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4097.
---

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

+1 for the new patch. Committed the patch.

Thank you Brandon.

> provide CLI support for createSnapshot
> --
>
> Key: HDFS-4097
> URL: https://issues.apache.org/jira/browse/HDFS-4097
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs client, name-node
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Brandon Li
>Assignee: Brandon Li
>  Labels: needs-test
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4097.patch, HDFS-4097.patch, HDFS-4097.patch, 
> HDFS-4097.patch, HDFS-4097.patch
>
>
> provide CLI support for create/delete/list snapshots

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4109) ?Formatting HDFS running into errors :( - Many thanks

2012-10-24 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4109.
---

Resolution: Invalid

> ?Formatting HDFS running into errors :( - Many thanks 
> --
>
> Key: HDFS-4109
> URL: https://issues.apache.org/jira/browse/HDFS-4109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.20.2
> Environment: Windows 7
> Cygwin installed
> downloaded hadoop-0.20.2.tar (apparently works best with Win 7?)
>Reporter: ajames
>  Labels: file, format, help, system
>
> Hi,
> I am trying to format the Hadoop file system with:
> bin/hadoop namenode -format
> But I received this error in Cygwin:
> /home/anjames/bin/../conf/hadoop-env.sh: line 8: $'\r': command not found
> /home/anjames/bin/../conf/hadoop-env.sh: line 14: $'\r': command not found
> /home/anjames/bin/../conf/hadoop-env.sh: line 17: $'\r': command not found
> /home/anjames/bin/../conf/hadoop-env.sh: line 25: $'\r': command not found
> /bin/java; No  such file or directoryjre7
> /bin/java; No  such file or directoryjre7
> /bin/java; cannot execute: No such file or directory
> I had previously modified the following conf files in the cygwin/home/anjames 
> directory:
> 1. core-site.xml 
> 2. mapred-site.xml 
> 3. hdfs-site.xml 
> 4. hadoop-env.sh
> -I updated this file using the instructions: "uncomment the JAVA_HOME export 
> command, and set the path to your Java home (typically C:/Program 
> Files/Java/{java-home})"
> i.e. in the "hadoop-env.sh" file, I took out the "#" in front of the JAVA_HOME 
> line and changed the path as follows:
> export JAVA_HOME=C:\Progra~1\Java\jre7
> The hadoop-env.sh file is now:
> 
> # Set Hadoop-specific environment variables here.
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
> # The java implementation to use.  
> export JAVA_HOME=C:\Progra~1\Java\jre7 ###<-uncommented and revised code
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
> # The maximum amount of heap to use, in MB. Default is 1000.
> # export HADOOP_HEAPSIZE=2000
> # Extra Java runtime options.  Empty by default.
> # export HADOOP_OPTS=-server
> # Command specific options appended to HADOOP_OPTS when specified
> export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote 
> $HADOOP_NAMENODE_OPTS"
> export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote 
> $HADOOP_SECONDARYNAMENODE_OPTS"
> export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote 
> $HADOOP_DATANODE_OPTS"
> export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote 
> $HADOOP_BALANCER_OPTS"
> export JAVA_HOME=C:\Progra~1\Java\jre7
> HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote 
> $HADOOP_JOBTRACKER_OPTS"
> # export HADOOP_TASKTRACKER_OPTS=
> # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> # export HADOOP_CLIENT_OPTS
> # Extra ssh options.  Empty by default.
> # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
> # Where log files are stored.  $HADOOP_HOME/logs by default.
> # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
> # File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
> # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
> # host:path where hadoop code should be rsync'd from.  Unset by default.
> # export HADOOP_MASTER=master:/home/$USER/src/hadoop
> # Seconds to sleep between slave commands.  Unset by default.  This
> # can be useful in large clusters, where, e.g., slave rsyncs can
> # otherwise arrive faster than the master can service them.
> # export HADOOP_SLAVE_SLEEP=0.1
> # The directory where pid files are stored. /tmp by default.
> # export HADOOP_PID_DIR=/var/hadoop/pids
> # A string representing this instance of hadoop. $USER by default.
> # export HADOOP_IDENT_STRING=$USER
> # The scheduling priority for daemon processes.  See 'man nice'.
> # export HADOOP_NICENESS=10
> --
> I'm trying to get back into the programming swing with a Big Data Analytics 
> course, so any help is much appreciated - it's been a while. Many thanks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4110) Refine JNStorage log

2012-10-24 Thread liang xie (JIRA)
liang xie created HDFS-4110:
---

 Summary: Refine JNStorage log
 Key: HDFS-4110
 URL: https://issues.apache.org/jira/browse/HDFS-4110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: liang xie
Priority: Minor


Abstract class Storage has a toString method: 
{quote}
return "Storage Directory " + this.root;
{quote}

and in the subclass JNStorage we could see:
{quote}
LOG.info("Formatting journal storage directory " + sd + " with nsid: " + getNamespaceID());
{quote}

that will print something like "Formatting journal storage directory Storage 
Directory x"

Just a one-line change:
{quote}
LOG.info("Formatting journal " + sd + " with nsid: " + getNamespaceID());
{quote}
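A tiny self-contained sketch of the duplication being described, paraphrasing the two snippets quoted above (the path and nsid are hypothetical):

    public class LogSketch {
        static class StorageDirectory {
            final String root = "/data/jn"; // hypothetical path

            @Override
            public String toString() {
                return "Storage Directory " + root;
            }
        }

        public static void main(String[] args) {
            StorageDirectory sd = new StorageDirectory();
            int nsid = 42; // hypothetical namespace id
            // Current message: toString() already contributes
            // "Storage Directory", so the phrase is doubled.
            System.out.println("Formatting journal storage directory "
                + sd + " with nsid: " + nsid);
            // Proposed message from this JIRA:
            System.out.println("Formatting journal " + sd
                + " with nsid: " + nsid);
        }
    }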

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira