Hadoop-Hdfs-trunk - Build # 1380 - Still Failing

2013-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1380/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14318 lines...]
java.lang.AssertionError: SBN should have still been checkpointing.
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testStandbyExceptionThrownDuringCheckpoint(TestStandbyCheckpoints.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.666 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.074 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.444 sec

Results :

Failed tests:   testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints): SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [1:29:32.777s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [2:18.359s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [59.367s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:32:51.274s
[INFO] Finished at: Mon Apr 22 13:06:36 UTC 2013
[INFO] Final Memory: 64M/992M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #1380

2013-04-22 Thread Apache Jenkins Server
See 

--
[...truncated 14125 lines...]

Cannot communicate

2013-04-22 Thread Kevin Burton
I am relatively new to Hadoop and am working through a Manning publication, 
"Hadoop in Action". One of the first programs in the book (page 44) gives me a 
Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC version 7 
cannot communicate with client version 3.

My Hadoop distribution is CDH4. The Java Maven project takes its dependency 
from Apache. The exception comes from a line involving the "Configuration" 
class.

Any idea on how to avoid this exception?

Re: Cannot communicate

2013-04-22 Thread Ted Yu
The exception was due to incompatible RPC versions between the Apache Maven
artifacts and CDH4.

I suggest you build the project with the same Hadoop version as your cluster.
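
For reference, a minimal sketch (the class name below is just illustrative) that
prints which Hadoop client version is actually on the classpath, so it can be
compared against the version the cluster runs:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
  public static void main(String[] args) {
    // VersionInfo reports the version of the Hadoop jars found on the classpath.
    System.out.println("Hadoop client version: " + VersionInfo.getVersion());
  }
}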

On Mon, Apr 22, 2013 at 7:50 AM, Kevin Burton wrote:

> I am relatively new to Hadoop and am working through a Manning publication
> "Hadoop in Action". One of the first program in the book (page 44) gives me
> a Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC version
> 7 cannot communicate with client version 3.
>
> My Hadoop distribution is CDH4. The Java Maven project takes its
> dependency from Apache. The exception comes from a line involving the
> "Configuration" class.
>
> Any idea on how to avoid this exception?


Re: Cannot communicate

2013-04-22 Thread Kevin Burton
What dependency for the Maven project should I use?

On Apr 22, 2013, at 10:02 AM, Ted Yu  wrote:

> The exception was due to incompatible RPC versions between Apache maven
> artifacts and CDH4.
> 
> I suggest you build the project with same hadoop version as in your cluster.
> 
> On Mon, Apr 22, 2013 at 7:50 AM, Kevin Burton wrote:
> 
>> I am relatively new to Hadoop and am working through a Manning publication
>> "Hadoop in Action". One of the first program in the book (page 44) gives me
>> a Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC version
>> 7 cannot communicate with client version 3.
>> 
>> My Hadoop distribution is CDH4. The Java Maven project takes its
>> dependency from Apache. The exception comes from a line involving the
>> "Configuration" class.
>> 
>> Any idea on how to avoid this exception?


A Look at HDFS 4721

2013-04-22 Thread Varun Sharma
Hi,

Can someone please take a look at HDFS 4721 ?

Thanks
Varun


[jira] [Created] (HDFS-4724) Provide API for checking whether lease is recovered or not

2013-04-22 Thread Ted Yu (JIRA)
Ted Yu created HDFS-4724:


 Summary: Provide API for checking whether lease is recovered or not
 Key: HDFS-4724
 URL: https://issues.apache.org/jira/browse/HDFS-4724
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu


recoverLease() returns a boolean indicating whether the file has been successfully 
finalized and closed. In case false is returned, the client should use another API 
to query whether the lease is recovered or not.

The necessity for this new API stems from the fact that recoverLease() 
unconditionally enqueues a block for recovery. So if the client calls 
recoverLease() repeatedly, previous recovery attempts would be preempted.

See HBASE-8389 and HDFS-4721 for such a scenario.
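
A minimal sketch of the polling pattern this issue is asking to enable, assuming a
query-only call such as DistributedFileSystem#isFileClosed() (the kind of API
HDFS-4525 provides); the helper class below is illustrative, not an actual patch:

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryWaiter {
  // Trigger lease recovery once, then poll with a query-only API instead of
  // calling recoverLease() again and preempting the in-flight recovery.
  public static void waitForLeaseRecovery(DistributedFileSystem dfs, Path file)
      throws Exception {
    boolean closed = dfs.recoverLease(file);
    while (!closed) {
      Thread.sleep(1000);               // back off between checks
      closed = dfs.isFileClosed(file);  // assumed query API (see HDFS-4525)
    }
  }
}
{code}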

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4725) fix HDFS file handle leaks

2013-04-22 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4725:
---

 Summary: fix HDFS file handle leaks
 Key: HDFS-4725
 URL: https://issues.apache.org/jira/browse/HDFS-4725
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test, tools
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


The scope of this issue is to fix multiple file handle leaks observed from 
recent HDFS test runs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Cannot communicate

2013-04-22 Thread rkevinburton
I was able to add the appropriate Maven dependencies and it "works". I 
have one last question on this thread. With the added dependencies I am 
getting the warning:


13/04/22 11:53:18 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes 
where applicable

What does this mean? Can it be avoided?

Thanks again.


On Mon, Apr 22, 2013 at 11:17 AM, Kevin Burton wrote:


What dependency for the Maven project should I use?

On Apr 22, 2013, at 10:02 AM, Ted Yu  wrote:

The exception was due to incompatible RPC versions between Apache 
maven

artifacts and CDH4.

I suggest you build the project with same hadoop version as in your 
cluster.


On Mon, Apr 22, 2013 at 7:50 AM, Kevin Burton 
wrote:


I am relatively new to Hadoop and am working through a Manning 
publication
"Hadoop in Action". One of the first program in the book (page 44) 
gives me
a Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC 
version

7 cannot communicate with client version 3.

My Hadoop distribution is CDH4. The Java Maven project takes its
dependency from Apache. The exception comes from a line involving 
the

"Configuration" class.

Any idea on how to avoid this exception?


Re: Cannot communicate

2013-04-22 Thread Vinod Kumar Vavilapalli

It means what it says: that hadoop native library isn't available for some 
reason. See http://hadoop.apache.org/docs/stable/native_libraries.html
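
If it helps, here is a small sketch (class name is illustrative) for checking
programmatically whether the native library was picked up; when it was not,
Hadoop simply falls back to the built-in Java implementations:

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLibraryCheck {
  public static void main(String[] args) {
    // True only if libhadoop was found and loaded from java.library.path.
    System.out.println("native-hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
  }
}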

Thanks,
+Vinod Kumar Vavilapalli
Hortonworks Inc.
http://hortonworks.com/

On Apr 22, 2013, at 9:58 AM, rkevinbur...@charter.net wrote:

> I was able to add the appropriate Maven dependencies and it "works". I have 
> one last question on this thread. With the added dependencies I am getting 
> the warning:
> 
> 13/04/22 11:53:18 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> What does this mean? Can it be avoided?
> 
> Thanks again.
> 
> 
> On Mon, Apr 22, 2013 at 11:17 AM, Kevin Burton wrote:
> 
>> What dependency for the Maven project should I use?
>> 
>> On Apr 22, 2013, at 10:02 AM, Ted Yu  wrote:
>> 
>>> The exception was due to incompatible RPC versions between Apache maven
>>> artifacts and CDH4.
>>> 
>>> I suggest you build the project with same hadoop version as in your cluster.
>>> 
>>> On Mon, Apr 22, 2013 at 7:50 AM, Kevin Burton 
>>> wrote:
>>> 
 I am relatively new to Hadoop and am working through a Manning publication
 "Hadoop in Action". One of the first program in the book (page 44) gives me
 a Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC version
 7 cannot communicate with client version 3.
 
 My Hadoop distribution is CDH4. The Java Maven project takes its
 dependency from Apache. The exception comes from a line involving the
 "Configuration" class.
 
 Any idea on how to avoid this exception?



Re: Why failed to use Distcp over FTP protocol?

2013-04-22 Thread Daryn Sharp
I believe it should work…  What error message did you receive?

Daryn
 
On Apr 22, 2013, at 3:45 AM, sam liu wrote:

> Hi Experts,
> 
> I failed to execute following command, does not Distcp support FTP protocol?
> 
> hadoop distcp ftp://hadoopadm:@ftphostname/tmp/file1.txt
> hdfs:///tmp/file1.txt
> 
> Thanks!



Re: Cannot communicate

2013-04-22 Thread rkevinburton


I am on an Ubuntu server. When I go to the link you provided there is a 
hyperlink for Ubuntu, but it seems to point to the main site. I tried 
searching for "hadoop native" but didn't get any useful results. Is there 
some other package that I should install using apt-get?


On Mon, Apr 22, 2013 at 12:02 PM, Vinod Kumar Vavilapalli wrote:

It means what it says: that hadoop native library isn't available for 
some reason. See http://hadoop.apache.org/docs/stable/native_libraries.html



Thanks,
+Vinod Kumar Vavilapalli
Hortonworks Inc.
http://hortonworks.com/    



On Apr 22, 2013, at 9:58 AM, rkevinbur...@charter.net wrote:


I was able to add the appropriate Maven dependencies and it "works". I 
have one last question on this thread. With the added dependencies I 
am getting the warning:


13/04/22 11:53:18 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes 
where applicable

What does this mean? Can it be avoided?

Thanks again.


On Mon, Apr 22, 2013 at 11:17 AM, Kevin Burton wrote:


What dependency for the Maven project should I use?

On Apr 22, 2013, at 10:02 AM, Ted Yu wrote:


The exception was due to incompatible RPC versions between Apache 
maven

artifacts and CDH4.

I suggest you build the project with same hadoop version as in your 
cluster.


On Mon, Apr 22, 2013 at 7:50 AM, Kevin Burton wrote:


I am relatively new to Hadoop and am working through a Manning 
publication
"Hadoop in Action". One of the first program in the book (page 44) 
gives me
a Java exception: org.apache.hadoop.ipc.RemoteException: Server IPC 
version

7 cannot communicate with client version 3.

My Hadoop distribution is CDH4. The Java Maven project takes its
dependency from Apache. The exception comes from a line involving 
the

"Configuration" class.

Any idea on how to avoid this exception?


Testing online one class

2013-04-22 Thread Mohammad Mustaqeem
I have seen the test folder in trunk. How do I use this test code?
For example, I want to test only TestReplicationPolicy. How do I run it?
-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270


Re: Testing online one class

2013-04-22 Thread Ted Yu
You can use the following command:

mvn test -Dtest=TestReplicationPolicy

Cheers

On Mon, Apr 22, 2013 at 10:47 AM, Mohammad Mustaqeem <3m.mustaq...@gmail.com
> wrote:

> I have seen the test folder in trunk. How to use these test code.
> Like I want to test only TestReplicationPolicy. How to run this code?
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>


Re: Hadoop Streaming job error - Need help urgent

2013-04-22 Thread Chris Nauroth
(Moving to user list, hdfs-dev bcc'd.)

Hi Prithvi,

From a quick scan, it looks to me like one of your commands ends up using
"input_path" as a string literal instead of replacing with the value of the
input_path variable.  I've pasted the command below.  Notice that one of
the -file options used "input_path" instead of "$input_path".

Is that the problem?

Hope this helps,
--Chris



$hadoop_bin --config $hadoop_config jar $hadoop_streaming -D
mapred.task.timeout=0 -D
mapred.job.name="BC_N$((num_of_node))_M$((num_of_mapper))"
-D mapred.reduce.tasks=$num_of_reducer -input
input_BC_N$((num_of_node))_M$((num_of_mapper))
-output $output_path -file brandes_mapper -file src/mslab/BC_reducer.py
-file src/mslab/MapReduceUtil.py -file input_path -mapper "./brandes_mapper
$input_path $num_of_node" -reducer "./BC_reducer.py"



On Mon, Apr 22, 2013 at 10:11 AM, prithvi dammalapati <
d.prithvi...@gmail.com> wrote:

> I have the following hadoop code to find the betweenness centrality of a
> graph
>
> java_home=/usr/lib/jvm/java-1.7.0-openjdk-amd64
> hadoop_home=/usr/local/hadoop/hadoop-1.0.4
> hadoop_lib=$hadoop_home/hadoop-core-1.0.4.jar
> hadoop_bin=$hadoop_home/bin/hadoop
> hadoop_config=$hadoop_home/conf
>
> hadoop_streaming=$hadoop_home/contrib/streaming/hadoop-streaming-1.0.4.jar
> #task specific parameters
> source_code=BetweennessCentrality.java
> jar_file=BetweennessCentrality.jar
> main_class=mslab.BetweennessCentrality
> num_of_node=38012
> num_of_mapper=100
> num_of_reducer=8
> input_path=/data/dblp_author_conf_adj.txt
> output_path=dblp_bc_N$(($num_of_node))_M$((num_of_mapper))
> rm build -rf
> mkdir build
> $java_home/bin/javac -d build -classpath .:$hadoop_lib
> src/mslab/$source_code
> rm $jar_file -f
> $java_home/bin/jar -cf $jar_file -C build/ .
> $hadoop_bin --config $hadoop_config fs -rmr $output_path
> $hadoop_bin --config $hadoop_config jar $jar_file $main_class
> $num_of_node   $num_of_mapper
>
> rm brandes_mapper
>
> g++ src/mslab/mapred_brandes.cpp -O3 -o brandes_mapper
> $hadoop_bin --config $hadoop_config jar $hadoop_streaming -D
> mapred.task.timeout=0 -D 
> mapred.job.name="BC_N$((num_of_node))_M$((num_of_mapper))"
> -D mapred.reduce.tasks=$num_of_reducer -input
> input_BC_N$((num_of_node))_M$((num_of_mapper)) -output $output_path -file
> brandes_mapper -file src/mslab/BC_reducer.py -file
> src/mslab/MapReduceUtil.py -file input_path -mapper "./brandes_mapper
> $input_path $num_of_node" -reducer "./BC_reducer.py"
>
> When I run this code in a shell script, i get the following errors:
>
> Warning: $HADOOP_HOME is deprecated.
> File: /home/hduser/Downloads/mgmf/trunk/input_path does not exist, or
> is not readable.
> Streaming Command Failed!
>
> but the file exits at the specified path
>
> /Downloads/mgmf/trunk/data$ ls
> dblp_author_conf_adj.txt
>
> I have also added the input file into HDFS using
>
> /usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /source /destination
>
> Can someone help me solve this problem?
>
>
> Any help is appreciated,
> Thanks
> Prithvi
>


[jira] [Resolved] (HDFS-4708) Add snapshot user guide

2013-04-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4708.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)
 Hadoop Flags: Reviewed

Thanks Suresh for reviewing this.

I have committed this.

> Add snapshot user guide
> ---
>
> Key: HDFS-4708
> URL: https://issues.apache.org/jira/browse/HDFS-4708
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: h4708_20130420.html, h4708_20130420.patch, 
> h4708_20130422.patch
>
>
> The guide should include the snapshot semantics, snapshot API, and snapshot 
> commands.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4724) Provide API for checking whether lease is recovered or not

2013-04-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HDFS-4724.
--

Resolution: Duplicate

Duplicate of HDFS-4525

> Provide API for checking whether lease is recovered or not
> --
>
> Key: HDFS-4724
> URL: https://issues.apache.org/jira/browse/HDFS-4724
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> recoverLease() returns a boolean indicating whether file has been 
> successfully finalized and closed. In case false is returned, client should 
> use another API to query whether lease is recovered or not.
> Necessity for this new API stems from the fact that recoverLease() 
> unconditionally enqueues a block for recovery. So if client calls 
> recoverLease() continuously, previous recovery attempts would be preempted.
> See HBASE-8389 and HDFS-4721 for such scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4726) Fix test failures after merging the mapping from INodeId to INode

2013-04-22 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4726:
---

 Summary: Fix test failures after merging the mapping from INodeId 
to INode
 Key: HDFS-4726
 URL: https://issues.apache.org/jira/browse/HDFS-4726
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4727) Update inodeMap after deleting files/directories/snapshots

2013-04-22 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4727:
---

 Summary: Update inodeMap after deleting files/directories/snapshots
 Key: HDFS-4727
 URL: https://issues.apache.org/jira/browse/HDFS-4727
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


This whole process is similar to updating the blocksMap: while deleting 
files/directories/snapshots, we collect inodes that will no longer exist and 
update the inodeMap accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4726) Fix test failures after merging the mapping from INodeId to INode

2013-04-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4726.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> Fix test failures after merging the mapping from INodeId to INode
> -
>
> Key: HDFS-4726
> URL: https://issues.apache.org/jira/browse/HDFS-4726
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4726.001.patch
>
>
> We have several test failures after merging HDFS-4434 from trunk. This jira 
> is created to fix the test failures.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: A Look at HDFS 4721

2013-04-22 Thread Varun Sharma
Sorry to check in again: could a committer please take a look at

https://issues.apache.org/jira/browse/HDFS-4721

Thanks !


On Mon, Apr 22, 2013 at 9:37 AM, Varun Sharma  wrote:

> Hi,
>
> Can someone please take a look at HDFS 4721 ?
>
> Thanks
> Varun
>


[jira] [Created] (HDFS-4728) Snapshot tests broken after merge from trunk

2013-04-22 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4728:
---

 Summary: Snapshot tests broken after merge from trunk
 Key: HDFS-4728
 URL: https://issues.apache.org/jira/browse/HDFS-4728
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


Snapshot tests are broken after the recent merge from trunk. The likely cause 
of regression is the change to INodeDirectory.clearChildren which resets the 
children list while replacing an INodeDirectory with 
INodeDirectorySnapshottable. Testing a fix.

{code}
  public void clearChildren() {
if (children != null) {
  this.children.clear();
  this.children = null;
}
  }
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4728) Snapshot tests broken after merge from trunk

2013-04-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-4728.
-

Resolution: Duplicate

Too slow - dup'ed to HDFS-4726.

> Snapshot tests broken after merge from trunk
> 
>
> Key: HDFS-4728
> URL: https://issues.apache.org/jira/browse/HDFS-4728
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: Snapshot (HDFS-2802)
>
>
> Snapshot tests are broken after the recent merge from trunk. The likely cause 
> of regression is the change to INodeDirectory.clearChildren which resets the 
> children list while replacing an INodeDirectory with 
> INodeDirectorySnapshottable. Testing a fix.
> {code}
>   public void clearChildren() {
> if (children != null) {
>   this.children.clear();
>   this.children = null;
> }
>   }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4727) Update inodeMap after deleting files/directories/snapshots

2013-04-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4727.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

> Update inodeMap after deleting files/directories/snapshots
> --
>
> Key: HDFS-4727
> URL: https://issues.apache.org/jira/browse/HDFS-4727
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4727.001.patch
>
>
> This whole process is similar to updating the blocksMap: while deleting 
> files/directories/snapshots, we collect inodes that will no longer exist and 
> update the inodeMap accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4719) Minor simplifications to snapshot code

2013-04-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4719.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

I have committed this.  Thanks, Arpit!

> Minor simplifications to snapshot code
> --
>
> Key: HDFS-4719
> URL: https://issues.apache.org/jira/browse/HDFS-4719
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4719.002.patch, HDFS-4719.003.patch, 
> HDFS-4719.004.patch
>
>
> Remove couple of unused snapshot functions and factor away the factory 
> classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4729) Update OfflineImageViewer and fix TestOfflineImageViewer

2013-04-22 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4729:
---

 Summary: Update OfflineImageViewer and fix TestOfflineImageViewer
 Key: HDFS-4729
 URL: https://issues.apache.org/jira/browse/HDFS-4729
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


The format of FSImage is updated after supporting rename with snapshots. We 
need to update OfflineImageViewer accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Why failed to use Distcp over FTP protocol?

2013-04-22 Thread sam liu
I encountered IOException and FileNotFoundException:

13/04/17 17:11:10 INFO mapred.JobClient: Task Id :
attempt_201304160910_2135_m_00_0, Status : FAILED
java.io.IOException: The temporary job-output directory
ftp://hadoopadm:@ftphostname/tmp/_distcp_logs_i74spu/_temporary doesn't exist!
at org.apache.hadoop.mapred.FileOutputCommitter.getWorkPath(FileOutputCommitter.java:250)
at org.apache.hadoop.mapred.FileOutputFormat.getTaskOutputPath(FileOutputFormat.java:244)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:116)
at org.apache.hadoop.mapred.MapTask$DirectMapOutputCollector.(MapTask.java:820)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(AccessController.java:310)
at javax.security.auth.Subject.doAs(Subject.java:573)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1144)
at org.apache.hadoop.mapred.Child.main(Child.java:249)


... ...

13/04/17 17:11:42 INFO mapred.JobClient: Job complete: job_201304160910_2135
13/04/17 17:11:42 INFO mapred.JobClient: Counters: 6
13/04/17 17:11:42 INFO mapred.JobClient:   Job Counters
13/04/17 17:11:42 INFO mapred.JobClient: Failed map tasks=1
13/04/17 17:11:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=33785
13/04/17 17:11:42 INFO mapred.JobClient: Launched map tasks=4
13/04/17 17:11:42 INFO mapred.JobClient: Total time spent by all
reduces waiting after reserving slots (ms)=0
13/04/17 17:11:42 INFO mapred.JobClient: Total time spent by all maps
waiting after reserving slots (ms)=0
13/04/17 17:11:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=6436
13/04/17 17:11:42 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201304160910_2135_m_00
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.FileNotFoundException: File ftp://hadoopadm:@ftphostname/tmp/_distcp_tmp_i74spu does not exist.
at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:419)
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:302)
at org.apache.hadoop.fs.ftp.FTPFileSystem.delete(FTPFileSystem.java:279)
at org.apache.hadoop.tools.DistCp.fullyDelete(DistCp.java:963)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:672)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)


[jira] [Created] (HDFS-4730) KeyManagerFactory.getInstance supports SunX509 &ibmX509 in HsftpFileSystem.java

2013-04-22 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4730:


 Summary: KeyManagerFactory.getInstance supports SunX509 &ibmX509 
in HsftpFileSystem.java
 Key: HDFS-4730
 URL: https://issues.apache.org/jira/browse/HDFS-4730
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tian Hong Wang
Assignee: Tian Hong Wang


In IBM Java, the algorithm name is ibmX509 rather than SunX509, so use 
SSLFactory.SSLCERTIFICATE to load the algorithm name dynamically instead of 
hard-coding it.
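
A sketch of the general idea (not the actual patch): rather than hard-coding
"SunX509", which only the Sun/Oracle provider understands, ask the JDK for the
provider's default key manager algorithm:

{code}
import javax.net.ssl.KeyManagerFactory;
import java.security.NoSuchAlgorithmException;

public class KeyManagerFactoryExample {
  public static KeyManagerFactory newKeyManagerFactory() throws NoSuchAlgorithmException {
    // getDefaultAlgorithm() returns "SunX509" on Oracle/OpenJDK and the
    // IBM-specific name (ibmX509) on IBM Java, so the same code runs on both JVMs.
    return KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
  }
}
{code}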

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Testing online one class

2013-04-22 Thread Mohammad Mustaqeem
Will the output of this test be shown on the terminal?


On Mon, Apr 22, 2013 at 11:22 PM, Ted Yu  wrote:

> You can use the following command:
>
> mvn test -Dtest=TestReplicationPolicy
>
> Cheers
>
> On Mon, Apr 22, 2013 at 10:47 AM, Mohammad Mustaqeem <
> 3m.mustaq...@gmail.com
> > wrote:
>
> > I have seen the test folder in trunk. How to use these test code.
> > Like I want to test only TestReplicationPolicy. How to run this code?
> > --
> > *With regards ---*
> > *Mohammad Mustaqeem*,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
>



-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270


For release 2.0.X, about when will there be a stable release?

2013-04-22 Thread sam liu
Hi,

The current release of 2.0.X is 2.0.3-alpha; about when will there be a
stable release?

Sam Liu

Thanks!


Encountered 'error: possibly undefined macro: AC_PROG_LIBTOOL' when building Hadoop project in SUSE 11 (x86_64)

2013-04-22 Thread sam liu
Hi Experts,

I failed to build the Hadoop 1.1.1 source code project in SUSE 11 (x86_64), and
encountered an issue:

 [exec] configure.ac:48: error: possibly undefined macro:
AC_PROG_LIBTOOL
 [exec]   If this token and others are legitimate, please use
m4_pattern_allow.
 [exec]   See the Autoconf documentation.
 [exec] autoreconf: /usr/local/bin/autoconf failed with exit status: 1

Even after installing libtool.x86_64 2.2.6b-13.16.1 on it, the issue still
exists.

Does anyone know about this issue?

Thanks!

Sam Liu