Re: HDFS Client under Windows

2009-10-09 Thread Jeff Hammerbacher
Hey Tobi,
Some folks from the Condor team at Wisconsin claimed to have this working
(see https://issues.apache.org/jira/browse/HDFS-573), but never posted a
patch. Perhaps you could contact them and see if they have the patch ready
for public consumption?

Regards,
Jeff

On Thu, Oct 8, 2009 at 7:50 PM, Tobias N. Sasse  wrote:

> Hey Guys,
>
> I am wondering whether the HDFS client libs will run under Windows. I saw
> that the source code uses some shell commands to determine which user is
> running the client, etc.
>
> The use case is a Java app on Windows (XP, Server 2003 and upwards) that
> reads files via the FileSystem API.
>
> If not, are there plans to implement that?
>
> Thanks,
> Tobi
>


Re: HDFS Client under Windows

2009-10-09 Thread Tobias N. Sasse

Thanks for the info!

I have to work with the stable release, so I need to know whether this is 
possible with the current version (and, if so, which tweaks are necessary).


What I figured out is that you can specify the user/group login name up 
front in the configuration, so the HDFS client does not have to execute 
a "whoami" command or any other Unix-specific calls.
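A minimal sketch of that approach (an assumption on my part: that the property is hadoop.job.ugi in "user,group1,group2" form, as in the 0.18-0.20 line; the Hadoop-specific calls are left as comments since they need the Hadoop jars on the classpath):

```java
public class UgiConfig {
    /** Builds the comma-separated value expected by hadoop.job.ugi (assumed format). */
    static String ugiValue(String user, String... groups) {
        StringBuilder sb = new StringBuilder(user);
        for (String g : groups) {
            sb.append(',').append(g);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String ugi = ugiValue("alice", "staff");
        System.out.println(ugi); // alice,staff
        // With Hadoop on the classpath, a Windows client could then do:
        //   Configuration conf = new Configuration();
        //   conf.set("hadoop.job.ugi", ugi);
        //   FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        // so no shell command is needed to resolve the caller's identity.
    }
}
```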


So I guess that if a patch exists, it will target future releases. But 
what about 0.18, 0.19 and 0.20?


Thanks for all further input on that,

Tobi



Re: HDFS Client under Windows

2009-10-09 Thread Eli Collins
Hey Tobi,

Could you run the client using cygwin?

Thanks,
Eli




Re: HDFS Client under Windows

2009-10-09 Thread Tobias N. Sasse

Hey Eli,

No, the requirement is plain native Windows :-/

Tobi





[jira] Created: (HDFS-688) Add configuration resources to DFSAdmin

2009-10-09 Thread Konstantin Shvachko (JIRA)
Add configuration resources to DFSAdmin
---

 Key: HDFS-688
 URL: https://issues.apache.org/jira/browse/HDFS-688
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
 Fix For: 0.21.0


DFSAdmin run as a standalone app (via main()) does not load the HDFS 
configuration files, with the result that it tries to connect to the local 
file system. A possible solution is to explicitly specify the default 
configuration resources.
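Not the actual fix, but to make the failure mode concrete: a plain-Java sketch of the resource lists before and after registering the HDFS defaults (the file names mirror Hadoop's conventions; the real fix would presumably call Configuration.addDefaultResource(...), shown only in a comment):

```java
import java.util.ArrayList;
import java.util.List;

public class DefaultResources {
    /** Resources a Configuration would consult, with and without the HDFS defaults. */
    static List<String> resourcesFor(boolean hdfsAware) {
        List<String> r = new ArrayList<String>();
        r.add("core-default.xml");
        r.add("core-site.xml");
        if (hdfsAware) {
            // What Configuration.addDefaultResource("hdfs-default.xml") /
            // ("hdfs-site.xml") would register for DFSAdmin's main().
            r.add("hdfs-default.xml");
            r.add("hdfs-site.xml");
        }
        return r;
    }

    public static void main(String[] args) {
        // Without hdfs-site.xml, fs.default.name falls back to the local FS.
        System.out.println(resourcesFor(false));
        System.out.println(resourcesFor(true));
    }
}
```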

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-689) Destination ending with / should be treated as a dir

2009-10-09 Thread Rajiv Chittajallu (JIRA)
Destination ending with / should be treated as a dir


 Key: HDFS-689
 URL: https://issues.apache.org/jira/browse/HDFS-689
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1
Reporter: Rajiv Chittajallu


When the destination for fs -mv and fs -cp ends with a trailing '/', it 
should be treated as a directory. The command should fail if the destination 
directory doesn't exist.

$ cp motd t/
cp: cannot create regular file `t/': Is a directory
$ mv motd t/
mv: cannot move `motd' to `t/': Not a directory

vs

$ hadoop dfs -cp motd c/
$ hadoop dfs -ls c
Found 1 items
-rw---   3 rajive users206 2009-10-09 20:59 /user/rajive/c
$ hadoop dfs -mv motd t/
$ hadoop dfs -ls t
Found 1 items
-rw---   3 rajive users206 2009-10-09 20:47 /user/rajive/t
$ 

$ hadoop dfs -mkdir a
$ hadoop dfs -mv motd2 a/
$ hadoop dfs -ls a
Found 1 items
-rw---   3 rajive users206 2009-10-09 20:47 /user/rajive/a/motd2
$ 
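The proposed semantics can be sketched outside FsShell (an illustration only, not the actual Hadoop code; DestCheck and its method names are hypothetical):

```java
public class DestCheck {
    /** True when the destination path explicitly asks for a directory. */
    static boolean wantsDir(String dest) {
        return dest.endsWith("/");
    }

    /** Rejects a trailing-slash destination that is not an existing directory. */
    static void validate(String dest, boolean destExistsAsDir) {
        if (wantsDir(dest) && !destExistsAsDir) {
            throw new IllegalArgumentException(dest + ": Not a directory");
        }
    }

    public static void main(String[] args) {
        System.out.println(wantsDir("t/")); // true
        System.out.println(wantsDir("t"));  // false
        try {
            validate("t/", false); // mirrors the local `mv motd t/` failure above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // t/: Not a directory
        }
    }
}
```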







[jira] Created: (HDFS-690) TestAppend2#testComplexAppend failed on "Too many open files"

2009-10-09 Thread Hairong Kuang (JIRA)
TestAppend2#testComplexAppend failed on "Too many open files"
-

 Key: HDFS-690
 URL: https://issues.apache.org/jira/browse/HDFS-690
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Priority: Blocker
 Fix For: 0.21.0


The append write failed on "Too many open files". Some bytes failed to be 
appended to a file with the following error:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at java.lang.Runtime.exec(Runtime.java:593)
    at java.lang.Runtime.exec(Runtime.java:466)
    at org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:644)
    at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:205)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1075)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1058)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:110)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)




[jira] Created: (HDFS-691) Limitation on java.io.InputStream.available()

2009-10-09 Thread Tsz Wo (Nicholas), SZE (JIRA)
Limitation on java.io.InputStream.available()
-

 Key: HDFS-691
 URL: https://issues.apache.org/jira/browse/HDFS-691
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Tsz Wo (Nicholas), SZE


java.io.InputStream.available() returns an int, whose maximum value is 
2^31 - 1 = 2 GB - 1 B. It cannot report the correct value when the number 
of available bytes is >= 2 GB.
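The cap can be demonstrated with a stand-in stream (the anonymous InputStream below is illustrative; DFSInputStream is subject to the same limit because available() is declared in java.io.InputStream):

```java
import java.io.IOException;
import java.io.InputStream;

public class AvailableLimit {
    /** The best an int-returning available() can do for a large remainder. */
    static int reportable(long remaining) {
        return (int) Math.min(remaining, Integer.MAX_VALUE);
    }

    public static void main(String[] args) throws IOException {
        final long remaining = 3L * 1024 * 1024 * 1024; // 3 GB left in the stream
        InputStream in = new InputStream() {
            @Override public int read() { return -1; }
            @Override public int available() { return reportable(remaining); }
        };
        System.out.println(in.available());  // capped at 2147483647 (2^31 - 1)
        System.out.println((int) remaining); // a naive cast wraps to a negative value
        in.close();
    }
}
```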




[jira] Created: (HDFS-692) Add simulated data node cluster start/stop commands in hadoop-daemon.sh

2009-10-09 Thread Ravi Phulari (JIRA)
Add simulated data node cluster start/stop commands in hadoop-daemon.sh
-

 Key: HDFS-692
 URL: https://issues.apache.org/jira/browse/HDFS-692
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ravi Phulari


Currently there are no commands for starting or stopping simulated data 
node clusters.

To start a simulated data node cluster, we need to export the extra class 
paths required for DataNodeCluster.
 
{noformat}

bin/hadoop-daemon.sh start org.apache.hadoop.hdfs.DataNodeCluster  -simulated 
-n $DATANODE_PER_HOST -inject $STARTING_BLOCK_ID $BLOCKS_PER_DN  

{noformat}

{noformat}

bin/hadoop-daemon.sh stop org.apache.hadoop.hdfs.DataNodeCluster  -simulated  

{noformat}

For a better user interface, we should add DataNodeCluster start/stop 
options to hadoop-daemon.sh.



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.