Is there "useradd" in Hadoop

2011-03-22 Thread springring
Hi, There are "chmod"、"chown"、"chgrp" in HDFS, is there some command like "useradd -g" to add a user in a group,? Even more, is there "hadoop's group", not "linux's group"? Ring
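As far as I know there is no "useradd" equivalent inside HDFS itself: the shell only changes ownership and permissions, and the user and group names are resolved against the underlying operating system rather than a separate Hadoop user database. A sketch of the commands that do exist, with hypothetical user, group, and path names:

    # Hypothetical example: assign an existing OS user and group to an HDFS path.
    hadoop fs -chown alice /user/alice
    hadoop fs -chgrp analysts /user/alice
    hadoop fs -chmod 750 /user/alice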

how to create a group in hdfs

2011-03-22 Thread springring
Hi, how do I create a user group in HDFS? Is there a "hadoop fs" option for it? Ring
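As far as I can tell there is no "hadoop fs" subcommand that creates a group; groups typically come from the operating system the cluster runs on, so the group is created there and then assigned to paths from the HDFS shell. A sketch with hypothetical names:

    # Hypothetical sketch: create the group on the host OS (typically the
    # NameNode machine), then assign it to an HDFS directory.
    sudo groupadd analysts
    hadoop fs -chgrp -R analysts /data/shared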

Re: File formats in Hadoop

2011-03-22 Thread Weishung Chung
They are used in Hadoop: org.apache.hadoop.io.SequenceFile and org.apache.hadoop.io.file.tfile.TFile. On Tue, Mar 22, 2011 at 10:06 PM, Ryan Rawson wrote: > Curious, why do you mention "SequenceFile" and "TFile"? Neither of > those is in hbase.io, and TFile is not used anywhere in > HB
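For reference, a minimal sketch of writing one of the classes named above, org.apache.hadoop.io.SequenceFile, using the pre-0.21 createWriter API; the output path and key/value types are assumptions, not something from the thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileWriteSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/example.seq");   // hypothetical output path
        // Keys and values are Writables; IntWritable/Text are just examples.
        SequenceFile.Writer writer =
            SequenceFile.createWriter(fs, conf, path, IntWritable.class, Text.class);
        try {
          writer.append(new IntWritable(1), new Text("first record"));
          writer.append(new IntWritable(2), new Text("second record"));
        } finally {
          writer.close();
        }
      }
    }

The resulting file can then be inspected with "hadoop fs -text /tmp/example.seq".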

Re: File formats in Hadoop

2011-03-22 Thread Ryan Rawson
Curious, why do you mention "SequenceFile" and "TFile"? Neither of those is in hbase.io, and TFile is not used anywhere in HBase. -ryan On Sat, Mar 19, 2011 at 9:01 AM, Weishung Chung wrote: > I am browsing through the hadoop.io package and was wondering what other > file formats ar

Re: Is there any way to add jar when invoking hadoop command

2011-03-22 Thread Jeff Zhang
Another workaround I can think of is to keep my own copy of Hadoop and copy the extra jars into it, but that results in more maintenance effort. On Wed, Mar 23, 2011 at 9:19 AM, Jeff Zhang wrote: > Hi all, > > When I use the command "hadoop fs -text" I need to add an extra jar to CLASSPATH, > becaus

Is there any way to add jar when invoking hadoop command

2011-03-22 Thread Jeff Zhang
Hi all, When I use the command "hadoop fs -text" I need to add an extra jar to the CLASSPATH, because there are custom types in my sequence file. One way is to copy the jar to $HADOOP_HOME/lib, but in my case I am not an administrator, so I do not have permission to copy files under $HADOOP_HOME/lib. Is there
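One mechanism that avoids touching $HADOOP_HOME/lib is the HADOOP_CLASSPATH environment variable, which the bin/hadoop launcher script adds to the classpath it builds; a sketch with a hypothetical jar path:

    # Hypothetical sketch: point HADOOP_CLASSPATH at the jar holding the
    # custom Writable types before running the fs command.
    export HADOOP_CLASSPATH=/home/jeff/lib/custom-types.jar
    hadoop fs -text /data/custom.seq

Because it is only an environment variable, no administrator rights on the Hadoop installation are needed.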

[jira] [Created] (HADOOP-7207) fs member of FSShell is not really needed

2011-03-22 Thread Boris Shkolnik (JIRA)
fs member of FSShell is not really needed - Key: HADOOP-7207 URL: https://issues.apache.org/jira/browse/HADOOP-7207 Project: Hadoop Common Issue Type: Bug Reporter: Boris Shkolnik

[jira] [Created] (HADOOP-7206) Integrate Snappy compression

2011-03-22 Thread Eli Collins (JIRA)
Integrate Snappy compression Key: HADOOP-7206 URL: https://issues.apache.org/jira/browse/HADOOP-7206 Project: Hadoop Common Issue Type: New Feature Reporter: Eli Collins Google released Zippy as an o
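A Snappy codec does not exist in Hadoop yet (that is what this issue proposes), but the integration point would be the CompressionCodec handed to writers such as SequenceFile; a sketch of that hook using the existing GzipCodec, with a hypothetical output path:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CompressedSequenceFileSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/compressed.seq");   // hypothetical output path
        // The codec passed here is where a faster codec such as Snappy would slot in.
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, path, IntWritable.class, Text.class,
            SequenceFile.CompressionType.BLOCK, codec);
        try {
          writer.append(new IntWritable(1), new Text("compressed record"));
        } finally {
          writer.close();
        }
      }
    }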

Re: Unable to connect to the url

2011-03-22 Thread Allen Wittenauer
On Mar 17, 2011, at 10:00 PM, James Ram wrote: > Hi, > > I am using a standalone Linux machine. The Namenode and Datanode are running, > but when I try to access the UI in my browser it shows an "unable to > connect" error. I know it's a basic question; please help me. I have given > below the configu
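A first thing to check in this situation (assuming the default NameNode web UI port of this release line, 50070, configurable via dfs.http.address) is whether the UI is actually listening and on which interface; a hedged sketch:

    # Hypothetical checks, assuming the default NameNode UI port 50070.
    curl -s http://localhost:50070/ | head
    netstat -tln | grep 50070
    # If the UI is bound to 127.0.0.1 only, it will not be reachable from
    # another machine's browser.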

[jira] [Resolved] (HADOOP-5983) Namenode shouldn't read mapred-site.xml

2011-03-22 Thread Daryn Sharp (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp resolved HADOOP-5983. - Resolution: Cannot Reproduce > Namenode shouldn't read mapred-site.xml > ---

[jira] [Created] (HADOOP-7205) automatically determine JAVA_HOME on OS X

2011-03-22 Thread Daryn Sharp (JIRA)
automatically determine JAVA_HOME on OS X - Key: HADOOP-7205 URL: https://issues.apache.org/jira/browse/HADOOP-7205 Project: Hadoop Common Issue Type: Improvement Affects Versions: 0.22.0
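On OS X the installed JDK can be located with the system helper /usr/libexec/java_home, so one plausible shape for this change is the fragment users already add to conf/hadoop-env.sh by hand; a sketch, not the actual patch:

    # Hypothetical hadoop-env.sh fragment: let OS X report the JDK location
    # instead of hard-coding a path.
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=$(/usr/libexec/java_home)
    fi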

Re: Sync-marker in uncompressed sequenceFile

2011-03-22 Thread Weishung Chung
Thanks, exciting work! On Mon, Mar 21, 2011 at 3:07 PM, Chris Douglas wrote: > It's used to align input splits of the SequenceFile. A reader can > start at an arbitrary offset, then find the boundary of the next block > of records by looking for the sync marker defined in the header. -C > > On
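A minimal sketch of what Chris describes, using SequenceFile.Reader.sync() to jump from an arbitrary byte offset to the next record boundary; the path, offsets, and key/value types are assumptions, not taken from the thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SyncMarkerSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/data/example.seq");  // hypothetical file
        long splitStart = 1000000L;                 // arbitrary split boundaries
        long splitEnd = 2000000L;
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
          reader.sync(splitStart);                  // seek to the next sync marker
          Text key = new Text();
          Text value = new Text();
          while (reader.getPosition() < splitEnd && reader.next(key, value)) {
            // Every record returned here starts on a record boundary,
            // even though splitStart was an arbitrary byte offset.
          }
        } finally {
          reader.close();
        }
      }
    }

This is essentially how input splits of a SequenceFile are processed independently by separate map tasks.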

Re: How to contribute to hadoop?

2011-03-22 Thread Steve Loughran
On 21/03/11 18:58, shant. wrote: Hi All, I'm a newbie to Hadoop and very much interested in learning and contributing. Please guide/show me the path; where should I start? Basically I'm a Java programmer. I think it's best to start off as a user of Hadoop, with a problem you want to