Thanks for your answers.
I run my hive client console on a machine which doesn't belong to the hadoop
cluster.
I just changed "fs.default.name" and "mapred.job.tracker" in hive-site.xml
to connect the hadoop cluster.
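For reference, the same two overrides can also be passed on the hive command
line instead of editing hive-site.xml; a minimal sketch, where the host names
and ports are placeholders:
hive --hiveconf fs.default.name=hdfs://namenode.example.com:8020 \
     --hiveconf mapred.job.tracker=jobtracker.example.com:8021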
I can create tables, and the files are created on HDFS.
When I changed the hive.metast
Hi,
To my knowledge Hive currently only supports single-byte separators in the
DELIMITED (FIELDS TERMINATED BY) clause, so you can only pick one of the
first 128 ASCII characters.
Or use a custom SerDe to map your data.
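For example, a minimal table DDL sketch that uses the octal escape for Ctrl-B
(0x02); the table and column names here are made up:
hive -e "CREATE TABLE demo_tbl (k STRING, v STRING)
         ROW FORMAT DELIMITED FIELDS TERMINATED BY '\002'
         STORED AS TEXTFILE;"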
Jasper
On 18 May 2011 05:46, "wd" wrote the following:
> Hi,
>
> Can I use a special char like
Can you try as user hadoop?
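For example, something along these lines from the client machine (this assumes
sudo access to the hadoop account; the table name is made up):
sudo -u hadoop hive -e "CREATE TABLE permission_test (id INT);"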
Cheers
On May 17, 2011, at 9:53 PM, jinhang du wrote:
> hi,
> The default value is "/user/hive/warehouse" in hive-site.xml. After I changed
> the directory to a path on HDFS, I got the exception.
>
> FAILED: Error in metadata: MetaException(message:Got exception
2011/5/18 jinhang du:
> hi,
> The default value is "/user/hive/warehouse" in hive-site.xml. After I
> changed the directory to a path on HDFS, I got the exception.
>
> FAILED: Error in metadata: MetaException(message:Got exception:
> org.apache.hadoop.security.AccessControlException org.apache.
Hi,
If the path already exists in HDFS, then you need to perform the following step:
$HADOOP_HOME/bin/hadoop fs -chmod g+w Exist_Path_hdfs
Example:
$HADOOP_HOME/bin/hadoop fs -chmod g+w /com/impetus/data
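To double-check afterwards, listing the parent directory shows the new
permission bits on the child (same example path as above):
$HADOOP_HOME/bin/hadoop fs -ls /com/impetus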
On Wed, May 18, 2011 at 1:25 AM, Ankit Jain wrote:
> Hi,
>
> At the time of installation we perform
Hi,
At the time of installation we perform the following steps:
$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
Replace /user/hive/warehouse with a new path.
Example:
$HADOOP_HOME/bin/hadoop fs -mkdir /com/impetus/data
$HADOOP_HOME/bin/hadoop fs -chmod g+w /com/impetus/data
Check your dfs.permissions setting in hdfs-site.xml; I am guessing it's set to true.
If that's the case and you point the Hive warehouse dir to an existing path
in HDFS, chances are the user that runs the Hive jobs does not have
permissions on that path.
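A quick way to check, and to hand the directory over to the account that runs
Hive (the user name and path below are placeholders):
grep -A 1 dfs.permissions $HADOOP_HOME/conf/hdfs-site.xml
$HADOOP_HOME/bin/hadoop fs -chown -R hiveuser /user/hive/warehouse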
-Viral
On Tue, May 17, 2011 at 9:53 PM, jinhan
hi,
The default value is "/user/hive/warehouse" in hive-site.xml. After I
changed the directory to a path on HDFS, I got the exception.
FAILED: Error in metadata: MetaException(message:Got exception:
org.apache.hadoop.security.AccessControlException org.apache.hadoop.security.AccessControlExcepti
Hi,
Can I use a special char like '^B' as the split pattern?
I've tried '\002', '^B', and '0x02'; all failed.
Hi, I configured a Hadoop cluster with Hadoop 0.20.2-cdh3u0.
I extracted some tables from an Oracle DB with Sqoop + the OraOop plugin and
distributed the data on my nodes. I am using Hive (hive-0.7.0-cdh3u0)
to perform some analysis on my data.
However, I am encountering an issue that I can't figure out
There is a new patch for optimizing partition pruning, covering both CPU and
memory. I think it is not in 0.7 yet. Can you try trunk and see how much memory
you need?
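If the out-of-memory happens on the client side, the CLI heap can be raised
before retrying; a sketch, the value is just a placeholder:
export HADOOP_HEAPSIZE=2048   # in MB, picked up by the hadoop launcher that starts the Hive CLI
hive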
BTW, 72k partitions is indeed quite a large number. In my experiments
with the new patch, you'll need about 300MB for 20k p
Hi,
I was running a 0.7 trunk build from February 2011 until last Friday, and
then upgraded to trunk again.
Things work OK, except that memory usage is up quite dramatically when doing
queries with a large number of partitions.
I could query 12 months of data in one table with ~72k partitions with
Hi all,
I find that, to implement such a statement, there are multiple related source
files: GenericUDFCase.java, GenericUDFWhen.java, etc. I want to know how they
are related. Could anyone give me some ideas about this? And is it possible to
implement them with a UDF? Thanks!
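(To clarify, by "with UDF" I mean the usual plug-in route, roughly as in the
sketch below; the jar, class and function names are made up:)
cat > /tmp/my_case.q <<'EOF'
ADD JAR /tmp/my_udfs.jar;
CREATE TEMPORARY FUNCTION my_case AS 'com.example.hive.udf.MyCaseUDF';
SELECT my_case(col1) FROM some_table;
EOF
hive -f /tmp/my_case.q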
Best Regards,
ZL.
Thanks Ashish. Now using Java for implementation.
Regards,
Tamil
On Thu, May 12, 2011 at 12:30 AM, Ashish Thusoo wrote:
> With streaming, UDF or UDTFs you would get almost any kind of control flow
> you want without having those features implemented in Hive proper. For udf,
> udaf or udtf you
If it's 0.7 and "IOException: The system cannot find the path specified",
then you ran into HIVE-2054. It seems Carl backported the fix to 0.7.1, so try
that.
If it's something else, please post the error.
On 05/17/2011 04:56 AM, Raghunath, Ranjith wrote:
I have followed the document outlining how to