Hive users,
Thought I'd ask - don't really have much hope though - is loading a table
from a file via the remote Hive CLI client possible? By remote Hive CLI
client I mean the one where you specify -h and -p.
thanks,
Stephen
Hive likely wishes to format the data differently than Hadoop does.
Hive re-uses what it can. I would diff the two .java files and find
out for yourself :)
Hi Edward,
Sorry if I was not clear. My question is about the difference between
DoubleWritable in Hadoop and in Hive; other writables from Hadoop work fine
in Hive.
Hive.serde types are limited to Double, Byte, Short and Timestamp.
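For reference, I believe those correspond to the writables in the
org.apache.hadoop.hive.serde2.io package (class names assumed, not checked
against my build):

import org.apache.hadoop.hive.serde2.io.ByteWritable;
import org.apache.hadoop.hive.serde2.io.DoubleWritable;
import org.apache.hadoop.hive.serde2.io.ShortWritable;
import org.apache.hadoop.hive.serde2.io.TimestampWritable;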
I am using hive 0.8
Richin
If you use Double or double, Hive will automatically convert. I would
always recommend the hive.serde types.
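For example, an evaluate() overload taking a plain java.lang.Double should
work, with Hive doing the conversion (an untested sketch; only the
signature matters here):

  // Hive's reflection-based UDF bridge hands the double column to this
  // method as a java.lang.Double (or null for SQL NULL).
  public org.apache.hadoop.io.Text evaluate(Double d) {
    return d == null ? null : new org.apache.hadoop.io.Text(d.toString());
  }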
Edward
Hi Guys,
I am writing a UDF in Hive to convert a double value to a string, so the
evaluate method of my UDF class looks like
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;
//import org.apache.hadoop.io.DoubleWritable; - does not work
import org.apache.hadoop.hive.serde2.io.DoubleWritable;
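For reference, a minimal sketch of how the whole UDF might look with the
serde2 writable (the class name DoubleToString is just illustrative, and
the sketch is untested):

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.serde2.io.DoubleWritable;
import org.apache.hadoop.io.Text;

public final class DoubleToString extends UDF {
  // Convert a Hive double column to its string representation.
  public Text evaluate(DoubleWritable d) {
    if (d == null) {
      return null; // preserve SQL NULLs
    }
    return new Text(String.valueOf(d.get()));
  }
}

The only point of the sketch is that the DoubleWritable import comes from
org.apache.hadoop.hive.serde2.io rather than org.apache.hadoop.io.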
Hi!
I'm trying to add unit tests to a UDF I developed, and used the commands
runCreateTableCmd and runLoadCmd to create and load the table. I'm fairly
certain it is loading from the correct filepath. When I try to run it though,
I get the error "Cannot overwrite read-only table." Does anybody
You're probably running into
https://issues.apache.org/jira/browse/HIVE-2334, which has been fixed
in Hive 0.8+.
On Sat, Jun 9, 2012 at 7:03 AM, Mark Grover wrote:
> Hi Saurabh,
> I wasn't able to reproduce this problem on Apache Hive 0.9.0. Can you please
> try the same procedure with another ja
Hi,
I'm having problems running current releases of Apache Hive; I get an
error:
java.lang.NoSuchMethodError:
org.apache.thrift.server.TThreadPoolServer.<init>(Lorg/apache/thrift/server/TThreadPoolServer$Args;)V
I searched a bit about this kind of problem and it seems I have an older
version of thrift
By instance I mean a set of mapreduce jobs (3 in this case). When
executing in the CLI only one instance runs and the output is displayed on
the screen, but this is not the case when using PowerPivot (multiple
instances run one after the other and contain the same no. of HDFS reads,
writes, etc.)...a
There are multiple instances of the 3 mapreduce jobs (executing one after
the other) when running the single query using PowerPivot. I can tell the
next instance has started when this shows up on the screen after, say, 2
instances of the 3 mapreduce jobs:
Hive history
file=/tmp/hadoop/hive_job_log_hadoop_201206121120_
Yes, understood. I do not have a problem defining the parameters in the
code. But the problem is that I am using PowerPivot as the visualization
engine. Now, when I give the query as a set like:
add jar /usr/local/hadoop/src/retweetlink1.jar;
create temporary function link as