Amit,
Are you executing your SELECT for the ORC conversion via Beeline or the Hive
CLI? From your logs, it appears that you do not have permission in HDFS to
write the resulting ORC data. Check the HDFS permissions to ensure that your
user can write to the Hive warehouse directory.
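A quick way to check is from the Hive CLI itself, which can run HDFS commands directly. A sketch, assuming the default warehouse location (yours may differ):

```sql
-- Run from the Hive CLI; /user/hive/warehouse is only the default location.
dfs -ls /user/hive/warehouse;

-- If the owner or permissions are wrong, a user with rights can fix them, e.g.:
dfs -chmod -R g+w /user/hive/warehouse;
```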
I checked out and built Hive 0.13 and tried again, with the same result, i.e.:
File
/tmp/hive-hduser/hive_2014-04-04_20-34-43_550_7470522328893486504-1/_task_tmp.-ext-10002/_tmp.00_3
could only be replicated to 0 nodes instead of minReplication (=1).
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
Turns out it was just a trivial/inane Hive-ism of setting the 'value' in a
particular way. *sigh*. The SO link (http://goo.gl/j9II0V) has the details.
On Fri, Apr 4, 2014 at 12:41 PM, Decimus Phostle wrote:
> Hello Folks,
>
> I have been having a few jobs failing due to OutOfMemory and GC overhead
>
Hi All,
I am just trying to do some simple tests to see the speedup in Hive queries
with Hive 0.14 (trunk as of this morning). I just tried a sample test case
to start with; first I wanted to see how much speedup the ORC format gives.
However, for some reason I can't insert data into the table.
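For reference, a minimal ORC conversion of the kind described above usually looks like this (the table and column names here are made up for illustration):

```sql
-- Hypothetical target table; the STORED AS ORC clause is the point.
CREATE TABLE events_orc (id INT, payload STRING)
STORED AS ORC;

-- Rewrite an existing (e.g. text-format) table into the ORC table.
INSERT OVERWRITE TABLE events_orc
SELECT id, payload FROM events_text;
```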
For non-partitioned columns, the answer in one word is: NO.
Detailed answer: this feature is still being built as part of
https://issues.apache.org/jira/browse/HIVE-5317
On Sat, Apr 5, 2014 at 2:28 AM, Raj Hadoop wrote:
>
> Can I update ( delete and insert kind of) just one row keeping the
> remaining rows intact in the Hive table using INSERT OVERWRITE?
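Once the ACID work in HIVE-5317 lands, row-level changes are expected to use syntax along these lines. This is a sketch of the proposed feature, not something that works on current releases, and the names are illustrative:

```sql
-- Proposed row-level operations (per HIVE-5317); requires ACID support.
UPDATE tablename SET col3 = 'new_value' WHERE col2 = 'abc';
DELETE FROM tablename WHERE col2 = 'abc';
```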
I removed the count, but it still does not output the query results.
These are the parameters I set for hive:
set hive.stats.autogather=false;
set hive.optimize.autoindex=true;
set hive.optimize.index.filter=true;
set hive.exec.parallel=true;
set mapred.reduce.tasks=5;
set hive.exec.reducers.max=5;
Can I update (delete and insert kind of) just one row keeping the remaining
rows intact in Hive table using Hive INSERT OVERWRITE. There is no partition in
the Hive table.
INSERT OVERWRITE TABLE tablename SELECT col1,col2,col3 from tabx where
col2='abc';
Does the above work? Please advise.
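Worth noting: INSERT OVERWRITE replaces the table's entire contents, so the closest workaround is to rewrite all rows in one pass, substituting the changed values inline. A sketch, assuming the same three-column table and a hypothetical replacement value:

```sql
-- Every row is rewritten; only rows matching the predicate get new data.
INSERT OVERWRITE TABLE tablename
SELECT col1,
       col2,
       CASE WHEN col2 = 'abc' THEN 'new_value' ELSE col3 END AS col3
FROM tablename;
```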
You seem to have a dangling aggregate function in your SELECT:
SELECT
exposed_time,
ROUND(COUNT(ses_tx_20130805.pid)/10) ***COUNT***
FROM
tx_demography_info JOIN ses_tx_20130805
ON
(tx_demography_info.pid=ses_tx_20130805.pid)
WHERE
countyid='50015'
GROUP BY
exposed_time
ORDER BY
exposed_time;
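One thing worth flagging in the query above: COUNT is a keyword in HiveQL, so using it as a column alias can confuse the parser. A hedged rewrite with a different, made-up alias name:

```sql
-- Same query, but the aggregate gets a non-reserved alias (pid_count).
SELECT exposed_time,
       ROUND(COUNT(ses_tx_20130805.pid) / 10) AS pid_count
FROM tx_demography_info
JOIN ses_tx_20130805
  ON (tx_demography_info.pid = ses_tx_20130805.pid)
WHERE countyid = '50015'
GROUP BY exposed_time
ORDER BY exposed_time;
```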
Query: SELECT exposed_time, ROUND(COUNT(ses_tx_20130805.pid)/10) COUNT
FROM tx_demography_info join ses_tx_20130805 on
(tx_demography_info.pid=ses_tx_20130805.pid) where countyid='50015' GROUP
BY exposed_time ORDER BY exposed_time;
On Fri, Apr 4, 2014 at 4:14 PM, saquib khan wrote:
> I get j
Thanks Decimus.
Query:
SELECT exposed_time, ROUND(COUNT(ses_tx_20130805.pid)/10) COUNT FROM
tx_demography_info join ses_tx_20130805 on
(tx_demography_info.pid=ses_tx_20130805.pid) where countyid='50015' GROUP
BY exposed_time ORDER BY exposed_time;
On Fri, Apr 4, 2014 at 4:22 PM, Decimus Phostle wrote:
It might help if you post details on the queries themselves.
On Fri, Apr 4, 2014 at 1:14 PM, saquib khan wrote:
> I get java exceptions while running the queries:
>
> java.lang.InstantiationException: org.antlr.runtime.CommonToken
> Continuing ...
> java.lang.RuntimeException: failed to evaluate: =Class.new();
I get java exceptions while running the queries:
java.lang.InstantiationException: org.antlr.runtime.CommonToken
Continuing ...
java.lang.RuntimeException: failed to evaluate: =Class.new();
Continuing ...
java.lang.InstantiationException: org.antlr.runtime.CommonToken
Continuing ...
java.lang.RuntimeException: failed to evaluate: =Class.new();
Dear Folks,
Whenever I run join queries, they do not display any output, though I do get
output for queries on a single table.
Thanks and Regards,
Saky
Hello Folks,
I have been having a few jobs failing due to OutOfMemory and GC overhead
limit exceeded errors. To counter these I tried setting `SET
mapred.child.java.opts="-Xmx3G -XX:+UseConcMarkSweepGC";` at the start of
the hive script.
Basically any time I add this option to the script, the M
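The resolution mentioned earlier in the thread (the goo.gl/j9II0V SO link) points at how the value is quoted: Hive's SET passes everything after the '=' through literally, so the double quotes end up inside the JVM flags. A sketch of the assumed fix, not verified here:

```sql
-- No surrounding quotes: Hive takes the rest of the line as the value.
SET mapred.child.java.opts=-Xmx3G -XX:+UseConcMarkSweepGC;
```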
I figured out the problem. The JSON SerDe I wrote is not case sensitive,
but the ORC and Parquet SerDes are case sensitive.
So this works:
select ClientCode, Encounter.Number from parquet_tbl;
but this does not:
select clientcode, encounter.Number from parquet_tbl;
-Michael
On Thu, Apr 3, 2014
Background:
I have some somewhat deeply nested types in a Hive table that are causing
queries to error out when executed through JDBC. The same queries succeed
via the CLI and via Hue Beeswax clients. The JDBC error states:
Number of levels of nesting supported for LazySimpleSerde is 7 Unab