Hi Xuefu,
Thanks for your response. Let me explain further:
/**
* The watcher class which sets the de-register flag when the znode
corresponding to this server
// 3. From the declaration, it seems the purpose is to stop this HS2
instance forever, but this does not seem correct in case the znode
Hi Alan,
Thanks for your response. I have a couple of patches I would love to submit
to Hive, and I'm working on getting a real build environment set up so I can do
so.
Can I ask whether HCatClient is able to operate over the HiveServer2 Thrift
API? I've found contradictory information all
In my experience, having looked at way too many heap dumps from
HiveServer2, it always ends up being a seriously over-partitioned table
and a user who decided to do a full table scan, basically requesting all
partitions. This is often by accident, for example when using
unix_timestamp to convert date
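A minimal sketch of the pattern described above (the table and column names are hypothetical, and exact pruning behavior depends on the Hive version and configuration): applying a function such as unix_timestamp to the partition column can defeat partition pruning, so HiveServer2 ends up requesting metadata for every partition from the metastore.

```sql
-- Assume a hypothetical table partitioned by a string date column `dt`.

-- Risky: wrapping the partition column in a function may prevent
-- partition pruning, so ALL partition metadata is fetched, which on a
-- heavily partitioned table can exhaust the HiveServer2 heap.
SELECT *
FROM events
WHERE unix_timestamp(dt, 'yyyy-MM-dd') > unix_timestamp('2015-10-01', 'yyyy-MM-dd');

-- Safer: compare the partition column directly, so only the matching
-- partitions are requested from the metastore.
SELECT *
FROM events
WHERE dt > '2015-10-01';
```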
Can you articulate further why HiveServer2 is not working in such an event?
What's the current behavior, and what's expected from an end user's standpoint?
Thanks,
Xuefu
On Mon, Oct 12, 2015 at 6:52 AM, Wangwenli wrote:
Now HiveServer2 has multiple instances registered to ZooKeeper. If ZooKeeper
recovers from a fault and the znode representing the HS2 instance got deleted
(e.g. session timeout), the HiveServer2 will receive a NodeDeleted event; on
this event, this HiveServer2 instance will unregister from ZooKeeper, subsequently
getting this exception:
Error: java.lang.RuntimeException:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:185)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache
Hi Sanjeev -
Did you try changing your query to explicitly specify par.*?
create table sample_table AS select par.* from parquet_table par inner join
parquet_table_counter ptc ON ptc.user_id=par.user_id;
Thanks.
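For reference, a sketch of why the original `select *` fails and two ways around it (table names are taken from the thread; the renamed alias is illustrative): after an inner join, `select *` emits user_id from both sides, and CTAS cannot create a table with two columns of the same name.

```sql
-- Option 1: take all columns from one side only, as suggested above.
CREATE TABLE sample_table AS
SELECT par.*
FROM parquet_table par
INNER JOIN parquet_table_counter ptc ON ptc.user_id = par.user_id;

-- Option 2: keep columns from both sides, renaming the duplicate so
-- every result column name is unique.
CREATE TABLE sample_table AS
SELECT par.*, ptc.user_id AS ptc_user_id
FROM parquet_table par
INNER JOIN parquet_table_counter ptc ON ptc.user_id = par.user_id;
```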
From: Sanjeev Verma
I am creating a table from the two Parquet partitioned tables and getting a
duplicate column error. Any idea what's going wrong here?
create table sample_table AS select * from parquet_table par inner
join parquet_table_counter ptc ON ptc.user_id=par.user_id;
FAILED: SemanticException [Error 100