Hive release version 4.0.0
But when I try to connect to the metastore,
*$HIVE_HOME/bin/hive --service metastore &*
I get error
Exception in thread "main"
em on
the thread. I tried to test MV + Iceberg on the official Docker image
but couldn't produce the error. I am guessing the file paths in the
error message would indicate you might not be using Iceberg.
To Lisoda,
I believe HIVE-28428 resolved your issue. That sounds great! If you
sti
>>> I am new to Hive and Tez and I have struggled to deploy a high-performance
>>> Dockerized Hive setup. I followed the documentation for setting up a remote
>>> Metastore. I have a single node with 32 GB of RAM and 8 cores, but I have a
>>> dataset of about 2 GB (
Thanks Lisoda for those insights.
@Okumin, this is what I observed when checking the log files.
Attached is a log file and the hive-site.xml file configuration.
I have observed that this error occurs when the execution engine is set to Tez;
the moment I switch to MR, the issue does not come up.
> However, when I run select queries, the performance has not been
> as fast as expected. Could someone share some insights, especially
> regarding hive-site.xml and Tez custom configuration?
>
> Any help would be appreciated.
>
> On Sun, Aug 4, 2024 at 4:46 PM Okumin wrote:
Hi Clinton,
I tested MERGE INTO with minimal reproduction. I saw the same error.
```
CREATE TABLE src (col1 INT, col2 INT);
CREATE TABLE dst (id BIGINT DEFAULT SURROGATE_KEY(), col1 INT, col2
INT, PRIMARY KEY (id) DISABLE NOVALIDATE) STORED BY ICEBERG;
MERGE INTO dst d USING src s ON s.col1
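```

The statement above is cut off by the archive. For readers unfamiliar with the syntax, a MERGE of this shape typically continues along the following lines; this is a hedged sketch, and every clause past the ON condition is illustrative rather than recovered from the original message:

```sql
-- Hypothetical continuation; the clause bodies below are assumptions.
MERGE INTO dst d USING src s ON s.col1 = d.col1
WHEN MATCHED THEN UPDATE SET col2 = s.col2
WHEN NOT MATCHED THEN INSERT VALUES (DEFAULT, s.col1, s.col2);
```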
Dear Team,
Any help will be much appreciated.
Error SQL Error [4] [42000]: Error while compiling statement: FAILED:
SemanticException Schema of both sides of union should match.
I have an ETL workload that stores data into temp_table with the schema as
shown below.
CREATE EXTERNAL TABLE IF
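The SemanticException above fires when the two sides of a UNION differ in column count or types. A minimal illustration follows; the table and column names are made up, not taken from the original workload:

```sql
-- Fails with "Schema of both sides of union should match" if col2 is
-- STRING in t_a but INT in t_b.
SELECT col1, col2 FROM t_a
UNION ALL
SELECT col1, col2 FROM t_b;

-- Works: align the column types explicitly on both sides.
SELECT col1, CAST(col2 AS STRING) AS col2 FROM t_a
UNION ALL
SELECT col1, CAST(col2 AS STRING) AS col2 FROM t_b;
```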
mapped to “hive” user at MSS. But as per the below logs, exactly at that
stage it is throwing no rules applied error.
# Another event noticed in the logs is a SASL-related error
regarding their keys.
*ERROR OBSERVED AT MSS SERVER LOGS:*
2023-08-09T17:15:52,187 INFO [pool-6-thread-200
Can you provide more information?
What kind of error?
What SQL statement?
Cheers
On Mon, Aug 22, 2022, 18:10 qq <987626...@qq.com> wrote:
> Hello:
>
> In Hive on Tez mode, an error occurs when the select statement is
> executed with hive.optimize.bucketmapjoin.sortedmerg
Hello,
In Hive on Tez mode, an error occurs when the select statement is
executed with hive.optimize.bucketmapjoin.sortedmerge=true.
Does anyone have a similar problem?
Thank you
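For anyone trying to reproduce this, a sorted bucket map join is typically set up along these lines; the table layout below is an illustrative sketch, not the reporter's schema:

```sql
SET hive.optimize.bucketmapjoin=true;
SET hive.optimize.bucketmapjoin.sortedmerge=true;
-- Both tables must be bucketed AND sorted on the join key, with the same
-- bucket count, for the sort-merge bucket join to apply (names made up).
CREATE TABLE t1 (k INT, v STRING)
  CLUSTERED BY (k) SORTED BY (k) INTO 8 BUCKETS;
CREATE TABLE t2 (k INT, v STRING)
  CLUSTERED BY (k) SORTED BY (k) INTO 8 BUCKETS;
SELECT t1.k, t2.v FROM t1 JOIN t2 ON t1.k = t2.k;
```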
>>>> 0: jdbc:hive2://localhost:1/default> set
>>>> mapreduce.map.java.opts=-Xmx1024m;
>>>>
>>>> No rows affected (0.01 seconds)
>>>>
>>>> 0: jdbc:hive2://localhost:1/default> set
>>>> mapreduce.reduce.m
dbc:hive2://localhost:1/default> set
>> mapreduce.reduce.memory.mb=1024;
>>
>> No rows affected (0.014 seconds)
>>
>> 0: jdbc:hive2://localhost:1/default> set
>> mapreduce.reduce.java.opts=-Xmx1024m;
>>
>> No rows affected (0.015 seconds)
b,count(*) as dd from ppl
group by job limit 10;
Error: Error while processing statement: FAILED: Execution Error, return
code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
(state=08S01,code=2)
Sorry, my test VM has only 2 GB of RAM, so I set all of the above memory sizes
to 1 GB.
But it stil
I assume you have to increase the container size (if using Tez/YARN).
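If this is Tez on YARN, the container-sizing knobs usually involved are the ones below; the values are purely illustrative for a small VM, not recommendations:

```sql
SET hive.tez.container.size=1024;    -- MB of YARN memory per Tez task
SET hive.tez.java.opts=-Xmx819m;     -- task JVM heap, roughly 80% of the container
SET tez.am.resource.memory.mb=1024;  -- memory for the Tez Application Master
```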
Message from Bitfox on Tue, 29 March 2022 at
14:30:
My Hive runs out of memory even for a small query:
2022-03-29T20:26:51,440 WARN [Thread-1329] mapred.LocalJobRunner:
job_local300585280_0011
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
~[hadoop-
'serialization.format'='\;')
Error: Error while compiling statement: FAILED: ParseException line 114:21
mismatched input '' expecting StringLiteral near '=' in specifying
key/value property (state=42000,code=4)
Thanks,
Bruno Kim
From: Bruno Kim Med
foo INT COMMENT '0 = ok; 1 = fail'
);
-- source_create_tab.hql
source create_tab.hql;
If I run 'beeline -f create_tab.hql', it works OK. However, if I run 'beeline
-f source_create_tab.hql', it fails with an error:
Error: Error while compiling statement: FAILED: P
Thanks for the reply, Zoltan.
I found the error from the reducer task attempt log exactly like below;
https://issues.apache.org/jira/plugins/servlet/mobile#issue/TEZ-4071
(
https://issues.apache.org/jira/plugins/servlet/mobile#issue/TEZ-3894 )
They say the error is resolved in Tez 0.9.2, but I
Hey Eugene!
I don't see any hints in these outputs as to what the issue could be... have
you checked the Tez container logs?
cheers,
Zoltan
On 7/1/20 9:58 AM, Eugene Chung wrote:
Hi,
I want to know how to investigate the count(*) query error on Hive 3.1.2 &
Tez 0.9.2, which is 'being failed for too many output errors' in the Mapper.
The query is just simple like "select count(*) from MY_DB.ORC_TABLE where
part_date='2020-06-30';" wher
Subject: Error with hive-staging.staging
Hello,
we are having trouble with a query which exits infrequently:
org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.close(MergeFileRecordProcessor.java:180)\n\t...
17 more\nCaused by: java.io.IOException:
hdfs://ourserver:8020/apps/hive/warehouse/our_schema.db/our_table/.hive-stag
on Tez - ERROR on SQL query
Hi.
I'm trying to get LLAP on HDP 3.1.4, with Hive 3.1.0 and Kerberos enabled, to
work.
When I run a SQL query like
select count(*) from database group by column;
I've got the following error:
Caused by: java.lang.IllegalStateException
Hi, everyone.
We're getting this error on Hive 3.1.0:
org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with
the metastore
at
org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidWriteIds(DbTxnManager.java:714)
at org.apache.hadoop.hive.ql.Driver.recordValidWri
Check this thread:
https://forums.aws.amazon.com/thread.jspa?messageID=922594
From: Souvikk Roy
Sent: Tuesday, February 4, 2020 3:06 AM
To: user@hive.apache.org
Subject: rename output error during hive query on AWSs3-external table
Hello,
We are using some external tables backed by aws S3
Hello,
We are using some external tables backed by aws S3. And we are
intermittently getting this error, most likely at the last stage of the
reduce. I see some similar posts on the net but could not find any solution. Is
there any way to solve it:
org.apache.hadoop.hive.ql.metadata.HiveException
"user@hive.apache.org"
Date: Saturday, November 30, 2019 at 1:40 AM
To: "user@hive.apache.org"
Subject: hive error: "Too many bytes before delimiter: 2147483648"
Hello all, I encountered a problem while using Hive; please allow me to ask
about it. The following is the specific situation.
Platform: Hive on Spark
Error: java.io.IOException: Too many bytes before delimiter: 2147483648
Description: When using small files of the same format for testing, there is no
problem
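One clue worth noting (my reading, not stated in the thread): the byte count in the error is exactly 2^31, the signed 32-bit boundary, which suggests that no record delimiter was found at all within the 2 GiB scan limit, rather than the data merely having a very long line:

```python
# 2147483648 is exactly 2**31 bytes (2 GiB): the reader scanned this far
# without seeing a record delimiter before giving up, which usually means
# the delimiter the job expects never occurs in the large input files.
limit = 2 ** 31
print(limit)  # prints 2147483648, the number in the error message
```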
tion. Yes, that helped to start the query,
>> however the query was taking huge time to retrieve a few records.
>>
>> May I know what steps I can take to make this kind of query perform
>> better? I mean for predicates that do not involve the partition column.
>>
>> Thank
Hi,
The error is from the AM (Application Master), because it has so
many partitions to orchestrate that it needs lots of RAM.
As Venkat said, try increasing tez.am.resource.memory.mb to 2G; even 4 or 8
might be needed.
Cheers,
Pau.
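For concreteness, the setting Pau mentions is usually applied per session like this (4096 is only an example value, not a recommendation):

```sql
-- The Tez AM's heap requirement grows with the number of splits and
-- partitions it has to track.
SET tez.am.resource.memory.mb=4096;
```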
Message from Sai Teja Desu on Thu,
14 Nov 2019 at
Thanks for the reply, Venkatesh. I did try to increase the Tez container
size to 4 GB, but it still gives me the same error. In addition, below are the
settings I have tried:
set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3686m;
set mapreduce.reduce.memory.mb=8192;
set
Try increasing the AM container memory; set it to 2 GB, maybe.
Regards,
Venkat
On Thu, Nov 14, 2019, 6:46 AM Sai Teja Desu <
saiteja.d...@globalfoundries.com> wrote:
Hello All,
I'm new to Hive development and I'm hitting a memory limitation error for a
simple query with a predicate which should return only a few records. Below
are the details of the Hive table, query, and error. Please advise me on how
to efficiently query on predicates which doe
Turns out I was using the wrong JAR to provide the base classes for LlapDaemon.
Removing hadoop-client-* from the classpath and using hadoop-common instead
fixed this problem.
From: Aaron Grubb
Sent: Monday, November 11, 2019 1:11 PM
To: user@hive.apache.org
Subject: LLAP/Protobuffers Error
Hello all,
I'm running a LLAP daemon through YARN + ZK. The container for a Hive query
begins to execute but there's a class cast error that I don't know how to
debug. Here's the logs:
cat syslog_dag_
ELAPSED TIME: 506.91 s
I am sure LLAP daemon and slider-appmaster daemon are normal.
I looked into the Tez AM log in tez-ui and didn't find any Exception. I am
so confused: where are the Exceptions or error infos about this?
Has anyone else met the same issue?
Maria.
Hi,
Thanks. My Hadoop is:
hadoop version
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of
HADOOP_PREFIX.
*Hadoop 3.1.0*
Source code repository https://github.com/apache/hadoop -r
16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with pr
Hi,
> java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.FileStatus.compareTo(Lorg/apache/hadoop/fs/FileStatus;)I
> (state=,code=0)
Are you rolling your own Hadoop install?
https://issues.apache.org/jira/browse/HADOOP-14683
Cheers,
Gopal
Apache Hive
0: jdbc:hive2://rhes75:10099/default> use accounts;
No rows affected (0.011 seconds)
0: jdbc:hive2://rhes75:10099/default> Select TransactionDate, DebitAmount,
CreditAmount, Balance from ll_18740868 limit 3;
Error: java.io.IOException: java.lang.RuntimeException: ORC split
generati
Hi All,
I'm new to Hive and did this tutorial:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
After the "Running Hive CLI" step I've decided to run the "show tables;"
command and got this error:
FAILED: HiveException java.lang.RuntimeEx
Hi.
Just an update, it is working when I use the Default HiveServer JDBC URL.
The error occurs when I use LLAP.
Regards,
Bernard
On Fri, Jul 5, 2019 at 10:40 AM Bernard Quizon <
bernard.qui...@cheetahdigital.com> wrote:
Hi.
So I created a GenericUDF that returns a map, it works fine on simple
SELECT statements.
*For example:*
SELECT member_id, map_merge(src_map, dest_map, array('key1')) from
test_table limit 100;
But it returns an error when I use it on JOINs, for example:
SELECT
cust100.map_mer
val hiveConnection = DriverManager.getConnection(s"jdbc:hive2:///", "", "")
val stmt = hiveConnection.createStatement
stmt.execute(s"CREATE DATABASE IF NOT EXISTS tmp")
stmt.execute(
s"""|CREATE TABLE IF NOT EXISTS tmp.test_table(
Hello,
I am currently using the HiveMetaStore Java Client to connect to hive's
metastore and get metadata about hive tables. I am getting the following
error messages occasionally, and having to have to restart the code for it
to get working again.
Got exce
> ,row_number() over ( PARTITION BY A.dt,A.year, A.month,
>A.bouncer,A.visitor_type,A.device_type order by A.total_page_view_time desc )
>as rank
from content_pages_agg_by_month A
The row_number() window function is a streaming function, so this should not
consume a significant p
From: Jörn Franke
Sent: 10 January 2019 PM 01:29
To: user@hive.apache.org
Cc: Shashikant Deore
Subject: Re: Out Of
:57, Sujeet Pardeshi wrote:
Hi Pals,
I have the below Hive SQL which is hitting the following error "at
java.lang.Thread.run(Thread.java:745) Caused by: java.lang.OutOfMemoryError:
Java heap space at". It's basically going out of memory. The table on which the
query is being hit has 246608473 (246 millio
Hi,
Does Hive 2.3.4 support Hadoop 2.8.x?
When compiling Hive 2.3.4 on Hadoop 2.8.5, I got the following errors:
Findings:
1) Hive 2.3.4 on Hadoop 2.7.7: no problem
2) Hive 2.3.4 on Hadoop 2.8.5: 7 errors
[ERROR]
/apache-hive-2.3.4-src/shims/common/src/main/test/org/apache/hadoop/hive/io
Yes, it works. Thank you very much,
Garry
From: Suresh Kumar Sethuramaswamy
Reply-To: "user@hive.apache.org"
Date: Wednesday, November 7, 2018 at 3:10 PM
To: "user@hive.apache.org"
Subject: Re: Create external table with s3 location error
Thanks for the logs. Couple of th
Hi Suresh,
I am using Hive 1.1.0-cdh5.14.4 and hive server log as below.
2018-11-07 19:43:16,581 WARN [main]: server.HiveServer2
(HiveServer2.java:startHiveServer2(581)) - Error starting HiveServer2 on
attempt 1, will retry in 6ms
java.lang.RuntimeException
> reboot the server. Any suggestion?
>
>
>
> hive> create external table kv (key int, values string) location
> 's3://cu-iclick/test';
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:java.lang.NullPointerException)
>
>
>
> Garry
>
in the classpath and keep the one which is
shipped with Hive 3.0 only.
Regards
Tanvi Thacker
On Sat, Oct 27, 2018 at 5:50 AM ZongtianHou wrote:
Hi, everyone,
I have installed Hive 3.0 and tried it. I created a table, but can't insert a
record into it. Below is the operation and error info; any hint will be much
appreciated!
hive> create table ii(a int);
OK
Time taken: 1.337 seconds
hive> insert into ii values (2);
Hi Hive experts,
I have an EMR cluster with 1 master node, 3 core nodes, and task nodes
autoscaled from min 1 to max 20 nodes.
Hive table's data is 3.5 GB with 1.3e6 rows and 28 columns, and we can't run
any query with it, as it fails due to a memory error.
Initially got the below error:
A
Looking closer, this looks like something DataGrip is breaking, not Hive.
Thanks
Shawn
From: Shawn Weeks
Sent: Thursday, October 18, 2018 8:00 AM
To: user@hive.apache.org
Subject: Hive 1.2.1 - Error getting functions
I'm working on a small project to get embedded Hive instances running in D
I'm working on a small project to get embedded Hive instances running in Docker
for testing Hive deployments. I'm getting the following error after DataGrip
connects, and I'm trying to figure out whether I'm missing a hive-site config,
as currently I'm using all defaults.
"TransactionType":"TransactionType",
>"SortCode":"SortCode",
>"AccountNumber":"AccountNumber",
>"TransactionDescription":"TransactionDescription",
>"DebitAmount":"DebitAmount",
"Balance":"Balance"
}'
)
TBLPROPERTIES ('mongo.uri'='mongodb://account_user_RO:mongodb@rhes75
:60100/accounts.ll_18740868_mongo')
;
In debug mode it throws this error
CREATE EXTERNAL TABLE ll_18740868_mongo (
TransactionDate
'python add.py' as (add string);
But the task ends with errors:
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Hive Runtime Error while closing
operators
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:207)
at org.apach
This is hive 3 on Hadoop 3.1
I am getting this error in a loop
2018-07-03 17:43:44,929 INFO [main] SessionState: Hive Session ID =
5f38c8a3-f269-42e0-99d8-9ddff676f009
2018-07-03 17:43:44,929 INFO [main] server.HiveServer2: Shutting down
HiveServer2
2018-07-03 17:43:44,929 INFO [main
3.0.3 I had the following
0: jdbc:hive2://rhes75:10099/default> select count(1) from sales;
Error: Error while processing statement: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. ORC split
generation failed with exception: java.lang.NoS
Thanks
I assume two things:
1) Create these ORC tables from scratch and populate them (these are older
tables from 2.7).
2) Do I need to upgrade to Hadoop 3.1 as suggested as well? Or can I keep the
current Hadoop 3.0.3 and just redo these ORC tables?
Sounds like I need to upgrade Hadoop to 3.1?
> This is Hadoop 3.0.3
> java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.FileStatus.compareTo(Lorg/apache/hadoop/fs/FileStatus;)I
> (state=08S01,code=1)
> Something is missing here! Is this specific to ORC tables?
No, it is a Hadoop BUG.
https://issues.apache.org/jira/browse/HADOOP-1468
. Can you try with hadoop-3.1?
>
> Thanks
> Prasanth
>
>
>
> On Mon, Jun 25, 2018 at 9:55 AM -0700, "Mich Talebzadeh" <
> mich.talebza...@gmail.com> wrote:
>
> Hive version 3
Hive version 3
An ORC partitioned table
0: jdbc:hive2://rhes75:10099/default> select count(1) from sales;
Error: Error while processing statement: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. ORC split
generation failed with except
.DbTxnManager;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=20;
UPDATE t set object_name = 'Mich' WHERE object_id = 594688;
And this is the error I get at the end
Error: Error while processing statement: FAILED: Execution Error, return
code -101 from org.apach
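As background, UPDATE in Hive also requires the target to be a full-ACID table. A minimal sketch of such a table follows; the definition is illustrative and may not match the actual table t:

```sql
CREATE TABLE t (object_id BIGINT, object_name STRING)
CLUSTERED BY (object_id) INTO 4 BUCKETS  -- bucketing was mandatory for ACID tables before Hive 3
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```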
The problem is that, by default, Hive runs on MR, although you get a warning
that it is best to use Spark or Tez.
That is fair enough, but Hive 3 expects Tez to be there. Otherwise the
start-up script throws an error and waits a minute to retry the connection
to the metastore!
2018-06-14 14:48:16,989 INFO
*Hadoop 3.0.3, Hive (version 3.0.0)*
Running a simple query
select count(1) from sales;
I get the following error in container
Error: Could not find or load main class
org.apache.hadoop.mapreduce.v2.app.MRAppMaster
The container file launch_container.sh has the following entry
exec /bin/bash
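The "Could not find or load main class ...MRAppMaster" error usually means the MapReduce framework jars are not on the container's classpath. A commonly used mapred-site.xml fragment is sketched below; the paths assume a conventional Hadoop layout under /opt/hadoop and are illustrative:

```xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
</property>
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
```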
Thank you very much for your reply. I have changed the port number and set the
thrift.bind.host to localhost, but I still get the error. Do you have any
ideas about this?
beeline> !connect jdbc:hive2://localhost:1 anonymous anonymous
Connecting to jdbc:hive2://localhost:1
/04/24 16
Hi,
I have started hiveserver2 and try to connect it with beeline using the
following command:
>!connect jdbc:hive2://localhost:10002/default
But get the following error
WARN jdbc.HiveConnection: Failed to connect to localhost:10002
Unknown HS2 problem when communicating with Thrift ser
Sent: Monday, 20 November 2017 22:37
To: user@hive.apache.org
Subject: Error "Unable to instantiate
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient"
Hi all,
Hadoop version: 2.6.0 (Oracle Linux 7)
Hive version: 2.3.2
I am quite a new
1 - 100 of 1437 matches