nd it seems to work fine. In both cases, with PPD enabled and disabled, I
> am getting 3 as the result.
>
> - Prasanth
>
>
> On Sun, Jan 4, 2015 at 3:04 PM, wzc wrote:
>
>> Recently we found a bug with ORC PPD; here is the test case:
>>
>> use test;
>> creat
@Prasanth would you help me look into this problem?
Thanks.
On Mon, Jan 05, 2015 at 12:03:42 AM wzc wrote:
> Recently we found a bug with ORC PPD; here is the test case:
>
> use test;
> create table if not exists test_orc_src (a int, b int, c int)
> stored as orc;
> create t
Recently we found a bug with ORC PPD; here is the test case:
use test;
create table if not exists test_orc_src (a int, b int, c int)
stored as orc;
create table if not exists test_orc_src2 (a int, b int, d int)
stored as orc;
insert overwrite table test_orc_src select 1,2,3 from dim.city
limit 1;
i
We also encounter this problem; it seems to happen in various
situations. For now we apply the patch from
https://issues.apache.org/jira/browse/HIVE-7167, which adds some retry
logic to HiveMetaStoreClient initialization, hoping to reduce the
occurrence of this problem.
Thanks.
2014-07-02 22:40 GMT+08:00 hado
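For context, the retry-on-initialization approach mentioned above follows a common pattern. The Python sketch below illustrates the pattern only, not the actual HIVE-7167 patch (the `connect` function and its failure behavior are hypothetical stand-ins for a flaky metastore connection):

```python
import time

def retry(fn, attempts=3, delay=0.01):
    """Call fn, retrying up to `attempts` times on exception,
    sleeping `delay` seconds between tries."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:  # the real patch catches metastore errors
            last_err = err
            time.sleep(delay)
    raise last_err

# Hypothetical flaky "metastore connection": fails twice, then succeeds.
calls = {"n": 0}
def connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("metastore not ready")
    return "connected"

print(retry(connect))  # → connected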
We just upgraded our Hive from 0.11 to 0.13, and we find that
running "select * from src1 user limit 5;" in Hive 0.13 reports the
following errors:
> ParseException line 1:14 cannot recognize input near 'src1' 'user' 'limit'
> in from source
I don't know why "user" would be a reserved keyw
Hi,
We also encounter this in Hive 0.13. We need to enable concurrency in
daily ETL workflows (to avoid a sub-ETL starting to read a parent ETL's
output while it is still running).
We found that in Hive 0.13, sometimes when you open the Hive CLI shell it
outputs the message "conflicting lock present for defa
hi all:
I've created a JIRA for this problem:
https://issues.apache.org/jira/browse/HIVE-7847 .
Thanks.
2014-08-22 1:59 GMT+08:00 wzc :
> hi all:
>
> I tested the above example with Hive trunk and it still fails. After some
> debugging, I finally found the cause of the proble
t;}
> }
If the fix is desirable, I may create a ticket in the Hive JIRA and upload
a patch for it. Please correct me if I'm wrong.
Thanks.
2014-07-31 4:56 GMT+08:00 wzc :
>
> hi,
> Currently, if we change orc format hive table using "alter table
or
hi,
Currently, if we change the column type of an ORC-format Hive table using
"alter table orc_table change c1 c1 bigint", it throws an exception from
the SerDe ("org.apache.hadoop.io.IntWritable
cannot be cast to org.apache.hadoop.io.LongWritable") at query time; this
is different behavior from Hive (using other file
Recently we have been converting some data warehouse tables from textfile
to ORC format. Some of our Hive SQL queries that read these ORC tables
failed at the reduce stage. Reducers failed while copying map outputs with
the following exception:
Caused by: java.lang.OutOfMemoryError: Java heap space
> at
> org.ap
The bug remains even after I apply the patch in HIVE-4206 :( The explain
result hasn't changed.
2013/3/28 Navis류승우
> It's a bug (https://issues.apache.org/jira/browse/HIVE-4206).
>
> Thanks for reporting it.
>
> 2013/3/24 wzc :
> > Recently we tried to upgrade our h
Recently we tried to upgrade our Hive from 0.9 to 0.10, but found some of
our Hive queries became almost 7 times slower. One such query consists of
multiple table outer joins on the same key. By looking into the query, we
found the query plans generated by Hive 0.9 and Hive 0.10 are different.
Here is the
' select /* myid bla bla*/ x,y,z '
I can't run the above command using the CLI or "hive -f"; could you
explain how to add a comment in a Hive query?
2013/2/8 Edward Capriolo
> That is a good way to do it. We do it with comment sometimes.
>
> select /* myid bla bla*/ x,y,z
>
> Edward
>
> On Thu, Feb 7, 201
-0.10 RC0 http://people.apache.org/~hashutosh/hive-0.10.0-rc0/ You
> can try hive from there and see if this fixes this connection leaking
> problem.
>
> Thanks,
> Ashutosh
>
>
> On Fri, Jan 4, 2013 at 10:51 PM, wzc wrote:
>
>> Hi all:
>> I am using hive 0.9 and zookee
hi all:
I have found the answer here
<http://osdir.com/ml/hive-user-hadoop-apache/2010-05/msg00038.html>:
adding the following lines before the import solved the problem:
import sys
import os
sys.path.append(os.getcwd())
2012/5/14 wzc
> Hi all:
> I try to run simple transform sc
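Putting that fix together, a minimal Hive TRANSFORM script skeleton might look like the sketch below (the `common_utils` helper module name is hypothetical; in practice it would be shipped alongside the script with ADD FILE):

```python
import os
import sys

# Hive ships only the script file itself to each task's working
# directory, so add the cwd to sys.path before importing siblings.
sys.path.append(os.getcwd())
# import common_utils  # hypothetical helper shipped via ADD FILE

def transform(line):
    """Identity transform: split the tab-delimited row and re-emit it."""
    cols = line.rstrip("\n").split("\t")
    return "\t".join(cols)

# In the real script, rows arrive on stdin:
#   for line in sys.stdin:
#       print(transform(line))
print(transform("1\t2\t3\n"))
```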
Hi all:
I'm trying to run a simple transform script in Hive; my script is written
in Python. But when I try to import another file in the script, the task
fails. There are some basic classes used by many transform scripts, so I
would like to know how to import another file in my transform scrip
LOCAL DIRECTORY '/mydir'
> SELECT …
>
> The output columns will be delimited with ^A (\001). If you have to have
> tab delimited format you can replace them like this:
>
> cat /mydir/* | tr "\001" "\t" >> /mynewdir/myfile.dat
>
> I hope this h
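The same ^A-to-tab translation shown with `tr` above can be done in Python when post-processing the dump there instead; a minimal sketch:

```python
def soh_to_tab(text):
    # Hive's default field delimiter is ^A (\x01); replace it with tabs.
    return text.replace("\x01", "\t")

row = "1\x012\x013"
print(soh_to_tab(row))
```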
Hi all:
I am new to Hive, and I'm trying to run a query through the Hive CLI and
load the result into MySQL.
I redirect the CLI output to a tmp file and load the tmp file into a
MySQL table. The problem is that some columns of our query result may
contain special chars, such as tab (\t), new line(
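One common workaround for that problem (a sketch of the general idea, not something from this thread) is to sanitize the delimiter characters inside each field before writing the file that MySQL will load, so tabs and newlines in values cannot be mistaken for field or row separators:

```python
def sanitize(field):
    """Replace characters that would break a tab-delimited dump:
    tabs, newlines, and carriage returns inside a value become spaces."""
    return field.replace("\t", " ").replace("\n", " ").replace("\r", " ")

def to_tsv_row(fields):
    # Join sanitized fields into one tab-delimited output row.
    return "\t".join(sanitize(f) for f in fields)

print(to_tsv_row(["a\tb", "line1\nline2", "ok"]))
```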