...java:696)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.run(OrcInputFormat.java:824)
... 3 more
On Sun, Oct 5, 2014 at 10:03 AM, Echo Li wrote:
> Thanks guys, this is great info.
>
> - it works fine when using "set hive.execution.engine=mr"
>
> - there is nothing else printed after the exception, and no application
> logs either
Thanks guys, this is great info.
- it works fine when using "set hive.execution.engine=mr"
- there is nothing else printed after the exception, and no application
logs either
Is this a bug?
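For reference, a minimal runnable form of the workaround mentioned above (only the property name and value come from the thread; the comments are illustrative):

-- fall back from Tez to MapReduce for the current session only
set hive.execution.engine=mr;

-- once the ORC/Tez split-generation problem is resolved, switch back with:
set hive.execution.engine=tez;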
On Fri, Oct 3, 2014 at 6:39 PM, Gopal V wrote:
> On 10/3/14, 5:20 PM, Echo Li wrote:
java.io.IOException: java.lang.RuntimeException: serious problem
The table is in ORC format, stored in Google Cloud Storage.
On Fri, Oct 3, 2014 at 4:54 PM, Vikram Dixit wrote:
> Hi,
>
> Can you share the query and the application logs with us? Which
> version of hive is this?
>
> Thanks
> Vikram.
>
> On Fri, Oct 3, 2014
Hi guys,
I ran a simple Hive query and got the error below:
Status: Running (application id: application_1412033199033_6623)
Map 1: -/-   Reducer 2: 0/1
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1412033199033_6623_1_01,
diagnostics=[Vertex Input: visit initializer failed ...
> ... instantiated in the
> default constructor or during the call of initialize()
>
> Please keep me informed if it works or not,
>
> Regards,
>
> Furcy
>
>
> 2014-09-09 1:44 GMT+02:00 Echo Li :
>
>> I wrote a UDTF in Hive 0.13; the function parses a column which is a JSON string and returns a table.
I wrote a UDTF in Hive 0.13; the function parses a column which is a JSON
string and returns a table. The function compiles successfully after adding
hive-exec-0.13.0.2.1.2.1-471.jar to the classpath. However, when the jar is
added to Hive and a function is created using the jar, and I then try to run a
query using the function ...
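For context, the registration and invocation steps usually look roughly like the sketch below (the jar path, class name, function name, table, and columns are hypothetical placeholders, not taken from the thread):

-- hypothetical jar path and UDTF class; substitute the real ones
add jar /tmp/json-udtf.jar;
create temporary function parse_json as 'com.example.hive.udtf.ParseJsonUDTF';

-- UDTFs are typically invoked through LATERAL VIEW
select t.id, j.k, j.v
from some_table t                               -- hypothetical table
lateral view parse_json(t.json_col) j as k, v;  -- aliases depend on the UDTF's declared output schema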
... drop table in Hive metadata?
>> >
>> > Hi Echo,
>> >
>> > I don't think there is any way to prevent this. I had the same concern in
>> hbase, but found out that it is assumed that users of the system are very
>> much aware of it. I have been into Hive for the last 3 months, was ...
Good Friday!
I was trying to apply a certain level of security in our Hive data warehouse
by changing the access mode of directories and files on HDFS to 755. I think
that is enough to keep a new user from removing the data; however, the user
can still drop the table definition in the Hive CLI, so it seems the "revoke" doesn't ...
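As a rough sketch of the two layers involved here, assuming Hive's legacy (pre-SQL-standard) authorization mode; the warehouse path, table, and user name are hypothetical, and legacy GRANT/REVOKE is advisory rather than a hard security boundary:

-- HDFS side: 755 leaves files readable by everyone but writable only by the owner
dfs -chmod -R 755 /user/hive/warehouse/sales.db/orders;

-- metastore side: legacy authorization can revoke DROP on the table definition
revoke drop on table orders from user new_user;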
Hi,
I created an index and tried to rebuild it by partition; however, it did not work.
Here is my script to create and rebuild the index:
create index ix_name_cid on table a (customer_id) as
'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED
REBUILD;
alter index ix_name_cid on a partition (...) rebuild;
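For comparison, a minimal sketch of the rebuild-by-partition form, assuming a hypothetical partition column named dt (replace it with table a's actual partition spec):

-- "dt" is a hypothetical partition column; use the table's real partition key
alter index ix_name_cid on a partition (dt='2014-01-01') rebuild;

-- confirm the index is registered on the table
show index on a;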
> ... group by's
>
> sometimes, if you have cross joins, it will be helpful as well
>
>
> On Tue, Mar 19, 2013 at 11:44 PM, Echo Li wrote:
>
>> Good day,
>>
>> I wonder how much a Hive index helps; in a test I ran earlier it seemed to
>> take a long time to build the index, ...
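As a rough aside on when an index pays off, a sketch of the usual knob for letting the planner consult a built index at query time, reusing the table and compact index from the create-index script above; the filter values are hypothetical, and the property mainly helps predicate pruning rather than the aggregation itself:

-- off by default; lets the optimizer use a built index to prune input for WHERE predicates
set hive.optimize.index.filter=true;

-- hypothetical query: the compact index on customer_id can narrow the scan for the filter,
-- while the group by still aggregates whatever rows survive
select customer_id, count(*)
from a
where customer_id between 1000 and 2000
group by customer_id;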
Hi guys,
I plan to bucket a table by "userid" as I'm going to do intensive calculations
using "group by userid". There are about 110 million rows with roughly 7 million
unique userids, so my question is: what is a good number of buckets for this
scenario, and how do I determine the number of buckets?
Any input is appreciated.
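For illustration, a minimal sketch of such a bucketed table (the table name, the extra columns, and the bucket count of 128 are hypothetical examples, not a recommendation for the 110M-row case):

-- 128 is only an example; pick a count that yields reasonably sized bucket files
create table user_activity (
  userid bigint,
  event  string,
  ts     timestamp
)
clustered by (userid) into 128 buckets
stored as orc;

-- pre-Hive 2.x this must be set so inserts actually honor the bucketing clause
set hive.enforce.bucketing=true;
insert overwrite table user_activity
select userid, event, ts from user_activity_staging;  -- hypothetical source table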