Hi
Currently the CapacityScheduler does not have preemption. So basically, when
Job1 starts finishing and freeing up slots, Job2's tasks will start getting
scheduled. Queue capacities are elastic in nature; one way you can prevent a
queue from expanding beyond its share is by setting max task limits on the
queues. That way your Job1
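For what it's worth, a minimal capacity-scheduler.xml sketch of that idea. The queue name ("queue1") is made up, and the maximum-capacity key is an assumption, since the exact limit properties vary across Hadoop versions:

    <property>
      <name>mapred.capacity-scheduler.queue.queue1.capacity</name>
      <value>50</value>
    </property>
    <!-- Assumed key: hard-caps queue1 at 50% so it cannot elastically
         take over idle capacity from the other queue. -->
    <property>
      <name>mapred.capacity-scheduler.queue.queue1.maximum-capacity</name>
      <value>50</value>
    </property>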
Hi all,
We are using the capacity scheduler to schedule resources among different
queues for a single user (hadoop) only. We have set the queues to have equal
shares of the resources. However, when the 1st task starts in the first queue
and is consuming all the resources, the 2nd task started in the 2nd queue will
It should work starting with 0.7 (both client and server need to be 0.7).
As for the keys, see HIVE-1734.
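A sketch of what that looks like, assuming the map_keys()/map_values() UDFs from HIVE-1734 are available in your build (table and column names are taken from the thread below):

    -- Whole map at once:
    SELECT key_pairs FROM hive_map_test;
    -- Keys / values only (assumes the HIVE-1734 UDFs are available):
    SELECT map_keys(key_pairs) FROM hive_map_test;
    SELECT map_values(key_pairs) FROM hive_map_test;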
-----Original Message-----
From: Sunderlin, Mark [mailto:mark.sunder...@teamaol.com]
Sent: Thursday, April 28, 2011 10:57 AM
To: 'user@hive.apache.org'
Subject: Selecting an entire map, not
A custom SerDe would be your best bet. We're using one to do exactly that.
Regards,
Rick
On Apr 28, 2011 11:29 AM, "Shantian Purkad"
wrote:
> Any suggestions?
>
> From: Shantian Purkad
> To: user@hive.apache.org
> Sent: Tue, April 26, 2011 11:05:46 PM
> Su
OK, feeling a bit dumb here, so I need the Hive user group to give me a jolt to the head ...
Given a table like:
describe hive_map_test
col_name          data_type    comment
log_record_type   int
key_pairs         map
ev_date           string
and given that this works:
sel
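To spell out the two access patterns in question (the key name 'some_key' is made up):

    -- Single-key lookup:
    SELECT key_pairs['some_key'] FROM hive_map_test;
    -- Entire map column in one select:
    SELECT key_pairs FROM hive_map_test;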
Any suggestions?
From: Shantian Purkad
To: user@hive.apache.org
Sent: Tue, April 26, 2011 11:05:46 PM
Subject: Multi character delimiter for Hive Columns and Rows
Hello,
We have a situation where the data coming from source systems to Hive may
contain the c
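For what it's worth, one common way to handle multi-character delimiters (not necessarily the custom SerDe Rick mentioned) is the contrib RegexSerDe. A sketch, where the table layout, jar path, and the '||' delimiter are all made up:

    -- Assumes the Hive contrib jar is available; path is an assumption.
    ADD JAR /path/to/hive_contrib.jar;
    CREATE TABLE src_data (col1 STRING, col2 STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      -- Two fields separated by a literal '||' (made-up delimiter):
      "input.regex" = "(.*?)\\|\\|(.*)",
      "output.format.string" = "%1$s||%2$s"
    )
    STORED AS TEXTFILE;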
Thanks for your help
2011/4/28 Loren Siebert
> You have the file type as sequence file, but you are trying to load a GZip
> file. Won’t that only work if the table is defined as a text file?
>
I thought sequence file = gzip file before, and now I realize it's not.
It works when the table is defined as
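For the record, a sketch of the combination that does work (table name and path are made up): a TEXTFILE table, into which Hive will load a .gz file and decompress it transparently at read time:

    CREATE TABLE logs (line STRING)
    STORED AS TEXTFILE;
    -- Hive picks the codec from the .gz extension when reading:
    LOAD DATA LOCAL INPATH '/tmp/logs.gz' INTO TABLE logs;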
Seems I found the reason.
I tried to upgrade my hive-0.5.0 to hive-0.7 today: I executed the SQL in the
upgrade directory and migrated the conf to the new Hive. After that, I found
that I couldn't drop tables anymore.
When dropping a table, I saw a connection from Hive trying to do
something on the "IDXS" table in Postgres,
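If anyone else hits this, a quick sanity check against the metastore database (Postgres quoting shown; this is just a sketch):

    -- Run in the Hive metastore database. If the 0.7 upgrade scripts
    -- created the index tables, this should return a count, not an error:
    SELECT COUNT(*) FROM "IDXS";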