I ended up getting an error (Hive 0.7.1), but I would have thought
something like the following would work:
SELECT
    user_id,
    obj_key,
    obj[obj_key] AS obj_item
FROM (
    SELECT
        "user1" user_id,
        MAP("k1", "v1", "k2", "v2") obj
    FROM calendar
    LIMIT 1
) tmp
LATER
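The archived message cuts off here, but the usual way to get one row per map entry in Hive is LATERAL VIEW with explode(). A minimal sketch, assuming a Hive version new enough to support explode() on a map column (the 0.7.1 mentioned above may not be); the subquery mirrors the one in the question:
SELECT user_id, obj_key, obj_item
FROM (
    SELECT "user1" AS user_id,
           MAP("k1", "v1", "k2", "v2") AS obj
    FROM calendar
    LIMIT 1
) tmp
LATERAL VIEW explode(obj) o AS obj_key, obj_item;
-- yields one row per map entry: (user1, k1, v1) and (user1, k2, v2)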
I ran into the same problem on the same Mac OS version.
This seems to be a JVM command-line issue: it exceeds its limits, and it's platform independent. I know IntelliJ IDEA handles this case.
On Wed, May 16, 2012 at 5:40 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> I inst
You can see if the classpath is being passed correctly to hadoop by putting in
an echo statement around line 150 of the hive cli script where it passes the
CLASSPATH variable to HADOOP_CLASSPATH.
# pass classpath to hadoop
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${CLASSPATH}"
You could also
Hi all,
I have a table like this:
hive> desc mytable;
ts                      bigint
content                 map<string,string>
hive> select * from mytable;
1354299050      {"F1":"id-1"}
1354299040      {"F1":"id-2","F2":"id-3"}
1354299030      {"F1":"id-3","F2":"id-1","F3":"id-4"}
Does anyone know how to generate a table like this:
hive>
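The desired output is cut off above, but assuming the goal is one row per (ts, key, value) triple, a hedged sketch using LATERAL VIEW and explode() on the content map:
SELECT ts, f_key, f_value
FROM mytable
LATERAL VIEW explode(content) c AS f_key, f_value;
-- e.g. the first row above would come out as: 1354299050  F1  id-1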
Hi David,
It seems like Hive is unable to find the skewed keys on HDFS.
Did you set the hive.skewjoin.key property? If so, to what value?
Mark
On Fri, Nov 30, 2012 at 2:10 AM, David Morel wrote:
> Hi,
>
> I am trying to solve the "last reducer hangs because of GC because of
> truckloads of data" i
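For reference, the two settings involved look roughly like this in a Hive session; the threshold shown is the documented default, not a recommendation:
SET hive.optimize.skewjoin=true;
-- a join key is treated as skewed once it exceeds this many rows
SET hive.skewjoin.key=100000;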
Hi Marc,
While what Dean said is true for different schemas in general, there is a
way to do it all in the same table if the schema changes to the TSV file
are just additions of new tab-separated columns at the very end of each row
and no existing columns are being deleted.
Let's say your TSV file
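The message is truncated here, but the approach it describes usually comes down to ALTER TABLE ... ADD COLUMNS; a minimal sketch with a hypothetical table and column names:
-- new tab-separated fields appended at the end of each row become new trailing columns
ALTER TABLE my_tsv_table ADD COLUMNS (new_col1 STRING, new_col2 STRING);
-- rows from older files that lack the new fields simply return NULL for them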
I just configured hive.metastore.warehouse.dir with the new path and it
works.
Fixed.
2012/11/28 Nitin Pawar
> can you try providing a location 'path_to_file' and create the table again
>
>
> On Wed, Nov 28, 2012 at 2:09 PM, imen Megdiche wrote:
>
>> Hello,
>>
>> I got this error when tryin
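Nitin's suggestion amounts to something like the following, e.g. a table pointing at an explicit HDFS path (hypothetical table name, columns, and path):
CREATE EXTERNAL TABLE my_table (id INT, msg STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/warehouse/my_table';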
Hello,
Is it possible to write the map and merge outputs to external files in order to see them? Otherwise, how can I see the intermediate results?
Thank you
You'll have to define separate tables for the different schemas. You can
"unify" them in a query with the union feature. You should also remove the
header lines in the files, if you still have them, because Hive does not
ignore them, but treats them as "data".
dean
On Fri, Nov 30, 2012 at 2:59 AM
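A hedged sketch of the "unify them with union" idea, with hypothetical table and column names; Hive versions of that era required UNION ALL to sit inside a subquery:
SELECT u.col_a, u.col_b
FROM (
    SELECT col_a, col_b FROM logs_v1
    UNION ALL
    SELECT col_a, col_b FROM logs_v2
) u;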
Hi,
I am trying to solve the "last reducer hangs because of GC because of
truckloads of data" issue that I have on some queries, by using SET
hive.optimize.skewjoin=true; Unfortunately, every time I try this, I
encounter an error of the form:
...
2012-11-30 10:42:39,181 Stage-10 map = 100%,
running 0.9.0 (you can see it from the classpath shown below);
steve@mithril:/shared/cdh4$ echo $HIVE_CONF_DIR
/shared/hive/conf
steve@mithril:/shared/cdh4$ ls -l $HIVE_CONF_DIR
total 152
-rw-r--r-- 1 steve steve 46053 2011-12-13 00:36 hive-default.xml.template
-rw-r--r-- 1 steve steve 1615 2012-
Which version of Hive do you use?
Could you try to add the following debug line in bin/hive before hive actually executes, and see the result?
echo "CLASSPATH=$CLASSPATH"
if [ "$TORUN" = "" ]; then
  echo "Service $SERVICE not found"
  echo "Available Services: $SERVICE_LIST"
  exit 7
else
$