Looks like it's a memory/disk space issue with your database server used to
store the Hive metadata. Can you check the disk usage of the /tmp directory
(the data directory of the DB server)?
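For example (the paths here are just a guess, adjust them to wherever your
metastore DB actually keeps its data):

  # free space on the filesystem holding /tmp
  df -h /tmp
  # size of the DB server's data directory, assuming a MySQL metastore under /var/lib/mysql
  du -sh /var/lib/mysql
  # and a quick look at memory while you are at it
  free -m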
Date: Wed, 6 Feb 2013 18:34:31 +0530
Subject: Getting Error while executing "show partitions TABLE_NAME"
From: chunky.gu..
The partition info you see with 'show partitions' is fetched from the Hive metadata
tables. The reason you are not seeing the path you are expecting might be
either 1) the path got deleted after the data load (do a simple select and
verify you see some data), or 2) you have loaded the data from some other location.
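For example, something along these lines (table name, partition value and
warehouse path are made up) will show both the data and the location the
metastore has on record:

  # sanity check that the partition still returns rows
  hive -e "SELECT * FROM my_table WHERE dt='2013-02-06' LIMIT 10;"
  # show the HDFS location the metastore recorded for that partition
  hive -e "DESCRIBE FORMATTED my_table PARTITION (dt='2013-02-06');"
  # confirm files actually exist under that location
  hadoop fs -ls /user/hive/warehouse/my_table/dt=2013-02-06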
If done properly, "add jar" should work the same as passing the jar
with --auxpath. Can you run the "list jars;" command from the CLI or Hue and check
if you see the jar file?
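For example (the jar path is hypothetical):

  # per-session registration from the CLI (or Hue); the jar should then show up under "list jars;"
  hive -e "ADD JAR /home/me/udfs/my-udfs.jar; LIST JARS;"
  # the startup-time alternative being compared here
  hive --auxpath /home/me/udfs/my-udfs.jar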
From: java8...@hotmail.com
To: user@hive.apache.org
Subject: difference between add jar in hive session and hive --auxpath
Date
Since the UDF unix_timestamp() is a non-deterministic function, the Hive query
planner doesn't run partition pruning based on the 'dt' column value. If your
table is partitioned by the 'dt' column, the query would end up scanning the entire
table.
It is ideal to compute the required date value dynamically in a wrapper script
and pass it into the query as a literal so the planner can prune partitions.
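For example, a small wrapper script along these lines (table and column names
are assumptions) keeps the predicate a constant the planner can prune on:

  # compute yesterday's date outside Hive ...
  dt=$(date -d "yesterday" +%Y-%m-%d)
  # ... and pass it into the query as a plain literal
  hive -e "SELECT count(*) FROM my_table WHERE dt = '${dt}';"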
Actually, as the custom UDF "yesterday()" mentioned below is NOT marked with the
annotation @UDFType(deterministic = false), partition pruning should work in
practice. The PartitionPruner has logic around this annotation to check whether a
generic UDF is deterministic or not, and would skip partition pruning only for
non-deterministic functions.
You can always do something like
INSERT OVERWRITE LOCAL DIRECTORY '/path/' SELECT [] FROM []
which saves the result set onto the given local path.
Check the Hive wiki for more info:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
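A concrete (made-up) example; by default the output files are written as
^A-delimited text under the given local path:

  hive -e "INSERT OVERWRITE LOCAL DIRECTORY '/tmp/my_query_output'
           SELECT dt, count(*) FROM my_table GROUP BY dt;"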
> Date: Thu, 9 Aug 2012 17:42:17 -0400
> From:
I am working on copying existing Hive metadata (Hive 0.7.1 with MySQL 5.1) to a
new cluster environment (Hive 0.7.1 with MySQL 5.5). I copied over the
metastore tables and modified the data under the SDS (storage descriptors) table to
reflect the new data path. However, I am getting MySQL integrity constraint errors.
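To illustrate, the path rewrite was along these lines (the metastore DB name,
user and HDFS URIs below are placeholders):

  mysql -u hiveuser -p -D hivemetastore -e \
    "UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://old-namenode:8020', 'hdfs://new-namenode:8020');"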
/en/upgrading-from-previous-series.html
>
> I would do a mysqldump and then re-insert the data for maximum compatibility.
>
> On Mon, Nov 5, 2012 at 6:21 PM, Venkatesh Kavuluri wrote:
> > I am working on copying existing Hive metadata (Hive 0.7.1 with MySQL 5.1)
> > to a new cluster environment (Hive 0.7.1 with MySQL 5.5) ...
It would be worth trying to see whether you still get the error when you restore
the data from the mysqldump onto a separate schema/db on the MySQL 5.1 server.
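Something along these lines (host name, user and the scratch schema name are made up):

  # dump the metastore schema from the 5.1 server
  mysqldump -h mysql51-host -u hiveuser -p hivemetastore > metastore_dump.sql
  # restore it into a fresh schema on the same 5.1 server and re-run the failing statements there
  mysql -h mysql51-host -u hiveuser -p -e "CREATE DATABASE metastore_test;"
  mysql -h mysql51-host -u hiveuser -p metastore_test < metastore_dump.sql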
Mark
On Mon, Nov 5, 2012 at 3:37 PM, Venkatesh Kavuluri wrote:
Sorry for the confusion, the problem is not with the MySQL version upgrade - I
have ind