Hi All,
I have filed a feature enhancement request on JIRA:
https://issues.apache.org/jira/browse/HIVE-4017
On Thu, Feb 7, 2013 at 10:01 PM, Edward Capriolo wrote:
> That is a good way to do it. We do it with comments sometimes.
>
> select /* myid bla bla*/ x,y,z
>
> Edward
>
> On Thu, Feb 7, 2013 a
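As a sketch of that comment trick from a Java client, assuming the HiveServer1-era JDBC driver; the driver class, URL, host, and query below are illustrative, not something from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TaggedQuery {
  public static void main(String[] args) throws Exception {
    // HiveServer1-era JDBC driver; host, port, and database are assumptions.
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection con = DriverManager.getConnection(
        "jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = con.createStatement();
    // The comment rides along in the query text that Hive records, so the
    // running job can later be matched back to "myid".
    ResultSet rs = stmt.executeQuery("select /* myid */ x, y, z from t");
    while (rs.next()) {
      System.out.println(rs.getString(1));
    }
    con.close();
  }
}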
Please send Hive-related questions to the Hive user community list
(user@hive.apache.org) instead of to me directly. For more details, read
http://hive.apache.org/mailing_lists.html#Users. I've added the list
in my response here; please carry discussions forward on the list :-)
Neither is faster
Hi,
I looked through the syslogs and found the following exceptions.
Can anyone help me figure out where the error is?
java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
    at org.apache.hadoop.ut
Hi Mark,
Thanks for the response!
UDAFPercentile.java has two terminate() methods because it handles two
different input types via its two inner evaluator classes:
PercentileLongEvaluator and PercentileLongArrayEvaluator.
I am handling only a single input type of double from one table column to
the it
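For contrast, a minimal sketch of a UDAF with a single evaluator, and therefore a single terminate(), over a double column. All names here are illustrative, and the placeholder aggregation is a plain average rather than a percentile:

import org.apache.hadoop.hive.ql.exec.UDAF;
import org.apache.hadoop.hive.ql.exec.UDAFEvaluator;

public final class UDAFMyAgg extends UDAF {

  // Intermediate aggregation state passed between map and reduce sides.
  public static class State {
    double sum;
    long count;
  }

  // One evaluator for one input type (double) means one terminate().
  public static class MyDoubleEvaluator implements UDAFEvaluator {
    private State state;

    public MyDoubleEvaluator() {
      init();
    }

    public void init() {
      state = new State();
    }

    // Called once per input row.
    public boolean iterate(Double value) {
      if (value != null) {
        state.sum += value;
        state.count++;
      }
      return true;
    }

    // Emits the partial state at the end of a map-side group.
    public State terminatePartial() {
      return state.count == 0 ? null : state;
    }

    // Folds a partial state from another task into this one.
    public boolean merge(State other) {
      if (other != null) {
        state.sum += other.sum;
        state.count += other.count;
      }
      return true;
    }

    // The single final result; placeholder logic (an average).
    public Double terminate() {
      return state.count == 0 ? null : state.sum / state.count;
    }
  }
}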
Hive-thrift is definitely the best option so far. That said, I am wondering
if it's possible to load the metastore in local mode [1] to avoid the
dependency on an external service. Can I read HIVE_CONF_DIR for the
javax.jdo.option.* parameters and talk directly to the SQL server hosting the
Hive metadata?
[1] https://cwiki.apach
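A minimal sketch of that idea, assuming hive-site.xml under HIVE_CONF_DIR carries the javax.jdo.option.* settings and hive.metastore.uris is left unset, so the client opens the backing database directly rather than a remote thrift service; the database and table names are hypothetical:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;

public class LocalMetastoreSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    // Pull in hive-site.xml from HIVE_CONF_DIR if it is not already on the
    // classpath; it holds the javax.jdo.option.* JDBC settings.
    String confDir = System.getenv("HIVE_CONF_DIR");
    if (confDir != null) {
      conf.addResource(new Path(confDir, "hive-site.xml"));
    }
    // With hive.metastore.uris unset, this client runs in embedded (local)
    // mode and talks JDBC to the metastore database itself.
    HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
    Table t = client.getTable("default", "my_table"); // hypothetical table
    System.out.println(t.getSd().getLocation());
    client.close();
  }
}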
so that I can still query the existing data successfully
On Tue, Feb 12, 2013 at 10:08 PM, Hamza Asad wrote:
> Actually I have data in HDFS under the HIVE/Warehouse path. I ran short of
> disk space, so I'm changing the HDFS location (to a new HDD partition). Please
> tell me how I can transfer my existing da
Actually I have data in HDFS under the HIVE/Warehouse path. I ran short of disk
space, so I'm changing the HDFS location (to a new HDD partition). Please tell
me how I can transfer my existing data safely to the new location.
On Tue, Feb 12, 2013 at 9:54 PM, Nitin Pawar wrote:
> A little more clarification:
>
But then you're writing Java code!!! The Horror!!!
;^P
On Tue, Feb 12, 2013 at 10:53 AM, Edward Capriolo wrote:
> If you use hive-thrift/hive-service you can get the location of a
> table through the Table API (instead of Dean's horrid bash-isms)
>
>
> http://hive.apache.org/docs/r0.7.0/api/org/
hadoop fs -mv old_path new_path
If the new_path isn't in HDFS, use -get instead of -mv.
If you're moving Hive tables, you should then use ALTER TABLE to change the
metadata. Or, if the tables are external and have no partitions, you could
just drop them and recreate them with the new location.
O
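For completeness, a sketch of the same move done programmatically with the Hadoop FileSystem API instead of the shell; both paths are hypothetical, and within one cluster a rename is a cheap metadata-only operation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MoveWarehouseData {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml/hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path oldPath = new Path("/user/hive/warehouse/my_table"); // hypothetical
    Path newPath = new Path("/new_disk/warehouse/my_table");  // hypothetical
    // Equivalent of `hadoop fs -mv old_path new_path`.
    if (!fs.rename(oldPath, newPath)) {
      throw new RuntimeException("rename failed: " + oldPath + " -> " + newPath);
    }
    // The metastore still records the old location; follow up with
    // ALTER TABLE ... SET LOCATION as described above.
  }
}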
A little more clarification:
Is it the same HDFS cluster?
When you say migrating data from one location to another, are you keeping
the Hive table metadata the same?
How much disk capacity do you have?
On Tue, Feb 12, 2013 at 10:20 PM, Hamza Asad wrote:
> I want to change my HIVE/HDFS directory.
If you use hive-thrift/hive-service you can get the location of a
table through the Table API (instead of Dean's horrid bash-isms)
http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.Client.html#get_table(java.lang.String,%20java.lang.String)
Table t = ..
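A sketch of how that snippet might continue, fetching a table's HDFS location through the raw thrift client; the host, the port (9083 is the conventional metastore port), and the table names are assumptions:

import org.apache.hadoop.hive.metastore.api.Table;
import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class TableLocationSketch {
  public static void main(String[] args) throws Exception {
    // Host and port of the metastore thrift service are assumptions.
    TTransport transport = new TSocket("metastore-host", 9083);
    transport.open();
    ThriftHiveMetastore.Client client =
        new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
    Table t = client.get_table("default", "my_table"); // hypothetical names
    System.out.println(t.getSd().getLocation());       // the table's location
    transport.close();
  }
}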
Looks like I am not doing a good job of explaining my requirements.
My program is like a workflow engine: it reads a script/configuration file,
and only after reading the configuration file does it know which metadata to
read from Hive. E.g., here is a simplified version of the script file:
== Example In
You are correct about what I am hoping to do: basically, emit two records
for every row. What was interesting was that when I just did the union in the
FROM clause, it didn't seem to do a double table scan. I ended up doing:
INSERT OVERWRITE TABLE table_summary
select col1, unioned_col, count(distinct col4) f
I'll mention another bash hack that I use all the time:
hive -e 'some_command' | grep for_what_i_want |
sed_command_to_remove_what_i_dont_want
For example, the following command will print just the value of
hive.metastore.warehouse.dir, sending all the logging junk written to
stderr to /dev/null:
In our case, we needed to access Hive metadata inside our Oozie workflows.
We were using HCatalog as our Hive metadata store, and it was easy to access
table metadata directly via the HCatalog APIs.
Parag, will it be possible for you guys to change your metadata store?
If not, then you will need to
Thanks Mark for your reply.
My program is like a workflow management application, and it runs on a client
machine, not on the Hadoop cluster. I use 'hadoop jar' so that my application
has access to DFS and the Hadoop API. I would also like my application to have
access to the Hive metadata the same way it