Yup, got it. Thanks.
On Fri, Jul 27, 2012 at 5:10 PM, Viral Bajaria wrote:
> Hive has a whole lot of functionality built into it. You should
> look at the length(string) function for what you want to achieve. Also I
> would suggest reading the Hive language manual at
> https://cwiki.apa
Hive has a whole lot of functionality built into it. You should
look at the length(string) function for what you want to achieve. Also, I would
suggest reading the Hive language manual at
https://cwiki.apache.org/confluence/display/Hive/LanguageManual. You can
also read
https://cwiki.apache.o
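For the string-length question in this thread, a minimal HiveQL sketch using the built-in length() function might look like the query below, assuming Table1 has a STRING column named character (backticks used in case the name clashes with a keyword):
-- return rows whose character column is shorter than 32 characters
select *
from Table1
where length(`character`) < 32;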
Thank you, Chuck.
Sent from my iPhone
On Jul 27, 2012, at 7:13 PM, "Connell, Chuck" wrote:
> See https://ccp.cloudera.com/display/SUPPORT/Downloads
>
> Cloudera Manager is a small piece of software that downloads, installs and
> configures the Hadoop software suite on an entire cluster. It the
I have a column in Table1 named character, with String datatype.
I want to find all the records from the table where the total number of
characters in the character column is less than 32.
Something like below:
select * from Table1 where count_characters_in_character_column < 32;
Is it possible t
See https://ccp.cloudera.com/display/SUPPORT/Downloads
Cloudera Manager is a small piece of software that downloads, installs and
configures the Hadoop software suite on an entire cluster. It then starts all
the Hadoop services and helps you manage the cluster going forward. For up to
50 compu
Thanks for the reply Bejoy. Is backing up metastore.db stored in MySQL the only
way to provide HA in Hive?
Regards
Abhishek
Sent from my iPhone
On Jul 27, 2012, at 3:02 PM, Bejoy Ks wrote:
> Hi Abhishek
>
> Hadoop 2.0/CDH4 has HA on hdfs level so this HA would be available to all
> clients of h
Hi Edward,
Thanks for the reply, but I am trying to install Hive with HBase.
Regards
Abhishek
Sent from my iPhone
On Jul 27, 2012, at 3:15 PM, Edward Capriolo wrote:
> Datastax backends the hive metastore into Cassandra making it highly
> available, an open sourced Brisk did this originally bu
Hi Artem,
Thanks for the reply. Can you please elaborate a bit more on this?
Regards
Abhishek
Sent from my iPhone
On Jul 27, 2012, at 4:47 PM, "Artem Ervits" wrote:
> Look at Apache Bigtop project.
>
>
>
>
> Artem Ervits
> Data Analyst
> New York Presbyterian Hospital
>
> - Original M
Hi Chuck,
Thanks for the reply. Can you elaborate a bit more on this auto-install tool?
Regards
Abhishek
Sent from my iPhone
On Jul 27, 2012, at 3:26 PM, "Connell, Chuck" wrote:
> I recommend the Cloudera CDH release of Hadoop and their auto-install tool.
> It saves a lot of config headaches.
Look at Apache Bigtop project.
Artem Ervits
Data Analyst
New York Presbyterian Hospital
- Original Message -
From: abhiTowson cal [mailto:abhishek.dod...@gmail.com]
Sent: Friday, July 27, 2012 12:30 PM
To: user@hive.apache.org
Subject: HIVE AND HBASE
Hi all,
I am trying to install H
Thanks for the reference Bejoy.
V
On Fri, Jul 27, 2012 at 12:36 PM, Bejoy Ks wrote:
> Hi Vidhya
>
> This bug was reported and fixed in a later version of Hive, Hive 0.8. An
> upgrade would set things in place.
>
> https://issues.apache.org/jira/browse/HIVE-2888
>
> Regards,
> Bejoy KS
>
> -
What about making your small files bigger by zipping them together? Of course,
you have to think about this carefully, so MapReduce can efficiently retrieve
the files it needs without unzipping everything every time.
Chuck
From: richin.j...@nokia.com [mailto:richin.j...@nokia.com]
Sent: Frida
Hi Vidhya
This bug was reported and fixed in a later version of Hive, Hive 0.8. An
upgrade would set things in place.
https://issues.apache.org/jira/browse/HIVE-2888
Regards,
Bejoy KS
From: Vidhya Venkataraman
To: user@hive.apache.org
Sent: Friday, July
I recommend the Cloudera CDH release of Hadoop and their auto-install tool. It
saves a lot of config headaches.
Chuck Connell
Nuance R&D Data Team
Burlington, MA
-Original Message-
From: abhiTowson cal [mailto:abhishek.dod...@gmail.com]
Sent: Friday, July 27, 2012 12:31 PM
To: user@hi
DataStax backs the Hive metastore with Cassandra, making it highly
available; the open-sourced Brisk did this originally but the
technology was not actively maintained.
Edward
On Fri, Jul 27, 2012 at 3:02 PM, Bejoy Ks wrote:
> Hi Abhishek
>
> Hadoop 2.0/CDH4 has HA on hdfs level so this HA would
I think I solved the problem below by using this updated shell
script.
#!/bin/bash
HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
export HIVE_OPTS
HADOOP_HOME=/home/hadoop/latest
export HADOOP_HOME
hive -S -e 'SELECT count(*) from testingtable1' > attachme
Thanks guys, I am changing my partition to hold a day's worth of data, which should
be good enough for Hive to operate on.
Thanks,
Richin
From: ext Bejoy Ks [mailto:bejoy...@yahoo.com]
Sent: Friday, July 27, 2012 3:06 PM
To: user@hive.apache.org
Subject: Re: Performance Issues in Hive with S3 and Par
Hi Richin
I agree with Edward on this. You have to design your partitions in such a way
that each partition holds at least an HDFS block's worth of data.
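A day-level partition of the kind being discussed might be declared roughly as below; the table name, column, and S3 path are assumptions for illustration only:
create external table logs (line string)
partitioned by (dt string)
location 's3://my-bucket/logs/';
alter table logs add partition (dt='2012-07-27') location 's3://my-bucket/logs/dt=2012-07-27/';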
Regards,
Bejoy KS
From: Edward Capriolo
To: user@hive.apache.org
Sent: Saturday, July 28, 2012 12:32 AM
Hi Abhishek
Hadoop 2.0/CDH4 has HA at the HDFS level, so this HA would be available to all
clients of HDFS, like Hive. AFAIK clients don't need any
particular configuration for this.
However, if your question is whether Hive itself has HA: with Hadoop HA, Hive also has HA
in terms of data. However, in Hive the m
Use a different partitioning scheme or consider using clustered /
bucketed tables.
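As a rough sketch of the bucketed-table suggestion, assuming hypothetical table and column names and an arbitrary bucket count:
set hive.enforce.bucketing = true;
create table events_bucketed (event_id string, payload string)
partitioned by (dt string)
clustered by (event_id) into 32 buckets;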
On 7/27/12, richin.j...@nokia.com wrote:
> Igor,
>
> I did not see any major improvement in the performance even after setting
> "Hive.optimize.s3.query=true", although the same was suggested by AWS Team.
>
> My pr
That issue likely means there are more command line arguments than
your shell will tolerate. We have several tickets open to get Hive 100%
compatible with Windows. Look through the open JIRA issues and see if
any of them matches this problem. If not, feel free to create
one. For now getting hi
Igor,
I did not see any major improvement in performance even after setting
"Hive.optimize.s3.query=true", although the same was suggested by the AWS team.
My problem is that I have too many small files: three levels of partitions,
6500+ files, and each file is < 1 MB.
Now I know Hadoop and HDFS are n
I am trying to execute the below shell script using PLINK on MachineB from
MachineA (a Windows machine):
#!/bin/bash
HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
export HIVE_OPTS
hive -S -e 'SELECT count(*) from testingtable1' > attachment22.txt
Below is the wa
If you are using the latest release (0.9), you would need at least
hbase-0.92 installed. If you are using the CDH stack, I would recommend
recompiling Hive with CDH dependencies to avoid any surprises. You can find
more information about it here [1].
[1] https://cwiki.apache.org/Hive/hbaseintegratio
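Once the versions line up, a Hive table backed by HBase is typically declared along these lines; the table name, columns, and column-family mapping below are illustrative, not taken from this thread:
create table hbase_table_1 (key int, value string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" = ":key,cf1:val")
tblproperties ("hbase.table.name" = "xyz");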
Hi
I am using Hive 0.7.x on my dev machine (yeah we will be upgrading soon
:) )
I used the statement indicated in the subject to create an external table:
create external table ext_sample_v1 like sample_v1 location
'/hive/warehouse/sample_v1/';
Since sample_v1 had partitions, I added so
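For context, adding a partition to such an external table usually looks roughly like the statement below; the partition column and path are made up, since the original message is cut off:
alter table ext_sample_v1 add partition (dt='2012-07-27')
location '/hive/warehouse/sample_v1/dt=2012-07-27';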
>
> Hi all,
>
> I am trying to install CDH4. I have a doubt: can Hive manage HIGH AVAILABILITY?
>
> Regards
> abhishek
Yes, they should. Alternatively, just add them all to a directory and
point HIVE_AUX_JARS_PATH to it...
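For a single CLI session, a jar can also be pulled in with ADD JAR (the path below is hypothetical); the HIVE_AUX_JARS_PATH approach makes it available to every session without repeating this step:
add jar /path/to/custom-udf.jar;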
On Thu, Jul 26, 2012 at 12:28 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Hello,
>
> I know that a custom jar can be added to hive classpath via "--auxpath"
> command
Hi,
I am using Hive 0.9 and am trying to execute a simple API call:
IMetaStoreClient metastoreClient = HiveUtil.getMetastoreClient(HiveUtil.createHiveConf(_metastoreUri, _hadoopProperties));
try {
    metastoreClient.tableExists("anyTableName");
} catch (Throwable e) {
Hi Manisha
In MapReduce, if you want to change the name of the output file, you may need to
write your own OutputFormat.
Renaming files in HDFS is straightforward:
hadoop fs -mv oldFileName newFileName
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: Manis