I am not using CDH; I have installed Apache Hive. Can I use Cloudera's ODBC driver
with it?
From: bharath vissapragada [mailto:bharathvissapragada1...@gmail.com]
Sent: Wednesday, January 30, 2013 12:18 PM
To: user@hive.apache.org
Subject: Re: hive odbc driver
Hi,
This looks like a CDH specific i
Hi,
This looks like a CDH-specific issue. You'd get a better response on their
mailing lists.
Thanks,
On Wed, Jan 30, 2013 at 11:59 AM, Chhaya Vishwakarma <
chhaya.vishwaka...@lntinfotech.com> wrote:
> Hi,
>
> I am using the Cloudera Hive ODBC driver and made a DSN, but when I am trying t
Hi,
I am using the Cloudera Hive ODBC driver and have created a DSN, but when I try to
connect to the data source through Visual Studio it gives me an error during the
connection test (error screenshot attached as image003.jpg, not shown here).
Can anyone help me with this?
+1 and +1 too (non-binding)
- Alex
On Jan 30, 2013, at 2:23 AM, Gunther Hagleitner
wrote:
> +1 and +1
>
> Thanks,
> Gunther.
>
>
> On Tue, Jan 29, 2013 at 5:18 PM, Edward Capriolo wrote:
>
>> Measure 1: +1
>> Measure 2: +1
>>
>> On Mon, Jan 28, 2013 at 2:47 PM, Carl Steinbach wrote:
>>
>>
Yes, if the only problem with my script solution is the variable number of
new days, you can just add all possibilities in advance, knowing that some
will be empty of content...
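As a rough sketch of what "adding all possibilities in advance" could look like; the
database, table, and column names and the date range are placeholders (not from this
thread), and it assumes GNU date and integer year/month/day partition columns:

#!/bin/bash
# Pre-create one partition per day for all of 2013; empty partitions do no harm.
d="2013-01-01"
while [ "$d" != "2014-01-01" ]; do
  y=$(date -d "$d" +%Y); m=$(date -d "$d" +%m); dd=$(date -d "$d" +%d)
  hive -e "USE mydb; ALTER TABLE foo ADD IF NOT EXISTS PARTITION(year=$y, month=$m, day=$dd);"
  d=$(date -d "$d + 1 day" +%Y-%m-%d)
done

(Starting hive once per day is slow; in practice you would batch the statements, but the
loop keeps the sketch readable.)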
On Tue, Jan 29, 2013 at 7:49 PM, Sadananda Hegde wrote:
> Thanks, Edward. I can probably create all previous days part
Thanks, Edward. I can probably create all previous days' partitions ahead
of time and then use Dean's logic to create new partitions on a daily
basis. I will probably end up having a few empty partitions; I need to make
sure that does not cause any confusion.
Thanks,
Sadu
On Tue, Jan 29, 2013 at 7:21
Thanks Dean,
But the M/R job creates the sub-folder based on the data and hence could
create the sub-folder for any of the previous days (and for multiple days
too). So I cannot limit to today's folder only. I need to get the
year/month/day values from the HDFS location instead of from the system
d
+1 and +1
Thanks,
Gunther.
On Tue, Jan 29, 2013 at 5:18 PM, Edward Capriolo wrote:
> Measure 1: +1
> Measure 2: +1
>
> On Mon, Jan 28, 2013 at 2:47 PM, Carl Steinbach wrote:
>
> > I am calling a vote on the following two measures.
> >
> > Measure 1: Amend Hive Bylaws to Define Submodules and S
You can also just create all your partitions ahead of time. They will not
do any harm if empty. (unless you have an older version and hit this...
http://issues.apache.org/jira/browse/HIVE-1007 )
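As a quick sanity check after pre-creating partitions, SHOW PARTITIONS lists empty
ones too (the database and table names below are placeholders):

hive -e "USE mydb; SHOW PARTITIONS foo;"   # empty partitions are listed; they simply return no rows when queried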
On Tue, Jan 29, 2013 at 8:17 PM, Mark Grover wrote:
> Hi Sadananda,
> Sorry to hear that.
>
> It got c
Measure 1: +1
Measure 2: +1
On Mon, Jan 28, 2013 at 2:47 PM, Carl Steinbach wrote:
> I am calling a vote on the following two measures.
>
> Measure 1: Amend Hive Bylaws to Define Submodules and Submodule Committers
>
> If this measure passes the Apache Hive Project Bylaws will be
> amended with
Hi Sadananda,
Sorry to hear that.
It got committed, don't worry about the "ABORTED". Here is the commit on
the trunk:
https://github.com/apache/hive/commit/523f47c3b6e7cb7b6b7b7801c66406e116af6dbc
However, there is no Apache Hive release with that patch in it.
You have two options:
1. Download t
Measure 1: +1
Measure 2: +1
On Tue, Jan 29, 2013 at 4:21 PM, Zhenxiao Luo wrote:
>
> Measure 1: +1
> Measure 2: +1
>
>
> Thanks,
> Zhenxiao
>
>
Thanks Mark,
The recover-partitions feature will satisfy my needs, but the MSCK REPAIR TABLE
<tablename> option is not working for me. It does not give any error, but it
does not add any partitions either. It looks like it adds partitions only
when the sub-folder is empty; but not when the sub-folder ha
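For reference, a minimal sketch of checking the layout MSCK expects; it only picks up
directories that follow the key=value naming convention, and the warehouse path, database,
and table names below are placeholders:

hadoop fs -ls /user/hive/warehouse/mydb.db/foo/   # expect directories like .../year=2013/month=01/day=29/
hive -e "USE mydb; MSCK REPAIR TABLE foo;"        # registers any such directories as partitions
hive -e "USE mydb; SHOW PARTITIONS foo;"          # verify they were picked up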
Measure 1: +1
Measure 2: +1
Thanks,
Zhenxiao
+1 to both.
- milind
---
Milind Bhandarkar
On 1/29/13 4:16 PM, "Prasad Mujumdar" wrote:
> Measure 1: +1 (non binding)
> Measure 2: +1 (non binding)
>
>thanks
>Prasad
>
>On Tue, Jan 29, 2013 at 2:57 PM, Shreepadma Venugopalan <
>shreepa...@cloudera.com> wrote:
>
>> Measure 1: +1
>> Measure
Measure 1: +1 (non binding)
Measure 2: +1 (non binding)
thanks
Prasad
On Tue, Jan 29, 2013 at 2:57 PM, Shreepadma Venugopalan <
shreepa...@cloudera.com> wrote:
> Measure 1: +1
> Measure 2: +1
>
> Thanks.
> Shreepadma
>
>
> On Tue, Jan 29, 2013 at 1:50 PM, Arvind Prabhakar
> wrote:
>
> > Mea
Measure 1: +1
Measure 2: +1
Thanks.
Shreepadma
On Tue, Jan 29, 2013 at 1:50 PM, Arvind Prabhakar wrote:
> Measure 1: +1
> Measure 2: +1
>
> Regards,
> Arvind Prabhakar
>
> On Mon, Jan 28, 2013 at 11:47 AM, Carl Steinbach wrote:
>
> > I am calling a vote on the following two measures.
> >
>
Measure 1: +1
Measure 2: +1
Regards,
Arvind Prabhakar
On Mon, Jan 28, 2013 at 11:47 AM, Carl Steinbach wrote:
> I am calling a vote on the following two measures.
>
> Measure 1: Amend Hive Bylaws to Define Submodules and Submodule Committers
>
> If this measure passes the Apache Hive Project
This is very easy to do with a shell script, and you can have cron run it
nightly, say just after midnight, so the year, month, and day calculations
below return the correct values:
#!/bin/bash
# capture today's date components for the new partition
year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)
hive -e "use mydb; ALTER TABLE foo ADD IF NOT EXISTS P
Thanks!
On Tue, Jan 29, 2013 at 5:34 AM, Navis류승우 wrote:
> HIVE-446 - Implement TRUNCATE : is on trunk (v0.11.0)
>
> HIVE-887 - Allow SELECT without a mapreduce job : It needs "set
> hive.fetch.task.conversion=more"
>
> 2013/1/29 Dean Wampler :
> > Oh, another one is
> >
> > https://issues.apac
I am running a transform script that parses through a bunch of binary data.
In 99% of cases it runs fine, but on certain files I get a failure (as seen
below). The funny thing is, I can run a job with "only" the problem source
file and it will work fine, but when it runs as part of a group of files, I g
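For context, a bare-bones sketch of the kind of TRANSFORM invocation being described;
the script, column, and table names are placeholders, not the poster's actual job:

hive -e "
  ADD FILE /path/to/parse_binary.py;
  SELECT TRANSFORM (raw_line)
  USING 'python parse_binary.py'
  AS (field1, field2)
  FROM mydb.binary_table;
"
# each input row is piped to the script's stdin; its tab-separated stdout becomes field1, field2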
HIVE-446 - Implement TRUNCATE : is on trunk (v0.11.0)
HIVE-887 - Allow SELECT without a mapreduce job : It needs "set
hive.fetch.task.conversion=more"
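A minimal way to try that setting from the shell; the database, table, and column
names are placeholders:

hive -e "set hive.fetch.task.conversion=more; USE mydb; SELECT col1 FROM foo LIMIT 10;"
# with 'more', simple selects like this are fetched directly instead of launching a MapReduce job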
2013/1/29 Dean Wampler :
> Oh, another one is
>
> https://issues.apache.org/jira/browse/HIVE-446 - Implement TRUNCATE.
>
> The CLI doesn't recogn