Actually, it should be something like:
getHandleIdentifier()=hfhkjhfjhkjfh-dsdsad-sdsd--dsada:
fetchResults()
On Wed, Aug 19, 2015 at 3:49 PM, Prem Yadav wrote:
> Hi Emil,
> for either of the queries, there will be no mapreduce job. the query
> engine understands that in both case, it ne
Hi Emil,
for either of the queries, there will be no MapReduce job. The query engine
understands that in both cases it need not do any computation and just
needs to fetch all the data from the files.
The fetch size should be honored in both cases. Hope you are using
HiveServer2.
You can try connec
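For reference, whether a simple query runs as a fetch-only task (no MapReduce job) is governed by the `hive.fetch.task.conversion` setting; a minimal sketch, with `my_table` as a placeholder name:

```sql
-- 'minimal' converts only SELECT *, partition-column filters, and LIMIT;
-- 'more' also covers simple projections and filters (no aggregates, no joins):
SET hive.fetch.task.conversion=more;

-- Both of these can then be served by a fetch task, with no MapReduce job:
SELECT * FROM my_table;
SELECT * FROM my_table LIMIT 100;
```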
Please check hive logs and post a few lines before the stacktrace. I have
seen this error sometimes due to permissions issue.
On Tue, Aug 11, 2015 at 9:32 AM, Matthias Kricke wrote:
> Hi,
>
>
>
> I’m using Hive 0.14.0.2.2.6.0-2800. When sending queries I get this stack
> trace:
>
> 2015-08-11 07
I believe there is support for primary key, which is basically a UNIQUE NOT
NULL constraint.
Ravi,
what is the error you are getting?
On Tue, Jun 2, 2015 at 2:20 PM, Edward Capriolo
wrote:
> Hive does not support primary key or other types of index constraints.
>
> On Tue, Jun 2, 2015 at 4:37 AM,
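For context: in later Hive releases (2.1 and up), primary-key constraints can be declared but are informational only and not enforced. A hedged sketch, with table and column names as placeholders:

```sql
-- The constraint is recorded in the metastore for tools and the optimizer,
-- but Hive will NOT reject duplicate or NULL ids:
CREATE TABLE users (
  id   INT,
  name STRING,
  PRIMARY KEY (id) DISABLE NOVALIDATE
);
```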
related?
https://issues.apache.org/jira/browse/HCATALOG-23
On Wed, Jul 23, 2014 at 4:49 PM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:
> I have some Hive tables that are partitioned by an int field. When I tried
> to do a Sqoop import using Sqoops HCatalog
> support, it failed compla
Only today I had the exact same issue.
I used a script to load partitions, but due to a mistake there were a lot of
unwanted partitions with special characters.
"Alter table drop partitions" reported success, but the partitions were never
removed.
Finally, this is what I did
1) hive> show create ;
cop
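The usual commands for this kind of cleanup look like the following; `my_table`, the `dt` partition key, and the partition value are placeholders:

```sql
-- Copy the table DDL so the table can be recreated if needed:
SHOW CREATE TABLE my_table;

-- Drop a bad partition explicitly (quote any special characters):
ALTER TABLE my_table DROP IF EXISTS PARTITION (dt='bad-value');

-- Add back any partitions present on HDFS but missing from the metastore:
MSCK REPAIR TABLE my_table;
```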
Maybe you can post your partition structure and the query. Over-partitioning
the data is one of the reasons this happens.
On Fri, Jul 18, 2014 at 2:36 PM, diogo wrote:
> This is probably a simple question, but I'm noticing that for queries that
> run on 1+TB of data, it can take Hive up to 30 minute
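Two quick checks for slow planning, assuming `events` and the `dt` partition key as placeholder names:

```sql
-- A very long partition list is a sign of over-partitioning:
SHOW PARTITIONS events;

-- Confirm in the plan that partition pruning actually applies:
EXPLAIN EXTENDED
SELECT COUNT(*) FROM events WHERE dt = '2014-07-18';
```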
I think you should be able to copy the data to a different location, then
drop the old DB and create a new one with the new location.
On Tue, Jul 1, 2014 at 1:54 AM, Jon Bender
wrote:
> Answered my own question, no there is not. The way to do it is to modify
> the DB_LOCATION_URI field in
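The metastore tweak quoted above runs against the metastore RDBMS (e.g. MySQL or Postgres), not through Hive itself; back up the metastore first. `DBS` and `DB_LOCATION_URI` are the standard metastore schema names; the database name and path here are placeholders:

```sql
-- Run in the metastore database, not the Hive CLI:
UPDATE DBS
SET DB_LOCATION_URI = 'hdfs://namenode:8020/user/hive/warehouse/mydb.db'
WHERE NAME = 'mydb';
```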
bump
On Sun, Jun 29, 2014 at 12:48 PM, Prem Yadav wrote:
> Hi,
> we are using hive 0.10, the CDH version. CDH version is 4.4
>
> we have our table, partitioned with dates. A partition for everyday.
>
> We are trying to use ODBC for connecting to hive. Our select querie
Hi,
we are using Hive 0.10 (the CDH version; CDH is 4.4).
We have our table partitioned by date, with a partition for every day.
We are trying to use ODBC for connecting to Hive. Our SELECT queries work
just fine. However, when we try to use any aggregate functions, e.g. COUNT,
SUM etc., the server