Also, in EMR the default file system for reading regular files is s3 rather
than s3n (the latter is a block file system requiring its own bucket, or
something like that). Basically, s3 and s3n are switched relative to the
Apache implementation.
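To make the difference concrete, here is a minimal sketch (the table name and bucket are hypothetical) of an external table pointed at S3 from EMR Hive:

```sql
-- Hypothetical example: on EMR, regular files on S3 are addressed with s3://,
-- whereas the same data on an Apache Hadoop cluster would typically use s3n://.
CREATE EXTERNAL TABLE IF NOT EXISTS emr_logs (
  ip_address string,
  num_counted int
)
LOCATION 's3://my.bucket/hive/emr_logs/';
```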
Another potential issue is that Hive (at least the EMR version)
Thanks Bejoy. Appreciate the insight.
Do you know if it is possible to alter the number of buckets once a table has been set up?
Thanks,
Ranjith
From: Bejoy Ks [mailto:bejoy...@yahoo.com]
Sent: Thursday, December 15, 2011 06:13 AM
To: user@hive.apache.org ; hive dev list
Subject: Re: bucketing in hive
Hi Ra
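One common answer to the bucket-count question above, sketched here with a hypothetical table, is that the bucket metadata can be changed with ALTER TABLE, but files already written are not re-bucketed automatically:

```sql
-- Changes only the table metadata; existing files are NOT re-bucketed.
ALTER TABLE page_views CLUSTERED BY (user_id) INTO 64 BUCKETS;

-- To physically re-bucket the existing data, rewrite it:
SET hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE page_views SELECT * FROM page_views;
```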
Hi,
org.apache.hadoop.hive.service.ThriftHive.getClusterStatus()
This API will return the cluster status; please check it out.
Hope it helps,
Chinna Rao Lalam
From: Gabor Makrai [makrai.l...@gmail.com]
Sent: Friday, December 16, 2011 12:34 PM
To: user@hive
Hi,
Is there any answer for this question?
Thanks,
Gabor
On Fri, Dec 2, 2011 at 12:39 PM, Shantian Purkad
wrote:
> Hi,
>
> We want to get the job tracker IDs back in the code for logging purposes
> for the queries we fire using the Hive Thrift client.
>
> We also want to get some stats that Hive dis
Hi Ranjan,
A couple of ideas come to mind:
1) Do an explain (or explain extended) on the query to find out exactly where
Hive is trying to read or write the file it's complaining about.
2) Look at your job conf file. There is a hyperlink to it from your Job Tracker
web page. See if there is a c
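As a concrete illustration of point 1, assuming the ranjan_test table from elsewhere in this thread:

```sql
-- EXPLAIN EXTENDED prints the full plan, including the input/output
-- paths each stage reads from and writes to.
EXPLAIN EXTENDED
SELECT ip_address, COUNT(*)
FROM ranjan_test
GROUP BY ip_address;
```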
hey all,
Please help me. Recently I found a serious issue with Hive. The following
exception is thrown from HiveServer:
java.lang.StackOverflowError
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
Hi,
I'm experiencing the following:
I have a file on S3 -- s3n://my.bucket/hive/ranjan_test. It's got fields
(separated by \001) and records (separated by \n).
I want it to be accessible from Hive; the DDL is:
CREATE EXTERNAL TABLE IF NOT EXISTS ranjan_test (
ip_address string,
num_counted int
)
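The DDL above is cut off in the archive; a hedged sketch of how it might be completed, using only the delimiters and S3 path given in the message (note that Hive treats LOCATION as a directory, so the data file would need to sit under that prefix):

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS ranjan_test (
  ip_address string,
  num_counted int
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3n://my.bucket/hive/ranjan_test';
```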
Hey all,
Marek Sapota has put together a doc on the new scripts for spreading Hive unit
test execution across a cluster:
https://cwiki.apache.org/confluence/display/Hive/Unit+Test+Parallel+Execution
Whether you are a committer or someone contributing patches, if you are
currently frustrated by
Hi Bejoy,
Thanks a lot for the valuable info and the link. I was just going
through the HbaseIntegration document. :)
Regards,
Mohammad Tariq
On Fri, Dec 16, 2011 at 1:11 AM, Bejoy Ks wrote:
> Hi Tariq
> You need to issue a ddl where you specify to hive how the row key and
> colum
Hi Tariq
You need to issue a DDL that specifies to Hive how the row key and
column families in HBase are mapped to the various columns in Hive. You don't
need to do any data transfer from HBase to Hive at that point. The DDL makes
the Hive table point to the corresponding HBase table and when yo
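A minimal sketch of such a DDL, following the Hive HBase storage handler convention (the HBase table name, column family, and columns here are hypothetical):

```sql
-- EXTERNAL because the HBase table already exists; dropping the Hive
-- table then leaves the HBase data in place.
CREATE EXTERNAL TABLE hbase_pokes (
  key string,
  value string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "pokes");
```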
Hello all,
Is there any way to fetch the data from a table that is already
present in HBase using Hive directly? Or do I need to create the
corresponding table in Hive first?
Regards,
Mohammad Tariq
Hi, Devs
When I ran the Hive unit tests with the candidate build of Hive 0.8.0, I found
that TestEmbeddedHiveMetaStore and TestRemoteHiveMetaStore always FAIL under
the root account but PASS under a non-root account.
I took a look at the source code of TestHiveMetaStore, and found that
fs.mkdirs(
Hi Ranjith
I'm not aware of any dynamic bucketing in Hive, whereas dynamic
partitioning is definitely available. Your partitions/sub-partitions would
be generated on the fly, based on the value of a particular column.
The records with the same value for that column would go into
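A short sketch of dynamic partitioning as described above (table and column names are hypothetical):

```sql
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The dt partition values are taken from the last column of the SELECT,
-- so partitions are created on the fly as new dates appear.
INSERT OVERWRITE TABLE logs_partitioned PARTITION (dt)
SELECT ip_address, num_counted, dt
FROM logs_staging;
```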