hi there
I have a problem with creating a Hive table.
No matter which field delimiter I use, I always get a tab character at the head of
each line (a line is a record).
Something like this:
\t f1 \001 f2 \001 f3 ...
where f1, f2, f3 denote the field values and \001 is the field separator.
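For reference, a minimal sketch of how a delimited Hive table is usually declared (table and column names here are made up for illustration):

```sql
-- Hypothetical table; \001 (Ctrl-A) is Hive's default field delimiter.
CREATE TABLE example_t (
  f1 STRING,
  f2 STRING,
  f3 STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
```

One possibility worth ruling out: if the files were written by a plain MapReduce job, Hadoop's default TextOutputFormat emits key<TAB>value, so an empty key produces exactly this kind of leading tab; in that case the tab comes from the writer, not from Hive's delimiter setting.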
here is
Thank you Bejoy.
I will file a new Jira.
Good wishes,always !
Santosh
On Wed, Jan 9, 2013 at 2:41 AM, wrote:
> Looks like there is a bug with mapjoin + view. Please check the Hive JIRA to
> see if there is an issue open against this; otherwise file a new JIRA.
>
> From my understanding, When you enab
Hi Manish,
Currently the blocker for HiveServer2 in ASF is
https://issues.apache.org/jira/browse/HIVE-3785. The patch there outlines the
changes HS2 requires in the existing Hive codebase. That patch is currently
under review. Once it is reviewed and checked in, the rest of the HS2
patch will make it into trunk.
Hi,
I'm looking for HiveServer2 implementation in ASF.
I followed this link:
https://cwiki.apache.org/Hive/hiveserver2-thrift-api.html
JIRA: https://issues.apache.org/jira/browse/HIVE-2935
Is this the same HiveServer2 that CDH4 has released, and is it available under
the ASF, or is it not being implemented?
Thank you Shreepadma. I don't see a stack trace. Below is the full execution
log:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per
Hi Santosh,
The execution log will contain the stack trace of the exception that caused
the task to fail. It would help to look into the execution log and attach
it to the email.
Thanks.
Shreepadma
On Tue, Jan 8, 2013 at 7:40 AM, Santosh Achhra wrote:
> Hello Hive Users,
>
> After I execute be
Looks like there is a bug with mapjoin + view. Please check the Hive JIRA to see
if there is an issue open against this; otherwise file a new JIRA.
From my understanding, when you enable map join, the Hive parser creates
backup jobs. These backup jobs are executed only if the map join fails. In normal
cases
Hi Ibrahim
The Hive-HBase integration depends entirely on the HBase table schema, not on
the schema of the source table in MySQL.
You need to provide the column family:qualifier mapping there.
Get the HBase table's schema from the HBase shell.
Suppose you have a schema like:
Id
CF1.qualifier1
CF1
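For context, the mapping Bejoy describes is supplied through the hbase.columns.mapping SerDe property. A minimal sketch, assuming a row key Id plus one qualifier under the CF1 column family (the Hive-side names are made up):

```sql
-- ':key' maps to the HBase row key; 'CF1:qualifier1' maps a Hive
-- column onto column family CF1, qualifier qualifier1.
CREATE EXTERNAL TABLE hbase_mapped_t (
  id INT,
  qualifier1 STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  'hbase.columns.mapping' = ':key,CF1:qualifier1'
)
TBLPROPERTIES ('hbase.table.name' = 'my_hbase_table');
```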
Hi Santhosh
As long as the smaller table is in the range of a few MBs, it is a good
candidate for a map join.
If the smaller table is larger than that, you can take a look at bucketed
map joins.
Regards
Bejoy KS
Sent from remote device, Please excuse typos
-Original Message-
F
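For reference, a bucketed map join is usually enabled with settings like the following. This is a sketch with hypothetical table names; it assumes both tables are already bucketed (CLUSTERED BY) on the join key, with bucket counts that are multiples of each other:

```sql
-- Assumed DDL, for illustration:
--   CREATE TABLE big_t (...) CLUSTERED BY (id) INTO 32 BUCKETS;
--   CREATE TABLE small_t (...) CLUSTERED BY (id) INTO 8 BUCKETS;
SET hive.optimize.bucketmapjoin = true;

-- Each mapper then only loads the matching buckets of small_t
-- into memory, instead of the whole table.
SELECT /*+ MAPJOIN(small_t) */ b.id, s.val
FROM big_t b JOIN small_t s ON b.id = s.id;
```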
Thank you Dean,
One of our tables is very small: it has only 16,000 rows, while the other, big
table has 45 million plus records. Won't doing a local task help in this case?
Good wishes,always !
Santosh
On Tue, Jan 8, 2013 at 11:59 PM, Dean Wampler <
dean.wamp...@thinkbiganalytics.com> wrote:
> more ag
That setting will make Hive more aggressive about trying to convert a join
to a local task, where it bypasses the job tracker. When you're
experimenting with queries on a small data set, it can make things much
faster, but won't be useful for large data sets where you need the cluster.
dean
On Tu
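Concretely, the setting Dean describes is toggled per session. A sketch with hypothetical table names:

```sql
SET hive.auto.convert.join = true;

-- Hive may now plan this as a map-side join backed by a local task
-- that builds the hash table, with a conditional regular MapReduce
-- join as backup if the small table does not fit in memory.
SELECT b.id, s.val
FROM big_table b JOIN small_table s ON b.id = s.id;
```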
Will setting hive.auto.convert.join to true help set up the MapReduce local
task and the conditional task?
Good wishes,always !
Santosh
On Tue, Jan 8, 2013 at 4:04 PM, Santosh Achhra wrote:
> Hello,
>
> I was reading an article on the web about the MapReduce local task and
> the use of hash table
Hello,
suppose I have the following table (orders) in MySQL:
*** 1. row ***
Field: id
Type: int(10) unsigned
Null: NO
Key: PRI
Default: NULL
Extra: auto_increment
*** 2. row ***
Field:
Hello,
I was reading an article on the web about the MapReduce local task and the
use of hash table files and conditional tasks to improve the performance of
Hive queries.
Any idea how to implement this? I am aware of map joins, but I am not sure how
to implement MapReduce local tasks with hash tables.
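For what it's worth, the explicit way to request this path is the MAPJOIN hint; the local task that builds the hash table is then created by Hive itself rather than written by hand. A sketch with hypothetical table names:

```sql
-- Hive runs a local (non-cluster) task that loads small_t into an
-- in-memory hash table, distributes that file to the mappers, and
-- performs the join map-side with no reduce phase.
SELECT /*+ MAPJOIN(small_t) */ b.key, s.val
FROM big_t b JOIN small_t s ON b.key = s.key;
```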