One more note
When I do:
[margusja@sandbox ~]$ hdfs dfs -ls /
I see in the krb5kdc log:
Jan 09 21:36:53 sandbox.hortonworks.com krb5kdc[8565](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 10.0.2.15: ISSUE: authtime 1452375310, etypes {rep=18 tkt=18 ses=18}, margu...@example.com for
nn/sandbox.hor
Hi Gopal - actually no, the table is not partitioned/bucketed.
Every day the whole table gets cleaned up and populated with the last 120
days' data...
What other properties can I try to improve the performance of the reduce
steps?
Suresh V
http://www.justbirds.in
On Sat, Jan 9, 2016 at 8:52
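[Not from the thread itself, but for reference: a few standard Hive/Tez settings that are commonly adjusted when the reduce phase is the bottleneck. The values below are placeholders to illustrate the knobs, not recommendations.]

-- Illustrative values only; tune them against your data volume and cluster.
-- Fewer bytes per reducer means more reduce tasks get launched.
set hive.exec.reducers.bytes.per.reducer=134217728;
-- Upper bound on the number of reducers Hive will request.
set hive.exec.reducers.max=1009;
-- Let Tez shrink reducer parallelism at runtime based on observed data size.
set hive.tez.auto.reducer.parallelism=true;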
Hi Mich
We have to use TEZ as the engine since the data volume is high, and with MR
it takes several hours.
With TEZ it used to take about an hour max.
Thanks
Suresh.
On Sat, Jan 9, 2016 at 7:34 AM, Mich Talebzadeh wrote:
> Hi Suresh,
>
> I have the same issue when I use Hive on Spark.
Hi
I am trying to use beeline with Hive + Kerberos (Hortonworks sandbox 2.3).
The problem is that I can use hdfs but not beeline, and I do not know
what is wrong.
Console output:
[margusja@sandbox ~]$ kdestroy
[margusja@sandbox ~]$ hdfs dfs -ls /user/
16/01/09 15:45:32 WARN ipc.Client: Exceptio
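[For reference, not part of the original message: against a Kerberized HiveServer2, beeline needs a valid ticket from kinit plus the HiveServer2 service principal in the JDBC URL. The host below comes from the console output above; the realm and the hive/... service principal are assumptions about the sandbox setup.]

[margusja@sandbox ~]$ kinit margusja@EXAMPLE.COM
[margusja@sandbox ~]$ beeline -u "jdbc:hive2://sandbox.hortonworks.com:10000/default;principal=hive/sandbox.hortonworks.com@EXAMPLE.COM"

A missing principal= parameter in the JDBC URL is a common cause of hdfs commands working while beeline does not.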
Hi,
> The job completes fine if we reduce the # of rows processed by reducing
> the # of days' data being processed.
>
> It just gets stuck after all maps are completed. We checked the logs and
> it says the containers are released.
Looks like you're inserting into a bucketed & partitioned table an
Hi Suresh,
I have the same issue when I use Hive on Spark.
What normally works is Hive on MR. Have you tried:
set hive.execution.engine=mr;
Sounds like it times out for one reason or another!
From: Suresh V [mailto:verdi...@gmail.com]
Sent: 09 January 2016 11:35
To: user@hive.apa
Can you do two paths?
create temporary table tmp AS
SELECT zz.report_dt, vss.eff_dt, vss.disc_dt, … rest of columns
FROM rvsed11 zz
LEFT OUTER JOIN rvsed22 vss
ON zz.company_id = vss.company_id
AND zz.shares_ship_id = vss.shares_ship_id
;
select * from tmp
Where
report_dt >= eff_
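[A sketch of what the second query might look like once the equi-join is materialized. The filter above is cut off after "eff_", so the exact predicate is an assumption; eff_dt is selected in the first step, so the comparison is presumably against it.]

-- Sketch only: the real predicate is truncated above. Any further non-equi
-- conditions (e.g. against disc_dt) would go in the same WHERE clause.
select *
from tmp
where report_dt >= eff_dt;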
Hi All,
I am having an issue with a Hive LEFT OUTER JOIN.
I had all the tables in SQL Server, then used Sqoop to migrate all the tables
to Hive.
This is the original query from SQL Server, which contains a non-equi LEFT
OUTER JOIN. Both tables have *cartesian data*.
SELECT
vss.company_id,vss.shares_ship_id,vss.s
Dear all
We have a Hive query that 'insert overwrites' about 24 million rows every day
from one main Hive table to another table.
This query was working fine for a long time, but lately it has started to hang
at the reduce steps.
It just gets stuck after all maps are completed. We checked the logs and it
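[For context, the pattern being described is roughly the following. The table and column names are placeholders, not taken from the thread; the 120-day window is mentioned elsewhere in this thread.]

-- Placeholder table/column names; only the shape of the job is from the thread.
insert overwrite table target_table
select *
from main_table
where load_dt >= date_sub(current_date, 120);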
Hi All,
I am facing strange behaviour, as explained below. I have two Hive tables, T1
and T2, joined with a LEFT OUTER JOIN. I am getting strange values for two
columns, t2c2 and t2c3, of table T2 after the join.
See the complete details below:
*Table T1:*
create table T1 ( t1c1 int , t1c2 int , t1c3 int )
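[Not part of the original message, but as an illustration of two common sources of unexpected values after a LEFT OUTER JOIN: duplicate join keys on the right side multiply the matching left rows, and left rows with no match come back with NULL in the right-side columns. The T2 layout and join key below are hypothetical; the real DDL and join condition are not shown above.]

-- Hypothetical T2 layout and join key, for illustration only.
create table T2 ( t2c1 int , t2c2 int , t2c3 int );

-- If t2c1 repeats in T2, each matching T1 row appears once per match;
-- T1 rows with no match get NULL in t2c2 and t2c3.
select t1.t1c1, t1.t1c2, t1.t1c3, t2.t2c2, t2.t2c3
from T1 t1
left outer join T2 t2
  on t1.t1c1 = t2.t2c1;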