uot;channel_desc").agg(max("TotalSales").as("SALES")).orderBy("SALES").sort(desc("SALES")).take(5).foreach(println)
println ("\nFinished at"); HiveContext.sql("SELECT
FROM_unixtime(unix_timestamp(), 'dd/MM/ HH:mm:ss.ss')
").coll
Why not use Spark SQL?
Mohammed
Author: Big Data Analytics with Spark <http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/>
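For reference, a minimal sketch of that suggestion against the Spark 1.6-era API; the table name "persons" and the context setup are assumptions, since the thread does not show where the data for alias `a` comes from:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)  // sc: the existing SparkContext
// Load the source data (hypothetical table) and expose it to SQL.
val persons = hiveContext.read.table("persons")
persons.registerTempTable("persons")

// Run the original query text mostly verbatim instead of translating it.
val result = hiveContext.sql(
  """SELECT a.PERSON_ID AS RETAINED_PERSON_ID,
    |       a.PERSON_ID,
    |       a.PERSONTYPE,
    |       'y' AS HOLDOUT
    |FROM persons a""".stripMargin)
result.show()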
From: Vikash Kumar [mailto:vikashsp...@gmail.com]
Sent: Wednesday, March 2, 2016 8:29 PM
To: user@spark.apache.org
Subject: convert SQL mu
I have to write or convert the SQL query below into Spark/Scala. Can
anybody suggest how to implement this in Spark?
SELECT a.PERSON_ID as RETAINED_PERSON_ID,
       a.PERSON_ID,
       a.PERSONTYPE,
       'y' as HOLDOUT,
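For what it's worth, the visible part of that SELECT maps onto the DataFrame API roughly as below; `persons` is a hypothetical DataFrame standing in for alias `a`, and the rest of the query is cut off above:

import org.apache.spark.sql.functions.lit

// persons: hypothetical DataFrame standing in for table alias `a`
val holdout = persons.select(
  persons("PERSON_ID").as("RETAINED_PERSON_ID"),
  persons("PERSON_ID"),
  persons("PERSONTYPE"),
  lit("y").as("HOLDOUT"))  // constant column, like 'y' AS HOLDOUT in SQL
holdout.show()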