Hi,
That's great. Thanks a lot.
On Wed, Aug 30, 2017 at 10:44 AM, Tathagata Das wrote:
Yes, it can be! There is a SQL function called current_timestamp(), which is
self-explanatory. So I believe you should be able to do something like:
import org.apache.spark.sql.functions._

// Stamp each row with the wall-clock time at processing, then count per
// 1-minute window. Note: window() takes a Column, not a column name.
ds.withColumn("processingTime", current_timestamp())
  .groupBy(window(col("processingTime"), "1 minute"))
  .count()
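In case it helps anyone else on the list, here is a minimal end-to-end sketch of
how that snippet could sit inside a running Structured Streaming query. The
socket source, host/port, console sink, and output mode are assumptions for
illustration, not part of TD's answer:

// Minimal sketch (assumed setup): a socket source feeding a
// processing-time windowed count, printed to the console.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder()
  .appName("ProcessingTimeWindowCount")
  .getOrCreate()

// Assumed source: lines of text from a local socket (hypothetical host/port).
val ds = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Tag each row with the time it is processed, then count rows per
// 1-minute processing-time window.
val counts = ds
  .withColumn("processingTime", current_timestamp())
  .groupBy(window(col("processingTime"), "1 minute"))
  .count()

val query = counts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()

Since this aggregates on processing time rather than event time, no watermark is
needed: the window each row falls into is determined by when Spark processes it,
not by any timestamp carried in the data.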