Hi, I started my Spark Streaming journey with Structured Streaming on Spark 2.3, where I can easily apply Spark SQL transformations to streaming data.
But I'd like to know: how can I do columnar transformations (e.g., running aggregations or casting) using the older DStream API? Is there a way? Do I have to use map on the RDD and work through the complex transformation steps by hand? Or can I convert a DStream into a DataFrame and do the job there? Thanks in advance, Aakash.
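For what it's worth, the classic DStream API does let you reach Spark SQL: inside `foreachRDD`, each micro-batch arrives as an RDD that can be converted to a DataFrame. Below is a minimal Scala sketch of that pattern; the socket source on `localhost:9999` and the `Record` case class are illustrative assumptions, not anything from the original question.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamToDF {
  // Hypothetical schema for the incoming lines
  case class Record(word: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[2]")
      .appName("DStreamToDF")
      .getOrCreate()
    import spark.implicits._

    // 5-second micro-batches over the classic DStream API
    val ssc = new StreamingContext(spark.sparkContext, Seconds(5))
    val lines = ssc.socketTextStream("localhost", 9999) // assumed source

    lines.foreachRDD { rdd =>
      // Convert this micro-batch's RDD to a DataFrame...
      val df = rdd.map(Record(_)).toDF()
      df.createOrReplaceTempView("words")
      // ...and run an ordinary Spark SQL aggregation on it
      spark.sql("SELECT word, COUNT(*) AS cnt FROM words GROUP BY word").show()
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that this aggregates per micro-batch only; a true running aggregation across batches with DStreams would need stateful operators such as `updateStateByKey` or `mapWithState`, which is part of why Structured Streaming makes this so much easier.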