I'd suggest loading the source in an IDE if you want to explore the
code base. It will let you answer this kind of question in one click.
Here it's Dataset, as a DataFrame is just a Dataset[Row] (and groupBy
returns a RelationalGroupedDataset).
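
You can also ask Spark itself where the work happens by printing the
physical plan. A minimal sketch (assuming a local Spark session and a
small inline DataFrame standing in for the Names.csv from the question):

```scala
import org.apache.spark.sql.SparkSession

// Local session just for demonstration.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("explain-demo")
  .getOrCreate()
import spark.implicits._

// Inline stand-in for the CSV in the question.
val df1 = Seq(("Alice", "Hessen"), ("Bob", "Bayern")).toDF("Name", "County")

// explain() prints the physical plan: the filter shows up as a Filter
// (FilterExec) node and groupBy+count as HashAggregateExec nodes, usually
// wrapped in WholeStageCodegenExec -- the generated Java code is what
// actually iterates over the rows at runtime.
df1.filter("County='Hessen'").explain()
df1.groupBy("County").count().explain()

val hessenCount = df1.filter("County='Hessen'").count()
spark.stop()
```

So the API entry points live in Dataset / RelationalGroupedDataset, but
the per-row iteration is done by the physical operators in
org.apache.spark.sql.execution (or by the code they generate).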

On Thu, Mar 28, 2019 at 9:21 AM ehsan shams <ehsan.shams.r...@gmail.com> wrote:
>
> Hi
>
> I would like to know where exactly (which class/function) Spark SQL applies
> the operators to Dataset/DataFrame rows.
> For example, for the following filter or groupBy, which class is
> responsible, and which one iterates over the rows to do its operation?
>
> Kind regards
> Ehsan Shams
>
> val df1 = sqlContext.read.format("csv").option("header", "true").load("src/main/resources/Names.csv")
> val df11 = df1.filter("County='Hessen'")
> val df12 = df1.groupBy("County")

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
