Hi All,

I want to access a particular column of a DB table stored in CSV format
and perform some aggregate queries over it. As a first step, I wrote the
following in Scala:

var add = (x: String) => x.split("\\s+")(2).toInt  // note: for CSV, split on "," rather than whitespace
var result = List[Int]()

input.split("\n").foreach(x => result ::= add(x))
// Queries: result.max / result.min / result.filter(...) / result.sum ...

But is there a more efficient way, or a built-in function, to access a
particular column value (or an entire column) in Spark? A built-in
implementation is likely to be more efficient.
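For what it's worth, here is a minimal sketch of two built-in approaches. It assumes a comma-separated file at the hypothetical path "data.csv" whose third column is an integer, a SparkContext `sc`, and (for the DataFrame variant) Spark 2.x+ with a SparkSession `spark`; the column name "_c2" is Spark's default name for the third column of a headerless CSV.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().appName("ColumnAgg").getOrCreate()
    val sc = spark.sparkContext

    // RDD style: extract the third column and aggregate in parallel.
    val col = sc.textFile("data.csv").map(_.split(",")(2).toInt)
    val (mx, mn, total) = (col.max(), col.min(), col.sum())

    // DataFrame style: let Spark parse the CSV and run the aggregates
    // in a single pass over the data.
    val df = spark.read.option("inferSchema", "true").csv("data.csv")
    df.agg(max("_c2"), min("_c2"), sum("_c2")).show()

The DataFrame variant is generally preferable, since the optimizer can prune unread columns and combine the aggregates into one scan.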

Thanks.
