Can you kindly share your code?
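
In the meantime, here is a rough sketch of the kind of wide aggregation I imagine you are running. Everything in it (the column count, the generated column names c0..cN, the local master, the sum() aggregates) is my guess rather than your actual job, so please correct it. It also shows a DataFrame-DSL variant that builds the aggregate expressions as Columns instead of one giant SQL string, which at least avoids the SQL string parser (the analyzer still has to deal with all the columns).

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
import org.apache.spark.sql.functions.sum

object WideAggSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wide-agg-sketch").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Placeholder wide schema: one row, many numeric columns.
    // Bump numCols to 26000 to mimic the reported case.
    val numCols = 2000
    val colNames = (0 until numCols).map(i => s"c$i")
    val schema = StructType(colNames.map(StructField(_, DoubleType, nullable = false)))
    val rows = sc.parallelize(Seq(Row.fromSeq(Seq.fill(numCols)(1.0))))
    val df = sqlContext.createDataFrame(rows, schema)

    // Option A: one big SQL string, "SELECT sum(c0), sum(c1), ... FROM wide_table".
    // This path runs the SQL string parser over every column expression.
    df.registerTempTable("wide_table")
    val sql = "SELECT " + colNames.map(c => s"sum($c)").mkString(", ") + " FROM wide_table"
    val viaSql = sqlContext.sql(sql)

    // Option B: build Column expressions directly and skip the SQL string parser
    // (the analyzer still resolves all the columns when the query runs).
    val aggExprs = colNames.map(c => sum(c))
    val viaDsl = df.agg(aggExprs.head, aggExprs.tail: _*)

    println(viaSql.first())
    println(viaDsl.first())
    sc.stop()
  }
}

It would help to see how your real query compares to this, and whether you hit the slowdown on the SQL path, the DataFrame path, or both.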

On Tue, May 19, 2015 at 8:04 PM, madhu phatak <phatak....@gmail.com> wrote:

> Hi,
> I am trying to run a Spark SQL aggregation on a file with 26k columns. The
> number of rows is very small. I am running into an issue where Spark takes a
> huge amount of time to parse the SQL and create the logical plan. Even with
> just one row, it takes more than 1 hour just to get past the parsing.
> Any idea how to optimize these kinds of scenarios?
>
>
> Regards,
> Madhukara Phatak
> http://datamantra.io/
>



-- 
Best Regards,
Ayan Guha