databases currently supported by Spark SQL JDBC

2015-07-09 Thread Niranda Perera
Hi, I'm planning to use the Spark SQL JDBC data source provider with various RDBMS databases. What are the databases currently supported by the Spark JDBC relation provider? rgds -- Niranda @n1r44 https://pythagoreanscript.wordpress.com/
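
For reference, a minimal Scala sketch of reading from an RDBMS through the JDBC relation provider; the connection URL, table, and credentials below are placeholders, and any database reachable through a standard JDBC driver should in principle be loadable this way (the driver jar has to be on the classpath):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("JdbcSourceSketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // Load a table over JDBC as a DataFrame (Spark 1.4-style reader API).
    val df = sqlContext.read
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/testdb")  // placeholder URL
      .option("dbtable", "employees")                        // placeholder table
      .option("user", "spark")
      .option("password", "secret")
      .load()

    df.printSchema()
    df.show()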

Re: what is the roadmap for Spark SQL dialect in the coming releases?

2015-01-22 Thread Niranda Perera
Hi, I would like to know if there is an update on this? rgds On Mon, Jan 12, 2015 at 10:44 AM, Niranda Perera wrote: > Hi, > > I found out that Spark SQL currently supports only a relatively small > subset of the SQL dialect. > > I would like to know the roadmap for the coming rele

Re: what is the roadmap for Spark SQL dialect in the coming releases?

2015-01-25 Thread Niranda Perera
y optimizer / execution engine instead of the > catalyst optimizer that is shipped with Spark. > > On Thu, Jan 22, 2015 at 3:12 AM, Niranda Perera > wrote: > >> Hi, >> >> would like to know if there is an update on this? >> >> rgds >> >> On Mo

What is the meaning of 'STATE' in a worker/an executor?

2015-03-29 Thread Niranda Perera
Hi, I have noticed in the Spark UI that workers and executors run in several states: ALIVE, LOADING, RUNNING, DEAD, etc. What exactly do these states mean, and what effect do they have on working with those executors? e.g. whether an executor cannot be used in the LOADING state, etc. cheers -- Niran

Creating a SchemaRDD from an existing API

2014-11-27 Thread Niranda Perera
[1] https://github.com/wso2-dev/carbon-analytics/tree/master/components/xanalytics -- *Niranda Perera* Software Engineer, WSO2 Inc. Mobile: +94-71-554-8430 Twitter: @n1r44 <https://twitter.com/N1R44>
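
For context, a minimal sketch (not the carbon-analytics component itself) of turning records pulled from an existing API into a SchemaRDD by applying an explicit schema; note that in Spark 1.3+ these data types moved to org.apache.spark.sql.types:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql._

    // Hypothetical record type returned by the existing analytics API.
    case class Event(id: String, payload: String)

    val sc = new SparkContext(new SparkConf().setAppName("SchemaRDDSketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // Describe the shape of the data explicitly.
    val schema = StructType(Seq(
      StructField("id", StringType, nullable = false),
      StructField("payload", StringType, nullable = true)))

    // Wrap the API results in an RDD[Row] and apply the schema to get a SchemaRDD.
    val fetched = Seq(Event("1", "hello"), Event("2", "world"))  // stand-in for an API call
    val rowRDD = sc.parallelize(fetched).map(e => Row(e.id, e.payload))
    val schemaRDD = sqlContext.applySchema(rowRDD, schema)

    schemaRDD.registerTempTable("events")
    sqlContext.sql("SELECT id FROM events WHERE payload = 'hello'").collect().foreach(println)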

Re: Creating a SchemaRDD from an existing API

2014-12-01 Thread Niranda Perera
and there is an example library for reading Avro data > <https://github.com/databricks/spark-avro>. > > On Thu, Nov 27, 2014 at 10:31 PM, Niranda Perera wrote: > >> Hi, >> >> I am evaluating Spark for an analytic component where we do batch >> processing o

SparkSQL 1.2.0 sources API error

2015-01-02 Thread Niranda Perera
Hi all, I am evaluating the Spark sources API released with Spark 1.2.0. But I'm getting a "java.lang.NoSuchMethodError: org.jboss.netty.channel.socket.nio.NioWorkerPool.(Ljava/util/concurrent/Executor;I)V" error when running the program. Error log: 15/01/03 10:41:30 ERROR ActorSystemImpl: Uncaught fat
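
This particular NoSuchMethodError usually means two different Netty 3.x jars ended up on the classpath (Akka expects a newer org.jboss.netty than some transitive dependency provides). A hedged build.sbt sketch of excluding the stale artifact; the Hadoop coordinates and the exclusion targets are assumptions, so confirm against your own dependency tree first:

    // build.sbt sketch: keep only one Netty 3.x on the classpath.
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql" % "1.2.0" % "provided",
      ("org.apache.hadoop" % "hadoop-client" % "2.4.0")
        .exclude("io.netty", "netty")          // assumed source of the older Netty 3.x
        .exclude("org.jboss.netty", "netty")
    )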

Spark avro: Sample app fails to run in a spark standalone cluster

2015-01-05 Thread Niranda Perera
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) What might be the reason behind this? I'm using i
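
For context, roughly what such a spark-avro sample exercises, as a sketch only; the generic syntax below follows the 1.2-era external data sources API, and the path is a placeholder (the spark-avro jar and its Avro dependencies also need to reach the executors, e.g. via --jars):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("AvroSketch"))  // master supplied by spark-submit
    val sqlContext = new SQLContext(sc)

    // Register an Avro file through the spark-avro data source (placeholder path).
    sqlContext.sql(
      """CREATE TEMPORARY TABLE episodes
        |USING com.databricks.spark.avro
        |OPTIONS (path "hdfs:///data/episodes.avro")""".stripMargin)

    sqlContext.sql("SELECT COUNT(*) FROM episodes").collect().foreach(println)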

Guava 11 dependency issue in Spark 1.2.0

2015-01-06 Thread Niranda Perera
A "java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt" error occurs, which is understandable because hashInt is not available before Guava 12. So, I'm wondering why this occurs? Cheers -- Niranda Perera

Re: Guava 11 dependency issue in Spark 1.2.0

2015-01-06 Thread Niranda Perera
/HIVE-7387 as > well. > > Guava is now shaded in Spark as of 1.2.0 (and 1.1.x?), so I would think a > lot of these problems are solved. As we've seen though, this one is tricky. > > What's your Spark version? And what are you executing? What mode -- > standalone, YARN? Wha

Re: Guava 11 dependency issue in Spark 1.2.0

2015-01-06 Thread Niranda Perera
app you send > to spark-submit. > > On Tue, Jan 6, 2015 at 10:15 AM, Niranda Perera > wrote: > >> Hi Sean, >> >> My mistake, the Guava 11 dependency came from the hadoop-commons indeed. >> >> I'm running the following simple app in Spark 1.2.0 standalone
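
A hedged build.sbt sketch of the usual workaround once the Guava 11 is traced to hadoop-commons: force a single Guava that is new enough for HashFunction.hashInt (Guava 12+). The versions below are illustrative, not a confirmed fix for this particular setup:

    // build.sbt sketch: make one sufficiently new Guava win the classpath.
    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-sql"     % "1.2.0" % "provided",
      "org.apache.hadoop"  % "hadoop-client" % "2.4.0" % "provided",
      "com.google.guava"   % "guava"         % "14.0.1"  // >= 12, so hashInt exists
    )

    // Pin the transitive Guava that hadoop-commons would otherwise pull in.
    dependencyOverrides += "com.google.guava" % "guava" % "14.0.1"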

example insert statement in Spark SQL

2015-01-07 Thread Niranda Perera
Hi, are INSERT statements supported in Spark SQL? If so, can you please give me an example? Rgds -- Niranda
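
A minimal sketch of what does work in the 1.2 line, assuming a Hive-backed setup; the table and column names are placeholders. INSERT INTO/OVERWRITE TABLE ... SELECT goes through HiveContext, and a query result can also be appended programmatically, while per-row INSERT ... VALUES was generally not supported at the time:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("InsertSketch").setMaster("local[2]"))
    val hiveContext = new HiveContext(sc)

    // Hypothetical source and target tables.
    hiveContext.sql("CREATE TABLE IF NOT EXISTS people (name STRING, age INT)")
    hiveContext.sql("CREATE TABLE IF NOT EXISTS people_copy (name STRING, age INT)")

    // INSERT ... SELECT through HiveQL.
    hiveContext.sql("INSERT INTO TABLE people_copy SELECT name, age FROM people")

    // Or append a query result programmatically.
    hiveContext.sql("SELECT name, age FROM people WHERE age > 18").insertInto("people_copy")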

what is the roadmap for Spark SQL dialect in the coming releases?

2015-01-11 Thread Niranda Perera
Hi, I found out that Spark SQL currently supports only a relatively small subset of the SQL dialect. I would like to know the roadmap for the coming releases. Also, are you focusing more on popularizing the 'Hive on Spark' SQL dialect or the Spark SQL dialect? Rgds -- Niranda
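
For completeness, a short sketch of how the parser is selected in the 1.x line via the spark.sql.dialect property (plain SQLContext defaults to the 'sql' dialect, HiveContext to 'hiveql'):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("DialectSketch").setMaster("local[2]"))
    val hiveContext = new HiveContext(sc)

    // HiveContext parses with the richer HiveQL dialect by default.
    hiveContext.sql("SELECT 1 + 1").collect().foreach(println)

    // Switch to the smaller Spark SQL dialect for subsequent queries.
    hiveContext.setConf("spark.sql.dialect", "sql")
    hiveContext.sql("SELECT 1 + 1").collect().foreach(println)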