I recommend running it with your unit tests, executed by your build tool.
There is no need to have it running in the background in the IDE.
> On 3. Mar 2018, at 17:57, sujeet jog wrote:
>
> Is there a way to run Spark-JobServer in Eclipse? Any pointers in this
> regard?
Is there a way to run Spark-JobServer in Eclipse? Any pointers in this
regard?
A better forum would be
https://groups.google.com/forum/#!forum/spark-jobserver
or
https://gitter.im/spark-jobserver/spark-jobserver
Regards,
Noorul
Madabhattula Rajesh Kumar writes:
Hi,
I am getting the exception below when I start the job-server:

./server_start.sh: line 41: kill: (11482) - No such process

Please let me know how to resolve this error.
Regards,
Rajesh
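[Editor's note: a hedged guess, not from the thread — server_start.sh appears to kill the PID it recorded on a previous run, so a stale PID file would produce exactly this (harmless) message. Clearing it should silence the error; the file location below is an assumption, check your deployment:

  # Remove the stale PID file left over from a previous jobserver run
  # (the path is a guess; adjust it to your install).
  rm -f /tmp/spark-jobserver.pid
]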
Hi,
I'm working with the latest version of Spark JobServer together with Spark
2.0.2. I'm able to do almost everything I need, but there is one noisy thing.
I have placed a hive-site.xml to specify a connection to my MySQL db so I
can have the metastore_db on MySQL; that's w
Hi
I'm going to deploy jobserver on my CentOS (Spark is installed with CDH 5.7).
I'm using Oracle JDK 1.8, sbt 0.13.13, Spark 1.6.0 and jobserver 0.6.2.
When I run the sbt command (after running sbt publish-local) I encountered the
message below:
[cloudera@quickstart spark-jobserver]$
Reza zade writes:
> Hi
>
> I have set up a cloudera cluster and work with spark. I want to install
> spark-jobserver on it. What should I do?
Maybe you should send this to the spark-jobserver mailing list.
https://github.com/spark-jobserver/spark-jobserver#contact
Thanks and Regards
Hi
I have set up a cloudera cluster and work with spark. I want to install
spark-jobserver on it. What should I do?
On 25 January 2016 at 21:09, Deenar Toraskar <
deenar.toras...@thinkreactive.co.uk> wrote:

> No I hadn't. This is useful, but in some cases we do want to share the
> same temporary tables between jobs, so really wanted a getOrCreate
> equivalent on HiveContext.
>
> Deenar
Have you noticed the following method of HiveContext?

  /**
   * Returns a new HiveContext as new session, which will have separated
   * SQLConf, UDF/UDAF, temporary tables and SessionState, but sharing the
   * same CacheManager, IsolatedClientLoader and Hive client (both of
   * execution and metadata) with existing HiveContext.
   */
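A minimal sketch of using it (Spark 1.6-era API; the names here are illustrative, not from the thread):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  val sc = new SparkContext(new SparkConf().setAppName("shared-hive-context"))
  val hiveContext = new HiveContext(sc)

  // Each job works in its own session: separate SQLConf, UDFs and temp
  // tables, while the CacheManager and Hive clients stay shared.
  val session = hiveContext.newSession()
  session.sql("SELECT 1").show()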
Hi
I am using a shared SparkContext for all of my Spark jobs. Some of the jobs
use HiveContext, but there isn't a getOrCreate method on HiveContext that
would allow reuse of an existing HiveContext; such a method exists on
SQLContext only (def getOrCreate(sparkContext: SparkContext): SQLContext).
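[Editor's note: absent that, a hand-rolled equivalent is a minimal sketch, assuming a single JVM-wide instance is acceptable:

  import org.apache.spark.SparkContext
  import org.apache.spark.sql.hive.HiveContext

  object SharedHiveContext {
    @transient private var instance: HiveContext = _

    // Mirrors SQLContext.getOrCreate: create once, then reuse.
    def getOrCreate(sc: SparkContext): HiveContext = synchronized {
      if (instance == null) instance = new HiveContext(sc)
      instance
    }
  }
]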
Hi all,
I have some questions about spark-jobserver.
I deployed a spark-jobserver in yarn-client mode using docker.
I'd like to use the dynamic resource allocation option for YARN in spark-jobserver.
How can I add this option?
And when will the 1.5.x version be supported?
(https://hub.docker.com/r
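[Editor's note: an untested sketch — Spark's standard dynamic-allocation settings could be supplied through the jobserver configuration; whether your build forwards them via the context-settings passthrough block is an assumption:

  spark {
    context-settings {
      passthrough {
        # Standard Spark dynamic-allocation knobs (YARN also needs the
        # external shuffle service running on the NodeManagers).
        spark.dynamicAllocation.enabled = true
        spark.dynamicAllocation.minExecutors = 1
        spark.dynamicAllocation.maxExecutors = 10
        spark.shuffle.service.enabled = true
      }
    }
  }
]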
I was able to fix the issues by providing the right versions of the
cassandra-all and thrift libraries.
(productId, saleCount) => {
  val outColFamKey = Map("prod_id" -> ByteBufferUtil.bytes(productId))
  val outKey: java.util.Map[String, ByteBuffer] = outColFamKey
  var outColFamVal = new ListBuffer[ByteBuffer]
  outColFamVal += ByteBufferUtil.bytes(saleCount)
  val outVal: java.util.List[ByteBuffer] = outColFamVal
  (outKey, outVal)
  }
}
casoutputCF.saveAsNewAPIHadoopFile(
  KeySpace,
  classOf[java.util.Map[String, ByteBuffer]],
  classOf[java.util.List[ByteBuffer]],
  classOf[CqlOutputFormat],
You shouldn't need to do anything special. Are you using a named context?
I'm not sure those work with SparkSqlJob.
By the way, there is a forum on Google groups for the Spark Job Server:
https://groups.google.com/forum/#!forum/spark-jobserver
On Thu, Apr 2, 2015 at 5:10 AM, Harika wrote:
Hi,
I am trying out Spark Jobserver
(https://github.com/spark-jobserver/spark-jobserver) for running Spark
SQL jobs.
I was able to start the server, but when I run my application (my Scala class
which extends SparkSqlJob), I am getting
Sorry for the long silence. We are able to:
1. Pass parameters from Vaadin (Java framework) to spark-jobserver using
the HttpURLConnection POST method.
2. Receive filtered (based on passed parameters) RDD results from
spark-jobserver using the HttpURLConnection GET method.
3. Finally, show the results
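[Editor's note: a minimal Scala sketch of step 1, using plain java.net.HttpURLConnection; the URL, appName and classPath are placeholders, not the poster's actual values:

  import java.io.OutputStreamWriter
  import java.net.{HttpURLConnection, URL}

  val url = new URL(
    "http://localhost:8090/jobs?appName=myapp&classPath=sparking.jobserver.MyJob")
  val conn = url.openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setDoOutput(true)

  // The POST body carries the job parameters as Typesafe-config text.
  val writer = new OutputStreamWriter(conn.getOutputStream)
  writer.write("input.string = a b c a b see")
  writer.close()

  println(s"HTTP ${conn.getResponseCode}")
  println(scala.io.Source.fromInputStream(conn.getInputStream).mkString)
]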
Thanks Vasu. Let me get back to you once I am done with trials.
Regards,
Vasu C
Thank you very much Vasu. Let me add some more points to my question. We are
developing a Java program for connecting spark-jobserver to Vaadin (Java
framework). Following is the sample code I wrote for connecting both (the
code works fine):

  URL url = null;
  HttpURLConnection connection = null;
Hi Sasi,
To pass parameters to spark-jobserver, use curl -d "input.string = a b c
a b see", and in the job server class use config.getString("input.string").
You can pass multiple parameters like starttime, endtime etc. and use
config.getString("...") to get them.
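[Editor's note: as a sketch, the job side then looks roughly like this under the classic spark-jobserver job API; the class and parameter names are illustrative:

  import com.typesafe.config.Config
  import org.apache.spark.SparkContext
  import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

  object WordCountJob extends SparkJob {
    override def validate(sc: SparkContext, config: Config): SparkJobValidation =
      SparkJobValid

    override def runJob(sc: SparkContext, config: Config): Any = {
      // Reads the value posted via: curl -d "input.string = a b c a b see"
      val input = config.getString("input.string")
      sc.parallelize(input.split(" ").toSeq).countByValue()
    }
  }
]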
Thank you Abhishek. The code works.
Hi all,
I'm able to submit Spark jobs through spark-jobserver, but this allows using
Spark only in yarn-client mode. I want to use Spark in yarn-cluster mode as
well, but jobserver does not allow it, as stated in the README file:
https://github.com/spark-jobserver/spark-jobserver.
Could you
Dear All,
For our requirement, we need to define a SparkContext with a SparkConf that
has Cassandra connection details. And this SparkContext needs to be shared
by subsequent runJobs and throughout the application.
So, how do we define a SparkContext with a Cassandra connection for
spark-jobserver?
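[Editor's note: a minimal sketch, assuming the spark-cassandra-connector; the host and app name are placeholders. With spark-jobserver the same keys can also be set in the context configuration:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("shared-cassandra-context")
    // The connector reads the cluster address from this setting.
    .set("spark.cassandra.connection.host", "127.0.0.1")

  val sc = new SparkContext(conf)
]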
Thank you Abhishek. That works.
There is a path, /tmp/spark-jobserver/file, where all the jars are kept by
default; deleting them from there should probably work.
How to remove submitted JARs from spark-jobserver?
We were able to resolve *SparkException: Job aborted due to stage failure: All
masters are unresponsive! Giving up* as well. Spark-jobserver is working fine
now and we need to experiment more.
Thank you guys.
We used the *curl --data-binary
@target/scala-2.10/spark-jobserver-examples_2.10-1.0.0.jar
localhost:8090/jars/sparking* command to upload,
as mentioned in the https://github.com/fedragon/spark-jobserver-examples link.
We had done some samples earlier for connecting Apache Cassandra to Spark
using the Scala language. Initially, we faced the same
Thank you Pankaj. We are able to create the uber JAR (very good for binding
all dependency JARs together) and run it on spark-jobserver. One step further
than where we were.
However, we are now facing *SparkException: Job aborted due to stage failure:
All masters are unresponsive! Giving up*. We may need to
joda-time-2.3.jar"
]
}
Now post the context to the job server:
radtech:spark-jobserver-example$ curl -d
src/main/resources/spark.context-settings.config -X POST
'localhost:8090/contexts/cassJob-context'
Then execute your job:
curl --data-binary
@target/scala-2.10/spark-jobse
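[Editor's note: a hedged guess at what that spark.context-settings.config contains, reconstructed from the fragment above; the jar paths are placeholders:

  dependent-jar-uris = [
    "file:///path/to/spark-cassandra-connector_2.10-1.1.0-alpha3.jar",
    "file:///path/to/joda-time-2.3.jar"
  ]
]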
its own as a regular spark app,
without using jobserver.
"jobserver test demo")
.setMaster("local[4]")
.setJars(Seq("C:/spark-jobserver/lib/spark-cassandra-connector_2.10-1.1.0-alpha3.jar"))
Am I missing something?
Meanwhile, I will try for Pankaj's reply of using uber jar.
--
Or you can use:
sc.addJar("/path/to/your/datastax.jar")
Thanks
Best Regards
I don't know much about spark-jobserver, but you can set jars programmatically
using the method setJars on SparkConf. Looking at your code, it seems that
you're importing classes from com.datastax.spark.connector._ to load data
from Cassandra, so you may need to add that datastax jar
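[Editor's note: a small sketch combining both suggestions from this thread, with placeholder paths — setJars at configuration time, or addJar on a live context:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("cassandra-job")
    // Ship the connector jar to the executors up front...
    .setJars(Seq("/path/to/spark-cassandra-connector_2.10-1.1.0-alpha3.jar"))
  val sc = new SparkContext(conf)

  // ...or attach a jar to an already-running context.
  sc.addJar("/path/to/your/datastax.jar")
]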
We are trying to use spark-jobserver for one of our requirements. We referred
to *https://github.com/fedragon/spark-jobserver-examples* and modified it a
little to match our requirement, as below -

/*** ProductionRDDBuilder.scala ***/
package sparking
package jobserver
// Import required libraries
Thanks Akhil, that will help a lot!
It turned out that spark-jobserver does not work in "development mode", but
if you deploy a server it works (it looks like the dependencies are not right
when running jobserver from sbt).
Hi all,
I'm investigating Spark for a new project and I'm trying to use
spark-jobserver because... I need to reuse and share RDDs, and from what I
read in the forum that's the "standard" :D
It turns out that spark-jobserver doesn't seem to work on YARN, or at least
it does not on 1.1.1.
My config is spark 1.1.1 (mo
Thanks Abhishek. We are good now with an answer to try.
List]" <
ml-node+s1001560n20898...@n3.nabble.com> wrote:
>
> The reason being, we had Vaadin (Java Framework) application which
displays data from Spark RDD, which in turn gets data from Cassandra. As we
know, we need to use Maven for building Spark API in Java.
>
> We tested t
The reason being, we have a Vaadin (Java framework) application which displays
data from a Spark RDD, which in turn gets data from Cassandra. As we know, we
need to use Maven for building against the Spark API in Java.
We tested the spark-jobserver using SBT and were able to run it. However, for
our requirement, we
Does my question make sense, or does it require some elaboration?
Sasi
Dear All,
We are trying to share RDDs across different sessions of the same Web
application (Java). We need to share a single RDD between those sessions. As
we understand from some posts, this is possible through Spark-JobServer.
Are there any guidelines you can provide to set up Spark-JobServer for Maven
Hi,
I'm working on the problem of remotely submitting apps to the Spark
master. I'm trying to use the spark-jobserver project
(https://github.com/ooyala/spark-jobserver) for that purpose.
For Scala apps it looks like things are working smoothly, but for Java
apps I have an issue with im
managing your Spark jobs and job history and status.
In order to make sure the project can continue to move forward
independently, with new features developed and contributions merged, we are
moving the project to a new GitHub organization. The new location is:
https://github.com/spark-jobserver/spark-jobserver
I'm looking for something like the ooyala spark-jobserver (
https://github.com/ooyala/spark-jobserver) that basically manages a
SparkContext for use from a REST or web application environment, but for
Python jobs instead of Scala.
Has anyone written something like this? Looking for a proje
That's good to know. I will try it out.
Thanks Romain
So far Spark Job Server does not work with Spark 1.0:
https://github.com/ooyala/spark-jobserver
So this works only with Spark 0.9 currently:
http://gethue.com/get-started-with-spark-deploy-spark-server-and-compute-pi-from-your-web-browser/
Romain
app/
Now I am trying to add the Spark editor to Hue. AFAIK, this requires:

  git clone https://github.com/ooyala/spark-jobserver.git
  cd spark-jobserver
  sbt
  re-start

This was successful after a lot of struggle with the proxy settings. However,
is this the Job Server itself? Will that mean the Job Server