Fwd: Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
cityScheduler being used ? > > Thanks > > On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. wrote: > >> Hi I am using the below command to run a spark job and I get an error >> like "Container preempted by scheduler" >> >> I am not sure if it's rela

Container preempted by scheduler - Spark job error

2016-06-02 Thread Prabeesh K.
Hi, I am using the command below to run a Spark job and I get an error like "Container preempted by scheduler". I am not sure if it's related to wrong usage of memory: nohup ~/spark1.3/bin/spark-submit \ --num-executors 50 \ --master yarn \ --deploy-mode cluster \ --queue adhoc \ --driver-memor
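The command in the preview is cut off. As a rough sketch only (all sizes and names below are placeholders, not the poster's actual settings), a full invocation with explicit memory flags might look like this; note that preemption on YARN is governed by the queue's scheduler policy, not by these flags alone:

```shell
# Sketch: executor counts and memory sizes are illustrative placeholders.
# "Container preempted by scheduler" usually means the YARN queue reclaimed
# resources; check the Capacity/Fair scheduler preemption settings as well.
nohup spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
  --num-executors 50 \
  --driver-memory 4g \
  --executor-memory 8g \
  --executor-cores 4 \
  my-job.jar &
```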

Re: Spark + Jupyter (IPython Notebook)

2015-08-18 Thread Prabeesh K.
Refer to this post on Spark + Jupyter + Docker: http://blog.prabeeshk.com/blog/2015/06/19/pyspark-notebook-with-docker/ On 18 August 2015 at 21:29, Jerry Lam wrote: > Hi Guru, > > Thanks! Great to hear that someone tried it in production. How do you like > it so far? > > Best Regards, > > Jerry > > >
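The linked post runs PySpark inside a Jupyter notebook via Docker. A hypothetical invocation along those lines (the image name is a placeholder; use whatever image the blog post builds):

```shell
# Placeholder image name; substitute the image from the linked blog post.
docker run -it --rm \
  -p 8888:8888 \
  -v "$PWD/notebooks:/notebooks" \
  your-pyspark-notebook-image
# Then open http://localhost:8888 in a browser.
```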

Re: Packaging Java + Python library

2015-04-13 Thread prabeesh k
Refer to this post: http://blog.prabeeshk.com/blog/2015/04/07/self-contained-pyspark-application/ On 13 April 2015 at 17:41, Punya Biswal wrote: > Dear Spark users, > > My team is working on a small library that builds on PySpark and is > organized like PySpark as well -- it has a JVM component (tha
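For a library with both a JVM half and a Python half, the usual pattern is to ship the jar and the zipped Python sources alongside the application. A sketch (all file names here are hypothetical):

```shell
# Sketch: mylib.jar, mylib.zip and app.py are hypothetical names.
zip -r mylib.zip mylib/            # package the Python half of the library
spark-submit \
  --jars mylib.jar \
  --py-files mylib.zip \
  app.py
# --jars puts the JVM component on the executor classpath;
# --py-files distributes the Python component to the workers.
```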

Re: How to learn Spark ?

2015-04-02 Thread prabeesh k
You can also refer to this blog: http://blog.prabeeshk.com/blog/archives/ On 2 April 2015 at 12:19, Star Guo wrote: > Hi, all > > > > I am new to here. Could you give me some suggestion to learn Spark ? > Thanks. > > > > Best Regards, > > Star Guo >

Re: Beginner in Spark

2015-02-10 Thread prabeesh k
Refer to this blog for a step-by-step installation of Spark on Ubuntu. On 7 February 2015 at 03:12, Matei Zaharia wrote: > You don't need HDFS or virtual machines to run Spark. You can just > download it, unzip it an

Re: Spark installation

2015-02-10 Thread prabeesh k
Refer to this blog for a step-by-step installation. On 11 February 2015 at 03:42, Mohit Singh wrote: > For local machine, I dont think there is any to install.. Just unzip and > go to $SPARK_DIR/bin/spark-shell and t

Re: Need a partner

2015-02-10 Thread prabeesh k
You can also refer to this course on edX: Introduction to Big Data with Apache Spark. On 11 February 201

Re: Kestrel and Spark Stream

2014-11-18 Thread prabeesh k
You can refer to the following link: https://github.com/prabeesh/Spark-Kestrel On Tue, Nov 18, 2014 at 3:51 PM, Akhil Das wrote: > You can implement a custom receiver > to > connect to Kestrel and use it. I think someone have alre

Re: Unable to run a Standalone job

2014-06-05 Thread prabeesh k
Try the sbt clean command before building the app, or delete the .ivy2 and .sbt folders (not a good method). Then try to rebuild the project. On Thu, Jun 5, 2014 at 11:45 AM, Sean Owen wrote: > I think this is SPARK-1949 again: https://github.com/apache/spark/pull/906 > I think this change fixed this is
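The advice above, as commands. Wiping the ivy and sbt caches forces every dependency to be re-resolved, which is slow and, as the post itself notes, heavy-handed:

```shell
sbt clean            # remove compiled classes before rebuilding
# Heavy-handed fallback: delete the dependency caches so sbt re-downloads
# everything (not a good method; only if a clean rebuild still fails).
# rm -rf ~/.ivy2 ~/.sbt
sbt package          # rebuild the project
```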

Re: Can't seem to link "external/twitter" classes from my own app

2014-06-05 Thread prabeesh k
Hi Jeremy, if you are using *addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.4")* in "project/plugin.sbt", you also need to edit "project/project/build.scala" with the same version (0.11.4), like: import sbt._ object Plugins extends Build { lazy val root = Project("root", file(".")
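The quoted snippet is cut off. A minimal sketch of the plugin declaration it describes, written via a heredoc (the version is the one mentioned in the post; check the sbt-assembly README for the version matching your sbt release, and note that sbt's conventional file name is plugins.sbt):

```shell
# Sketch: writes a minimal sbt-assembly plugin declaration.
mkdir -p project
cat > project/plugins.sbt <<'EOF'
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.4")
EOF
cat project/plugins.sbt
```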

Re: Re: mismatched hdfs protocol

2014-06-05 Thread prabeesh k
you! > > i am developing a java project in Eclipse IDE on Windows > in which spark 1.0.0 libraries are imported > and now i want to open HDFS files as input > the hadoop version of HDFS is 2.4.0 > > 2014-06-05 > -- > bluejoe2008 > > *From:

Re: mismatched hdfs protocol

2014-06-04 Thread prabeesh k
For building Spark against a particular version of Hadoop, refer to http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html On Thu, Jun 5, 2014 at 8:14 AM, Koert Kuipers wrote: > you have to build spark against the version of hadoop your are using > > > On Wed, Jun 4, 2014 at 10:25 PM,
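For Spark releases of that era the build accepted a Hadoop version property. A sketch, assuming Hadoop 2.4.0 (the exact profiles and flags vary by Spark release; see the linked build docs):

```shell
# Sketch: flags depend on the Spark release; check the linked documentation.
# Maven build against Hadoop 2.4.0:
mvn -Dhadoop.version=2.4.0 -Pyarn -DskipTests clean package
# or, with the bundled sbt launcher:
SPARK_HADOOP_VERSION=2.4.0 SPARK_YARN=true sbt/sbt assembly
```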

Unable to execute saveAsTextFile on multi node mesos

2014-05-31 Thread prabeesh k
Hi, scenario: read data from HDFS, apply a Hive query on it, and write the result back to HDFS. Schema creation, querying, and saveAsTextFile are working fine with the following modes: - local mode - mesos cluster with a single node - spark cluster with multiple nodes Schema creation and q

Re: Announcing Spark 1.0.0

2014-05-30 Thread prabeesh k
I forgot to hard refresh. Thanks. On Fri, May 30, 2014 at 4:18 PM, Patrick Wendell wrote: > It is updated - try holding "Shift + refresh" in your browser, you are > probably caching the page. > > On Fri, May 30, 2014 at 3:46 AM, prabeesh k wrote: > > Please update

Re: Announcing Spark 1.0.0

2014-05-30 Thread prabeesh k
Please update the http://spark.apache.org/docs/latest/ link On Fri, May 30, 2014 at 4:03 PM, Margusja wrote: > Is it possible to download pre build package? > http://mirror.symnds.com/software/Apache/incubator/ > spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz - gives me 404 > > Best regards, Ma

java.lang.OutOfMemoryError while running Shark on Mesos

2014-05-22 Thread prabeesh k
Hi, I am trying to apply an inner join in Shark using 64MB and 27MB files. I am able to run the following queries on Mesos - "SELECT * FROM geoLocation1 " - """ SELECT * FROM geoLocation1 WHERE country = '"US"' """ But while trying inner join as "SELECT * FROM geoLocation1 g1 INNER

Re: Better option to use Querying in Spark

2014-05-05 Thread prabeesh k
rustagi> > > > > On Tue, May 6, 2014 at 11:22 AM, prabeesh k wrote: > >> Hi, >> >> I have seen three different ways to query data from Spark >> >>1. Default SQL support( >> >> https://github.com/apache/spark/blob/master/examples/src

Better option to use Querying in Spark

2014-05-05 Thread prabeesh k
Hi, I have seen three different ways to query data from Spark 1. Default SQL support( https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/sql/examples/HiveFromSpark.scala ) 2. Shark 3. Blink DB I would like to know which one is more efficient. Regard

Re: Compile SimpleApp.scala encountered error, please can any one help?

2014-04-11 Thread prabeesh k
Ensure there is only one SimpleApp object in your project; also check whether there is any copy of SimpleApp.scala. Normally the file SimpleApp.scala is in src/main/scala or in the project root folder. On Sat, Apr 12, 2014 at 11:07 AM, jni2000 wrote: > Hi > > I am a new Spark user and try to test run it from
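A minimal layout matching the Spark quick-start guide, sketched as commands (the SimpleApp body here is illustrative, not the poster's code); the final grep is the check the reply suggests, that exactly one SimpleApp definition exists in the tree:

```shell
# Sketch: recreate the quick-start layout with a single SimpleApp object.
mkdir -p src/main/scala
cat > src/main/scala/SimpleApp.scala <<'EOF'
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Simple App"))
    println(sc.parallelize(1 to 10).sum())
    sc.stop()
  }
}
EOF
# Duplicate SimpleApp definitions elsewhere cause the compile error,
# so check that there is exactly one:
grep -rl "object SimpleApp" src
```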

Re: Spark packaging

2014-04-09 Thread prabeesh k
Please refer to http://prabstechblog.blogspot.in/2014/04/creating-single-jar-for-spark-project.html Regards, prabeesh On Wed, Apr 9, 2014 at 1:04 PM, Pradeep baji wrote: > Hi all, > > I am new to spark and trying to learn it. Is there any document which > describes how spark is packaged. ( like de

[BLOG] For Beginners

2014-04-07 Thread prabeesh k
Hi all, here I am sharing a blog for beginners about creating a standalone Spark Streaming application and bundling the app as a single runnable jar. Take a look and drop your comments on the blog page. http://prabstechblog.blogspot.in/2014/04/a-standalone-spark-application-in-scala.html http://prabstec
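The workflow the posts describe, sketched as commands (the jar name and main class below are placeholders for whatever the blog posts actually use):

```shell
# Sketch: names are placeholders; sbt-assembly bundles the app and its
# dependencies into one runnable jar.
sbt assembly
spark-submit \
  --class com.example.StreamingApp \
  target/scala-2.10/streaming-app-assembly.jar
```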