Re: Tools to manage workflows on Spark

2015-02-28 Thread Mayur Rustagi
Sorry, not really. Spork is a way to migrate your existing Pig scripts to Spark, or to write new Pig jobs that can execute on Spark. For orchestration you are better off using Oozie, especially if you are using other execution engines/systems besides Spark. Regards, Mayur Rustagi Ph: +1 (760) 203 3257

Re: Tools to manage workflows on Spark

2015-02-28 Thread Mayur Rustagi
We do maintain it, but in the Apache repo itself. However, Pig cannot do orchestration for you. I am not sure what you are looking for from Pig in this context. Regards, Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoid.com <http://www.sigmoidanalytics.com/> @mayur_rustagi <http://www.tw

Re: Can spark job server be used to visualize streaming data?

2015-02-13 Thread Mayur Rustagi
Frankly, there is no good/standard way to visualize streaming data. So far I have found HBase a good intermediate store for data from streams, visualized via a Play-based framework and d3.js. Regards Mayur On Fri Feb 13 2015 at 4:22:58 PM Kevin (Sangwoo) Kim wrote: > I'm not very sure for CDH 5.3,

Re: Modifying an RDD in forEach

2014-12-06 Thread Mayur Rustagi
You'll benefit from viewing Matei's talk at Yahoo on Spark internals and how it optimizes execution of iterative jobs. The simple answer is: 1. Spark doesn't materialize an RDD when you do an iteration, but lazily captures the transformation functions in the RDD (only the function and closure, no data operation actu

Re: Streaming: getting total count over all windows

2014-11-13 Thread Mayur Rustagi
So if you want the count from the beginning to the end of time, the interface is updateStateByKey; if only over a particular set of windows, you can construct broader windows from smaller windows/batches. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.
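A minimal sketch of the updateStateByKey approach, assuming a DStream[(String, Long)] of per-batch counts named counts and a checkpoint directory already set (both names are illustrative):

    // Running total per key from the beginning of the stream.
    // updateStateByKey requires ssc.checkpoint(...) to have been called.
    val totals = counts.updateStateByKey[Long] { (batchValues: Seq[Long], state: Option[Long]) =>
      Some(batchValues.sum + state.getOrElse(0L))
    }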

Re: Communication between Driver and Executors

2014-11-13 Thread Mayur Rustagi
heavily; in native Spark applications it's rarely required. Do you have a use case like that? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Nov 14, 2014 at 10:28 AM, Tobias Pfeiffer wrote: > Hi, > > (

Re: Joined RDD

2014-11-13 Thread Mayur Rustagi
First of all, any action is only performed when you trigger a collect. When you trigger collect, at that point it retrieves data from disk, joins the datasets together, and delivers them to you. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twit

Re: flatMap followed by mapPartitions

2014-11-12 Thread Mayur Rustagi
flatMap would have to shuffle data only if the output RDD is expected to be partitioned by some key: RDD[X].flatMap(X => RDD[Y]). If it has to shuffle, it should be local. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On

Re: Using partitioning to speed up queries in Shark

2014-11-07 Thread Mayur Rustagi
-dev list & +user list. Shark is not officially supported anymore, so you are better off moving to Spark SQL. Shark doesn't support Hive partitioning logic anyway; it has its own version of partitioning on in-memory blocks, but that is independent of whether you partition your data in Hive or not. M

Re: Why RDD is not cached?

2014-10-28 Thread Mayur Rustagi
What is the partition count of the RDD? It's possible that you don't have enough memory to store the whole RDD on a single machine. Can you try forcibly repartitioning the RDD and then caching? Regards Mayur On Tue Oct 28 2014 at 1:19:09 AM shahab wrote: > I used Cache followed by a "count" on RDD
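A hedged sketch of that suggestion; the partition count (100) is purely illustrative:

    val repartitioned = rdd.repartition(100) // spread the data over more, smaller partitions
    repartitioned.cache()
    repartitioned.count()                    // an action forces the cached RDD to materialize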

Re: input split size

2014-10-18 Thread Mayur Rustagi
Does it retain the order if it's pulling from the HDFS blocks? Meaning, if file1 => a, b, c partitions in order and I convert to a 2-partition read, will it map to ab, c or a, bc, or could it also be a, cb? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <

Re: rule engine based on spark

2014-10-14 Thread Mayur Rustagi
We are developing something similar on top of Streaming. Could you detail some of the rule functionality you are looking for? We are developing a DSL for data processing on top of streaming as well as static data on Spark. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http

Re: JavaPairDStream saveAsTextFile

2014-10-08 Thread Mayur Rustagi
That's a cryptic way to say there should be a JIRA for it :) Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Oct 9, 2014 at 11:46 AM, Sean Owen wrote: > Yeah it's not there. I imagine it was simply ne

Re: Setup/Cleanup for RDD closures?

2014-10-03 Thread Mayur Rustagi
The current approach is to use mapPartitions: initialize the connection at the beginning, iterate through the data, and close off the connection at the end. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Oct 3, 2014 a
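A minimal sketch of that setup/cleanup pattern, assuming a JDBC source (the URL and the process function are placeholders):

    import java.sql.DriverManager

    rdd.mapPartitions { records =>
      val conn = DriverManager.getConnection("jdbc:mysql://host/db") // placeholder URL
      // Materialize before closing, otherwise the lazy iterator would outlive the connection.
      val out = records.map(r => process(conn, r)).toList
      conn.close()
      out.iterator
    }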

Re: Spark Streaming for time consuming job

2014-10-01 Thread Mayur Rustagi
ct on it. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Sep 30, 2014 at 3:22 PM, Eko Susilo wrote: > Hi All, > > I have a problem that i would like to consult about spark streaming. >

Re: Processing multiple request in cluster

2014-09-25 Thread Mayur Rustagi
For 2, you can use the fair scheduler so that application tasks are scheduled more fairly. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Sep 25, 2014 at 12:32 PM, Akhil Das wrote: > You can

Re: Serving data

2014-09-13 Thread Mayur Rustagi
You can cache data in memory and query it using Spark Job Server. Most folks dump data down to a queue/db for retrieval. You can batch up data and store it into Parquet partitions as well, and query it using another Spark SQL shell; the JDBC driver in Spark SQL is part of 1.1, I believe.  -- Re

Re: Network requirements between Driver, Master, and Slave

2014-09-12 Thread Mayur Rustagi
into the embedded driver model in YARN, where the driver will also run inside the cluster, and hence reliability and connectivity are a given.  -- Regards, Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi On Fri, Sep 12, 2014 at 6:46 PM, Jim Carroll wrote: > Hi

Re: single worker vs multiple workers on each machine

2014-09-11 Thread Mayur Rustagi
Another aspect to keep in mind is that a JVM above 8-10 GB starts to misbehave; it is typically better to split up at ~15 GB intervals. If you are choosing machines, 10 GB/core is an approximation to maintain. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.

Re: Spark Streaming and database access (e.g. MySQL)

2014-09-10 Thread Mayur Rustagi
I think she is checking for blanks? But if the RDD is blank then nothing will happen, no DB connections etc. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Sep 8, 2014 at 1:32 PM, Tobias Pfeiffer wrote:

Re: Spark caching questions

2014-09-10 Thread Mayur Rustagi
Cached RDDs do not survive SparkContext deletion (they are scoped on a per-SparkContext basis). I am not sure what you mean by disk-based cache eviction; if you cache more RDDs than you have disk space, the result will not be very pretty :) Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: Records - Input Byte

2014-09-09 Thread Mayur Rustagi
What do you mean by "control your input"? Are you trying to pace your Spark Streaming by the number of words? If so, that is not supported as of now; you can only control time and consume all files within that time period.  -- Regards, Mayur Rustagi Ph: +1 (760) 203

Re: how to choose right DStream batch interval

2014-09-07 Thread Mayur Rustagi
to go with 5 sec processing; the alternative is to process data in two pipelines (0.5 & 5 sec) in two Spark Streaming jobs and overwrite the results of one with the other. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On S

Re: Array and RDDs

2014-09-07 Thread Mayur Rustagi
her, you can also create a (node, bytearray) combo and join the two RDDs together. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Sat, Sep 6, 2014 at 10:51 AM, Deep Pradhan wrote: > Hi, > I have an input

Re: Q: About scenarios where driver execution flow may block...

2014-09-07 Thread Mayur Rustagi
ation is not expected to alter anything apart from the RDD it is created upon, hence Spark may not realize this dependency and may try to parallelize the two operations, causing errors. Bottom line: as long as you make all your dependencies explicit in RDDs, Spark will take care of the magic. Mayur Ru

Re: Spark Streaming and database access (e.g. MySQL)

2014-09-07 Thread Mayur Rustagi
l replay updates to MySQL and may cause data corruption. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Sun, Sep 7, 2014 at 11:54 AM, jchen wrote: > Hi, > > Has someone tried using Spark S

Update on Pig on Spark initiative

2014-08-27 Thread Mayur Rustagi
hesh Kalakoti (Sigmoid Analytics). Not to mention the Spark & Pig communities. Regards Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi>

Re: Spark Streaming Output to DB

2014-08-26 Thread Mayur Rustagi
I would suggest you use the JDBC connector in mapPartitions instead of map, as JDBC connections are costly and can really impact your performance. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Aug 26, 2014
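A sketch of the per-partition connection pattern for streaming writes; the JDBC URL, table, and record type are illustrative assumptions:

    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // One connection per partition, not per record.
        val conn = java.sql.DriverManager.getConnection("jdbc:mysql://host/db") // placeholder
        val stmt = conn.prepareStatement("INSERT INTO events VALUES (?)")       // illustrative
        records.foreach { r => stmt.setString(1, r.toString); stmt.executeUpdate() }
        conn.close()
      }
    }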

Re: DStream start a separate DStream

2014-08-22 Thread Mayur Rustagi
Why don't you directly use the DStream created as the output of the windowing process? Any reason? Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Aug 21, 2014 at 8:38 PM, Josh J wrote: > Hi, > >

Re: DStream cannot write to text file

2014-08-21 Thread Mayur Rustagi
Is your HDFS running? Can Spark access it? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Aug 21, 2014 at 1:15 PM, cuongpham92 wrote: > I'm sorry, I just forgot "/data" after "hd

Re: DStream cannot write to text file

2014-08-21 Thread Mayur Rustagi
MyDStreamVariable.saveAsTextFiles("hdfs://localhost:50075/data", "output") should work... is it throwing an exception? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Aug 21, 2014 at 12

Re: spark - reading hfds files every 5 minutes

2014-08-21 Thread Mayur Rustagi
val teenagers = sqc.sql("SELECT * FROM data") teenagers.saveAsParquetFile("people.parquet") }) You can also try the insertInto API instead of registerAsTable, but I haven't used it myself... also you need to dynamically change the Parquet file name for every DStream... Mayur Rusta

Re: Accessing to elements in JavaDStream

2014-08-21 Thread Mayur Rustagi
transform your way :) MyDStream.transform(RDD => RDD.map(wordChanger)) Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Aug 20, 2014 at 1:25 PM, cuongpham92 wrote: > Hi, > I am a newbie to Spark Stre

Re: DStream cannot write to text file

2014-08-20 Thread Mayur Rustagi
Provide the full path of where to write (like hdfs:// etc.). Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Aug 21, 2014 at 8:29 AM, cuongpham92 wrote: > Hi, > I tried to write to text file from DStr

Re: Mapping with extra arguments

2014-08-20 Thread Mayur Rustagi
: Int, number: Int) : Int = { if (number == 1) return accumulator else factorialWithAccumulator(accumulator * number, number - 1) } factorialWithAccumulator(1, number) } MyRDD.map(factorial(5)) Mayur Rustagi Ph: +1 (760) 203 3257 http
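The snippet above curries an extra argument into a map function; a hedged sketch of the general pattern, assuming MyRDD is an RDD[Int] (scale and factor are illustrative names):

    def scale(factor: Int)(x: Int): Int = x * factor
    val scaled = MyRDD.map(scale(5))    // curried form: fixes factor, maps over elements
    val scaled2 = MyRDD.map(x => x * 5) // equivalent closure form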

Re: Question regarding spark data partition and coalesce. Need info on my use case.

2014-08-16 Thread Mayur Rustagi
ence the task overhead of scheduling so many tasks mostly kills the performance. import org.apache.spark.RangePartitioner; var file=sc.textFile("") var partitionedFile=file.map(x=>(x,1)) var data= partitionedFile.partitionBy(new RangePartitioner(3, partitionedFile)) Mayur Rustagi

Re: Script to deploy spark to Google compute engine

2014-08-14 Thread Mayur Rustagi
We have a version that has been submitted as a PR: https://github.com/sigmoidanalytics/spark_gce/tree/for_spark. We are working on a more generic implementation based on lib_cloud... would love to collaborate if you are interested. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: Low Performance of Shark over Spark.

2014-08-08 Thread Mayur Rustagi
push down predicates smartly and hence get better performance (similar to Impala); 2. cache data at a partition level from Hive and operate on those instead. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi>

Re: Shared variable in Spark Streaming

2014-08-08 Thread Mayur Rustagi
You can also use the updateStateByKey interface to store this shared variable. As for the count, you can use foreachRDD to run counts on the RDD and then store that as another RDD, or put it in updateStateByKey. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twit

Re: Column width limits?

2014-08-06 Thread Mayur Rustagi
Spark breaks data across machines at the partition level, so the realistic limit is on the partition size. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Aug 7, 2014 at 8:41 AM, Daniel, Ronald (ELS-SDG) &

Re: Configuration setup and Connection refused

2014-08-05 Thread Mayur Rustagi
Then don't specify hdfs when you read the file. Also, the community is generally quite active in responding; just be a little patient. If possible, also look at the Spark training videos from Spark Summit 2014 and/or the AMPLab training on the Spark website. Mayur Rustagi Ph: +1 (760) 203 3257 http

Re: Configuration setup and Connection refused

2014-08-05 Thread Mayur Rustagi
port/IP that Spark cannot access. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Aug 5, 2014 at 11:33 PM, alamin.ishak wrote: > Hi, > Anyone? Any input would be much appreciated > > Thanks, > Ami

Re: How to read from OpenTSDB using PySpark (or Scala Spark)?

2014-08-01 Thread Mayur Rustagi
You can design a receiver to receive data every 5 sec (batch size) and pull the last 5 sec of data from the HTTP API; you can further shard data by time within those 5 sec to distribute it. You can also bind TSDB nodes to each receiver to translate HBase data to improve performance. Mayur Rus
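A skeleton of such a polling receiver; the endpoint, the 5-second interval, and the fetch helper are illustrative assumptions, not OpenTSDB specifics:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    class TsdbReceiver(endpoint: String) extends Receiver[String](StorageLevel.MEMORY_ONLY) {
      def onStart(): Unit = new Thread("tsdb-poller") {
        override def run(): Unit = while (!isStopped) {
          fetch(endpoint).foreach(store) // push each record into Spark's block manager
          Thread.sleep(5000)             // align polling with the 5 sec batch size
        }
      }.start()
      def onStop(): Unit = () // the polling thread exits once isStopped is true
      private def fetch(url: String): Seq[String] = Seq.empty // stub for the HTTP call
    }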

Re: How to read from OpenTSDB using PySpark (or Scala Spark)?

2014-08-01 Thread Mayur Rustagi
The HTTP API would be the best bet. I assume by graph you mean the charts created by TSDB frontends. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Aug 1, 2014 at 4:48 PM, bumble123 wrote: > I'm trying

Re: Accumulator and Accumulable vs classic MR

2014-08-01 Thread Mayur Rustagi
The only blocker is that an accumulator can only be "added" to from the slaves and only read on the master. If that constraint fits you well, you can fire away. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Au
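A minimal illustration of that constraint, assuming an RDD[String] (the accumulator name is illustrative):

    val errorCount = sc.accumulator(0, "errors") // the pre-2.0 accumulator API
    rdd.foreach { line =>
      if (line.isEmpty) errorCount += 1 // tasks on the slaves may only add
    }
    println(errorCount.value) // only the driver may read the value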

Re: RDD to DStream

2014-08-01 Thread Mayur Rustagi
sed back into the folder; it's a hack but much less headache. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Aug 1, 2014 at 10:21 AM, Aniket Bhatnagar < aniket.bhatna...@gmail.com> wrote: > Hi ever

Re: How to read from OpenTSDB using PySpark (or Scala Spark)?

2014-08-01 Thread Mayur Rustagi
What is the use case you are looking at? TSDB is not designed for you to query data directly from HBase; ideally you should use the REST API if you are looking to do thin analysis. Are you looking to do whole reprocessing of TSDB? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: Installing Spark 0.9.1 on EMR Cluster

2014-08-01 Thread Mayur Rustagi
Have you tried https://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923? There is also a 0.9.1 version they talked about in one of the meetups. Check out the S3 bucket in the guide; it should have a 0.9.1 version as well. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: persisting RDD in memory

2014-08-01 Thread Mayur Rustagi
Hi, yes, data would be cached again in each SparkContext. Regards Mayur On Friday, August 1, 2014, Sujee Maniyam wrote: > Hi all, > I have a scenario of a web application submitting multiple jobs to Spark. > These jobs may be operating on the same RDD. > > It is possible to cache() the RDD durin

Re: The function of ClosureCleaner.clean

2014-07-28 Thread Mayur Rustagi
ence objects inside the class, so you may want to send across those objects but not the whole parent class. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jul 28, 2014 at 8:28 PM, Wang, Jensen wrote:

Spark as a application library vs infra

2014-07-27 Thread Mayur Rustagi
Based on some discussions with my application users, I have been trying to come up with a standard way to deploy applications built on Spark: 1. Bundle the version of Spark with your application and ask users to store it in HDFS before referring to it in YARN to boot your application. 2. Provide ways to

Re: What if there are large, read-only variables shared by all map functions?

2014-07-23 Thread Mayur Rustagi
Have a look at broadcast variables. On Tuesday, July 22, 2014, Parthus wrote: > Hi there, > > I was wondering if anybody could help me find an efficient way to make a > MapReduce program like this: > > 1) For each map function, it need access some huge files, which is around > 6GB > > 2) These
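A minimal sketch of the broadcast pattern: load the large read-only data once on the driver, broadcast it, and reference it inside map (loadBigMap, the key type, and the default value are placeholders):

    val lookupTable = sc.broadcast(loadBigMap()) // shipped once per executor, not per task
    val enriched = rdd.map { key =>
      lookupTable.value.getOrElse(key, "unknown") // read-only access on the executors
    }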

Re: persistent HDFS instance for cluster restarts/destroys

2014-07-23 Thread Mayur Rustagi
Yes, you lose the data. You can add machines, but it will require you to restart the cluster. Also, adding is manual when you add nodes. Regards Mayur On Wednesday, July 23, 2014, durga wrote: > Hi All, > I have a question, > > For my company , we are planning to use spark-ec2 scripts to create cluster >

Re: Spark job tracker.

2014-07-09 Thread Mayur Rustagi
var sem = 0 sc.addSparkListener(new SparkListener { override def onTaskStart(taskStart: SparkListenerTaskStart) { sem += 1 } }) sc = spark context Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rust

Re: Need advice to create an objectfile of set of images from Spark

2014-07-09 Thread Mayur Rustagi
RDDs can only keep objects. How do you plan to encode these images so that they can be loaded? Keeping the whole image as a single object in one RDD would perhaps not be super optimized. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <ht

Re: Re: Pig 0.13, Spark, Spork

2014-07-09 Thread Mayur Rustagi
Also, it's far from bug-free :) Let me know if you need any help trying it out. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jul 9, 2014 at 12:58 PM, Akhil Das wrote: > Hi Bertrand, > > We've

Re: Filtering data during the read

2014-07-09 Thread Mayur Rustagi
Hi, Spark does that out of the box for you :) It compresses the execution steps down as much as possible. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jul 9, 2014 at 3:15 PM, Konstantin Kudry

Re: Is the order of messages guaranteed in a DStream?

2014-07-07 Thread Mayur Rustagi
If you receive data through multiple receivers across the cluster, I don't think any order can be guaranteed. Order in distributed systems is tough. On Tuesday, July 8, 2014, Yan Fang wrote: > I know the order of processing DStream is guaranteed. Wondering if the > order of messages in one DStre

Re: Pig 0.13, Spark, Spork

2014-07-07 Thread Mayur Rustagi
That version is old :). We are not forking Pig but cleanly separating out the Pig execution engine. Let me know if you are willing to give it a go. Also, we would love to know which features of Pig you are using. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: Pig 0.13, Spark, Spork

2014-07-07 Thread Mayur Rustagi
Hi, We have fixed many major issues around Spork and are deploying it with some customers. We would be happy to provide a working version for you to try out; we are looking for more folks to try it out and submit bugs. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanaly

Re: window analysis with Spark and Spark streaming

2014-07-05 Thread Mayur Rustagi
The key idea is to simulate your app time as you enter data. So you can connect Spark Streaming to a queue and insert data into it spaced by time. Easier said than done :). What are the parallelism issues you are hitting with your static approach? On Friday, July 4, 2014, alessandro finamore wrote: >

Re: Spark job tracker.

2014-07-04 Thread Mayur Rustagi
The application server doesn't provide a JSON API, unlike the cluster interface (8080). If you are okay with patching Spark, you can use our patch to create a JSON API, or you can use the SparkListener interface in your application to get that info out. Mayur Rustagi Ph: +1 (760) 203 3257 http

Re: Visualize task distribution in cluster

2014-07-04 Thread Mayur Rustagi
You'll get most of that information from the Mesos interface. You may not get data-transfer information in particular. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Jul 3, 2014 at 6:28 AM, Tobias Pfei

Re: LIMIT with offset in SQL queries

2014-07-04 Thread Mayur Rustagi
What I typically do is use row_number and a subquery to filter based on that. It works out pretty well and reduces the iteration. I think an offset solution based directly on windowing would be useful. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi &l
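A hedged sketch of the row_number trick, assuming a SQL engine with window-function support and a registered table named events (table name, ordering column, and page bounds are all illustrative):

    val page = sqlContext.sql("""
      SELECT * FROM (
        SELECT *, row_number() OVER (ORDER BY id) AS rn FROM events
      ) t
      WHERE rn > 100 AND rn <= 120
    """) // emulates LIMIT 20 OFFSET 100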

Re: Spark memory optimization

2014-07-04 Thread Mayur Rustagi
work, so you may be hitting some of those walls too. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jul 4, 2014 at 2:36 PM, Igor Pernek wrote: > Hi all! > > I have a folder with 150

Re: Serializer or Out-of-Memory issues?

2014-07-02 Thread Mayur Rustagi
Your executors are going out of memory, and the subsequent tasks scheduled on the scheduler are then also failing, hence the lost TID (task ID). Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jun 30, 2014 at 7:4

Re: Error: UnionPartition cannot be cast to org.apache.spark.rdd.HadoopPartition

2014-07-02 Thread Mayur Rustagi
Two job contexts cannot share data. Are you collecting the data to the master and then sending it to the other context? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jul 2, 2014 at 11:57 AM, Honey Joshi

Re: Callbacks on freeing up of RDDs

2014-07-01 Thread Mayur Rustagi
A lot of the RDDs that you create in code may not even be constructed, as the task layer is optimized in the DAG scheduler. The closest is onUnpersistRDD in SparkListener. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi>

Re: Help alleviating OOM errors

2014-07-01 Thread Mayur Rustagi
Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jun 30, 2014 at 8:09 PM, Yana Kadiyska wrote: > Hi, > > our cluster seems to have a really hard time with OOM errors on the > executor. Periodical

Re: spark streaming counter metrics

2014-07-01 Thread Mayur Rustagi
You may be able to mix StreamingListener and SparkListener to get meaningful information about your task; however, you need to connect a lot of pieces to make sense of the flow. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_

Re: Help understanding spark.task.maxFailures

2014-07-01 Thread Mayur Rustagi
stragglers? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Jul 1, 2014 at 12:40 AM, Yana Kadiyska wrote: > Hi community, this one should be an easy one: > > I have left spark.task.maxFailures

Re: Error: UnionPartition cannot be cast to org.apache.spark.rdd.HadoopPartition

2014-07-01 Thread Mayur Rustagi
Ideally you should be converting the RDD to a SchemaRDD? Are you creating a UnionRDD to join across DStream RDDs? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Jul 1, 2014 at 3:11 PM, Honey Joshi wrote: > H

Re: Lost TID: Loss was due to fetch failure from BlockManagerId

2014-07-01 Thread Mayur Rustagi
r in the worker log @ 192.168.222.164 or any of the machines where the crash log is displayed. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jul 2, 2014 at 7:51 AM, Yana Kadiyska wrote: > A lot of things c

Re: Distribute data from Kafka evenly on cluster

2014-06-27 Thread Mayur Rustagi
How about this? https://groups.google.com/forum/#!topic/spark-users/ntPQUZFJt4M Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Sat, Jun 28, 2014 at 10:19 AM, Tobias Pfeiffer wrote: > Hi, > > I h

Re: Map with filter on JavaRdd

2014-06-27 Thread Mayur Rustagi
It happens in a single operation itself. You may write them separately, but the stages are performed together when possible. You will see only one task in the output of your application. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.

Re: Spark job tracker.

2014-06-26 Thread Mayur Rustagi
You can use the SparkListener interface to track the tasks. Another option is to use the JSON patch (https://github.com/apache/spark/pull/882) and track tasks with the JSON API. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On F

Google Cloud Engine adds out of the box Spark/Shark support

2014-06-26 Thread Mayur Rustagi
https://groups.google.com/forum/#!topic/gcp-hadoop-announce/EfQms8tK5cE I suspect they are using their own builds. Has anybody had a chance to look at it? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi>

Re: ElasticSearch enrich

2014-06-24 Thread Mayur Rustagi
s on RDD. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jun 25, 2014 at 4:52 AM, Peng Cheng wrote: > I'm afraid persisting connection across two tasks is a dangerous act as > they > can't be

Re: ElasticSearch enrich

2014-06-24 Thread Mayur Rustagi
hether your class is safely serializable. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jun 25, 2014 at 4:12 AM, boci wrote: > Hi guys, > > I have a small question. I want to create a "Worker"

Re: partitions, coalesce() and parallelism

2014-06-24 Thread Mayur Rustagi
To be clear, the number of map tasks is determined by the number of partitions inside the RDD, hence the suggestion by Nicholas. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Wed, Jun 25, 2014 at 4:17 AM, Nicholas C
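A small sketch of the two knobs involved; the partition counts are illustrative:

    val wider = rdd.repartition(64)  // more partitions means more map tasks, at the cost of a shuffle
    val narrower = rdd.coalesce(8)   // fewer partitions, usually without a shuffle
    println(wider.partitions.length) // the parallelism of the next stage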

Re: Questions regarding different spark pre-built packages

2014-06-24 Thread Mayur Rustagi
The HDFS driver keeps changing and breaking compatibility, hence all the build versions. If you don't use HDFS/YARN, you can safely ignore them. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Jun 24, 2014 a

Re: How data is distributed while processing in spark cluster?

2014-06-24 Thread Mayur Rustagi
Using HDFS locality: the workers call for the data from HDFS/queues etc., unless you use parallelize, in which case it is sent from the driver (typically on the master) to the worker nodes. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi>

Re: How to Reload Spark Configuration Files

2014-06-24 Thread Mayur Rustagi
Not really. You are better off using a cluster manager like Mesos or YARN for this. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Jun 24, 2014 at 11:35 AM, Sirisha Devineni < sirisha_devin...@persist

Re: balancing RDDs

2014-06-24 Thread Mayur Rustagi
This would be really useful, especially for Shark, where a shift in partitioning affects all subsequent queries unless the task scheduling time beats spark.locality.wait. It can cause overall low performance for all subsequent tasks. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com

Re: Efficiently doing an analysis with Cartesian product (pyspark)

2014-06-24 Thread Mayur Rustagi
api is in those. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Tue, Jun 24, 2014 at 3:33 AM, Aaron wrote: > Sorry, I got my sample outputs wrong > > (1,1) -> 400 > (1,2) -> 500 > (2,2)->

Re: Serialization problem in Spark

2014-06-24 Thread Mayur Rustagi
Did you try to register the class with the Kryo serializer? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jun 23, 2014 at 7:00 PM, rrussell25 wrote: > Thanks for pointer...tried Kryo and ran into a stra

Re: Problems running Spark job on mesos in fine-grained mode

2014-06-24 Thread Mayur Rustagi
Hi Sebastien, are you using PySpark by any chance? Is that working for you (post the patch)? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jun 23, 2014 at 1:51 PM, Fedechicco wrote: > I'm ge

Re: Kafka Streaming - Error Could not compute split

2014-06-24 Thread Mayur Rustagi
I have seen this when I prevent spilling of shuffle data to disk. Can you change the shuffle memory fraction? Is your data spilling to disk? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Mon, Jun 23, 2014 at 12

Re: Persistent Local Node variables

2014-06-24 Thread Mayur Rustagi
in it for quite a long time, you can: - simplistically store it in HDFS and load it each time; - store it in a table and try to pull it with Spark SQL every time (experimental); - use the Ooyala job server to cache the data and do all processing using that. Regards Mayur Mayur Rus

Re: Set the number/memory of workers under mesos

2014-06-21 Thread Mayur Rustagi
> On Fri, Jun 20, 2014 at 1:40 PM, Mayur Rustagi > wrote: > >> You should be able to configure in spark context in Spark shell. >> spark.cores.max & memory. >> Regards >> Mayur >> >> Mayur Rustagi >> Ph: +1 (760) 203 3257 >> http://www.sig

Re: How to terminate job from the task code?

2014-06-21 Thread Mayur Rustagi
You can terminate a job group from the SparkContext; you'll have to send the SparkContext across to your task. On 21 Jun 2014 01:09, "Piotr Kołaczkowski" wrote: > If the task detects an unrecoverable error, i.e. an error that we can't > expect to fix by retrying nor by moving the task to another node, how
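A sketch of the job-group mechanism from the driver side (the group id and description are illustrative):

    sc.setJobGroup("batch-42", "cancellable work", interruptOnCancel = true)
    // ... launch actions on this thread; from another thread (e.g. when a task
    // signals an unrecoverable error back to the driver) cancel the whole group:
    sc.cancelJobGroup("batch-42")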

Re: Set the number/memory of workers under mesos

2014-06-20 Thread Mayur Rustagi
You should be able to configure it in the SparkContext in the Spark shell: spark.cores.max and memory. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jun 20, 2014 at 4:30 PM, Shuo Xiang wrote:

Re: Spark and RDF

2014-06-20 Thread Mayur Rustagi
Or a separate RDD for SPARQL operations à la SchemaRDD; operators for SPARQL can be defined there. Not a bad idea :) Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jun 20, 2014 at 3:56 PM, andy petrella

Re: Spark and RDF

2014-06-20 Thread Mayur Rustagi
You are looking to create Shark operators for RDF? Since the Shark backend is shifting to Spark SQL, that would be slightly hard; a much better effort would be to shift Gremlin to Spark (though a much beefier one :) ). Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi

Re: Running Spark alongside Hadoop

2014-06-20 Thread Mayur Rustagi
spark demand resources from same machines etc. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jun 20, 2014 at 3:41 PM, Sameer Tilak wrote: > Dear Spark users, > > I have a small 4 node Hadoop cluster. Eac

Re: Possible approaches for adding extra metadata (Spark Streaming)?

2014-06-20 Thread Mayur Rustagi
You can apply transformations on the RDDs inside DStreams using transform, or any number of operations. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jun 20, 2014 at 2:16 PM, Shrikar ar

Re: specifying fields for join()

2014-06-13 Thread Mayur Rustagi
You can resolve the columns to create keys from them, then join. Is that what you did? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Thu, Jun 12, 2014 at 9:24 PM, SK wrote: > This issue is

Re: multiple passes in mapPartitions

2014-06-13 Thread Mayur Rustagi
Sorry if this is a dumb question, but why not several calls to mapPartitions sequentially? Are you looking to avoid function serialization, or is your function damaging partitions? Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rust

Re: Master not seeing recovered nodes("Got heartbeat from unregistered worker ....")

2014-06-13 Thread Mayur Rustagi
I have also had trouble with workers joining the working set, and have typically moved to a Mesos-based setup. Frankly, for high availability you are better off using a cluster manager. Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rust

Re: list of persisted rdds

2014-06-13 Thread Mayur Rustagi
val myRdds = sc.getPersistentRDDs; assert(myRdds.size === 1) It'll return a map. It's pretty old, available from 0.8.0 onwards. Regards Mayur Mayur Rustagi Ph: +1 (760) 203 3257 http://www.sigmoidanalytics.com @mayur_rustagi <https://twitter.com/mayur_rustagi> On Fri, Jun 13, 2014 at 9
