Frankly, there is no good/standard way to visualize streaming data. So far I have
found HBase to be a good intermediate store for data coming from streams, and I
visualize it with a Play-based framework & d3.js.
Regards
Mayur
On Fri Feb 13 2015 at 4:22:58 PM Kevin (Sangwoo) Kim
wrote:
> I'm not very sure for CDH 5.3,
We do maintain it, but in the Apache repo itself. However, Pig cannot do
orchestration for you. I am not sure what you are looking for from Pig in
this context.
Regards,
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoid.com <http://www.sigmoidanalytics.com/>
@mayur_rustagi <http://www.tw
Sorry, not really. Spork is a way to migrate your existing Pig scripts to
Spark, or to write new Pig jobs that can execute on Spark.
For orchestration you are better off using Oozie, especially if you are
using other execution engines/systems besides Spark.
Regards,
Mayur Rustagi
Ph: +1 (760) 203 3257
So are you using Java 7 or 8?
Java 7 doesn't clean closures properly, so you need to define a static class as a
function & then call that in your operations. Otherwise it will try to send the
whole enclosing class along with the function.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanal
You'll have to restart the cluster: create a copy of your existing slave,
add it to the slaves file on the master & restart the cluster.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jun 3, 2014 at 4
Did you use Docker or plain LXC specifically?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jun 3, 2014 at 1:40 PM, MrAsanjar . wrote:
> thanks guys, that fixed my problem. As you might have noticed,
ed "this$0", and thus the closure cleaner doesn't
find it and can't "clean" it properly.
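For illustration, a minimal Scala sketch of the same workaround: keep the function in a standalone object so the closure does not drag the enclosing class (and its this$0 / $outer reference) along. Object and method names are illustrative.

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Keeping the function in a standalone object means the closure captures only
// the function itself, not an enclosing class instance.
object WordLength {
  def compute(s: String): Int = s.length
}

object ClosureExample {
  def run(sc: SparkContext): RDD[Int] = {
    val words = sc.parallelize(Seq("spark", "closure", "cleaner"))
    words.map(WordLength.compute)   // only the small function is shipped to executors
  }
}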
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jun 4, 2014 at 4:18 PM, nilmish wrote:
You can look to create a DStream directly from the S3 location using a file
stream. If you want to use any specific logic you can rely on QueueStream &
read the data yourself from S3, process it & push it into the RDD queue.
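A minimal sketch of both options (bucket names, paths and the batch interval are placeholders):

import scala.collection.mutable
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}

object S3StreamSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("s3-stream"), Seconds(10))

    // Option 1: file stream that picks up new files landing under the S3 prefix.
    val fromS3 = ssc.textFileStream("s3n://my-bucket/incoming/")

    // Option 2: read and preprocess the data yourself, then feed it in via a queue stream.
    val rddQueue = new mutable.Queue[RDD[String]]()
    val fromQueue = ssc.queueStream(rddQueue)

    fromS3.count().print()
    fromQueue.count().print()

    ssc.start()
    // elsewhere: rddQueue += ssc.sparkContext.textFile("s3n://my-bucket/batch-0001/")
    ssc.awaitTermination()
  }
}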
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur
Where are you getting the serialization error? It's likely to be a different
problem. Which class is not getting serialized?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Jun 5, 2014 at 6:32 PM, Vibhor Banga
This may or may not work for you depending on the internals of the MongoDB client.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jun 4, 2014 at 10:27 PM, Samarth Mailinglist <
mailinglistsama...@gmail.com>
And then an "are you sure" after that :)
On 7 Jun 2014 06:59, "Mikhail Strebkov" wrote:
> Nick Chammas wrote
> > I think it would be better to have the kill link flush right, leaving a
> > large amount of space between it and the stage detail link.
>
> I think even better would be to have a pop-up con
.
>
> Thanks
> Gianluca
>
> On 06/06/2014 02:19, Mayur Rustagi wrote:
>
> You can look to create a Dstream directly from S3 location using file
> stream. If you want to use any specific logic you can rely on Queuestream &
> read data yourself from S3, process it
QueueStream example is in Spark Streaming examples:
http://www.boyunjian.com/javasrc/org.spark-project/spark-examples_2.9.3/0.7.2/_/spark/streaming/examples/QueueStream.scala
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rust
Are you able to use the Hadoop input/output reader for HBase with the new Hadoop API
reader?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Jun 12, 2014 at 7:49 AM, gaurav.dasgupta
wrote:
> Is there anyone el
val myRdds = sc.getPersistentRDDs   // returns a Map[Int, RDD[_]] of the cached RDDs
assert(myRdds.size == 1)
It'll return a map. It's pretty old; it's available from 0.8.0 onwards.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jun 13, 2014 at 9
I have also had trouble with workers joining the working set. I have typically
moved to a Mesos-based setup. Frankly, for high availability you are better
off using a cluster manager.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rust
Sorry if this is a dumb question, but why not make several calls to
mapPartitions sequentially? Are you looking to avoid function
serialization, or is your function damaging partitions?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rust
You can resolve the columns to create keys from them, then join. Is that
what you did?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Jun 12, 2014 at 9:24 PM, SK wrote:
> This issue is
You can apply transformations on the RDDs inside DStreams using transform or
any number of operations.
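A small sketch of transform, joining each micro-batch against a static reference RDD (names and types are illustrative):

import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.dstream.DStream

// Apply an arbitrary RDD-to-RDD function to every micro-batch.
def enrich(words: DStream[String], reference: RDD[(String, Int)]): DStream[(String, (Int, Int))] =
  words.transform { rdd =>
    rdd.map(w => (w, 1)).join(reference)
  }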
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jun 20, 2014 at 2:16 PM, Shrikar ar
spark
demand resources from same machines etc.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jun 20, 2014 at 3:41 PM, Sameer Tilak wrote:
> Dear Spark users,
>
> I have a small 4 node Hadoop cluster. Eac
Are you looking to create Shark operators for RDF? Since the Shark backend is
shifting to Spark SQL that would be slightly hard; a much better effort would
be to shift Gremlin to Spark (though a much beefier one :) ).
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi
Or a separate RDD for SPARQL operations, a la SchemaRDD. Operators for
SPARQL can be defined there. Not a bad idea :)
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jun 20, 2014 at 3:56 PM, andy petrella
You should be able to configure it in the Spark context in the Spark shell:
spark.cores.max & the memory settings.
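A minimal sketch of setting those properties when building the context yourself (values are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Cap the total cores and the per-executor memory the context asks for.
val conf = new SparkConf()
  .setAppName("capped-shell")
  .set("spark.cores.max", "4")          // total cores taken from the cluster
  .set("spark.executor.memory", "2g")   // memory per executor
val sc = new SparkContext(conf)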
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jun 20, 2014 at 4:30 PM, Shuo Xiang wrote:
You can terminate a job group from the Spark context. You'll have to send
the Spark context across to your task.
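A minimal sketch of the job-group mechanism (the group id and description are placeholders):

import org.apache.spark.SparkContext

// Tag work with a job group, then cancel the whole group from wherever the
// SparkContext is available.
def runCancellable(sc: SparkContext): Unit = {
  sc.setJobGroup("nightly-etl", "nightly ETL run")
  // ... launch actions here ...
}

def abortNightlyEtl(sc: SparkContext): Unit = {
  sc.cancelJobGroup("nightly-etl")   // cancels all active jobs in the group
}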
On 21 Jun 2014 01:09, "Piotr Kołaczkowski" wrote:
> If the task detects unrecoverable error, i.e. an error that we can't
> expect to fix by retrying nor moving the task to another node, how
> On Fri, Jun 20, 2014 at 1:40 PM, Mayur Rustagi
> wrote:
>
>> You should be able to configure in spark context in Spark shell.
>> spark.cores.max & memory.
>> Regards
>> Mayur
>>
>> Mayur Rustagi
>> Ph: +1 (760) 203 3257
>> http://www.sig
in it for quite a long time you can:
- Simply store it in HDFS & load it each time
- Store it in a table & pull it with Spark SQL every time (experimental)
- Use the Ooyala Jobserver to cache the data & do all processing using that.
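For the Spark SQL option, a minimal sketch against the 1.x API of the time (paths, the table name and the Record schema are placeholders; registerAsTable became registerTempTable in later releases):

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

case class Record(id: Int, value: String)

// Register the dataset as a table and cache it, then hit it with SQL each time
// within the same long-lived context.
def cacheForReuse(sc: SparkContext): Unit = {
  val sqlContext = new SQLContext(sc)
  import sqlContext._   // brings in the RDD-to-SchemaRDD conversion in this era

  val records = sc.textFile("hdfs:///data/records").map { line =>
    val Array(id, value) = line.split(",")
    Record(id.toInt, value)
  }
  records.registerAsTable("records")
  sqlContext.cacheTable("records")

  val hot = sqlContext.sql("SELECT value FROM records WHERE id < 100")
  hot.collect()
}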
Regards
Mayur
Mayur Rus
I have seen this when I prevent spilling of shuffle data to disk. Can you
change the shuffle memory fraction? Is your data spilling to disk?
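For reference, a hedged sketch of the shuffle knobs of this era (values are illustrative, not recommendations):

import org.apache.spark.SparkConf

// Give the shuffle more of the heap and keep spilling to disk enabled,
// so large shuffles degrade gracefully instead of failing.
val conf = new SparkConf()
  .set("spark.shuffle.spill", "true")            // allow spilling shuffle data to disk
  .set("spark.shuffle.memoryFraction", "0.4")    // default is 0.2 in this era
  .set("spark.storage.memoryFraction", "0.5")    // shrink the storage share to compensate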
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jun 23, 2014 at 12
Hi Sebastien,
Are you using PySpark by any chance? Is that working for you (post the
patch)?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jun 23, 2014 at 1:51 PM, Fedechicco wrote:
> I'm ge
Did you try to register the class with the Kryo serializer?
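A minimal sketch of Kryo registration via a registrator class (MyRecord and MyRegistrator are placeholders):

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.{KryoRegistrator, KryoSerializer}

case class MyRecord(id: Long, payload: Array[Byte])

class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[MyRecord])   // register every class you ship through the shuffle
  }
}

val conf = new SparkConf()
  .set("spark.serializer", classOf[KryoSerializer].getName)
  .set("spark.kryo.registrator", classOf[MyRegistrator].getName)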
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jun 23, 2014 at 7:00 PM, rrussell25 wrote:
> Thanks for pointer...tried Kryo and ran into a stra
api is in those.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jun 24, 2014 at 3:33 AM, Aaron wrote:
> Sorry, I got my sample outputs wrong
>
> (1,1) -> 400
> (1,2) -> 500
> (2,2)->
This would be really useful, especially for Shark, where a shift of
partitioning affects all subsequent queries unless the task scheduling time
beats spark.locality.wait. It can cause overall low performance for all
subsequent tasks.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
Not really. You are better off using a cluster manager like Mesos or Yarn
for this.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jun 24, 2014 at 11:35 AM, Sirisha Devineni <
sirisha_devin...@persist
Using HDFS locality. The workers pull the data from HDFS/a queue, etc.,
unless you use parallelize, in which case it's sent from the driver (typically
on the master) to the worker nodes.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
The HDFS driver keeps changing & breaking compatibility, hence all the build
versions. If you don't use HDFS/YARN then you can safely ignore it.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jun 24, 2014 a
To be clear, the number of map tasks is determined by the number of partitions
inside the RDD, hence the suggestion by Nicholas.
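A small sketch of both ways to raise the partition (and hence map task) count; the path and numbers are placeholders:

import org.apache.spark.SparkContext

def widen(sc: SparkContext): Unit = {
  // The number of map tasks equals the number of partitions in the RDD.
  val logs = sc.textFile("hdfs:///logs/2014-06/", 64)   // ask for at least 64 partitions at load time
  val wider = logs.repartition(128)                     // or reshuffle into 128 partitions afterwards
  println(wider.partitions.length)                      // => 128
}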
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jun 25, 2014 at 4:17 AM, Nicholas C
whether your class is safely serializable.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jun 25, 2014 at 4:12 AM, boci wrote:
> Hi guys,
>
> I have a small question. I want to create a "Worker"
s on RDD.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jun 25, 2014 at 4:52 AM, Peng Cheng wrote:
> I'm afraid persisting connection across two tasks is a dangerous act as
> they
> can't be
https://groups.google.com/forum/#!topic/gcp-hadoop-announce/EfQms8tK5cE
I suspect they are using their own builds. Has anybody had a chance to look
at it?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
You can use the SparkListener interface to track the tasks. Another option is to use
the JSON patch (https://github.com/apache/spark/pull/882) & track tasks with
the JSON API.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On F
It happens in a single operation itself. You may write it separately, but
the stages are performed together where possible. You will see only one
task in the output of your application.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.
How about this?
https://groups.google.com/forum/#!topic/spark-users/ntPQUZFJt4M
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Sat, Jun 28, 2014 at 10:19 AM, Tobias Pfeiffer wrote:
> Hi,
>
> I h
Check for the error in the worker log @ 192.168.222.164,
or on any of the machines where the crash log is displayed.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jul 2, 2014 at 7:51 AM, Yana Kadiyska
wrote:
> A lot of things c
Ideally you should be converting the RDD to a SchemaRDD?
Are you creating a UnionRDD to join across the DStream RDDs?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jul 1, 2014 at 3:11 PM, Honey Joshi wrote:
> H
stragglers?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Jul 1, 2014 at 12:40 AM, Yana Kadiyska
wrote:
> Hi community, this one should be an easy one:
>
> I have left spark.task.maxFailures
You may be able to mix StreamingListener & SparkListener to get meaningful
information about your task. However, you need to connect a lot of pieces to
make sense of the flow.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jun 30, 2014 at 8:09 PM, Yana Kadiyska
wrote:
> Hi,
>
> our cluster seems to have a really hard time with OOM errors on the
> executor. Periodical
A lot of the RDDs that you create in code may not even be constructed, as the
task layer is optimized in the DAG scheduler. The closest is onUnpersistRDD
in SparkListener.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
Two job contexts cannot share data. Are you collecting the data to the
master & then sending it to the other context?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jul 2, 2014 at 11:57 AM, Honey Joshi
Your executors are going out of memory & then subsequent tasks scheduled on
the scheduler are also failing, hence the lost TID (task ID).
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jun 30, 2014 at 7:4
work, so you
may be hitting some of those walls too.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Jul 4, 2014 at 2:36 PM, Igor Pernek wrote:
> Hi all!
>
> I have a folder with 150
What I typically do is use row_number & a subquery to filter based on that.
It works out pretty well and reduces the iteration. I think an offset solution
based on windowing directly would be useful.
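For illustration, a hedged sketch of the row_number approach (table and column names are hypothetical; depending on the Spark/Hive version you may need hql(...) instead of sql(...)):

import org.apache.spark.sql.hive.HiveContext

// Number the rows once, then page through them with a subquery filter.
def fetchPage(hc: HiveContext, offset: Int, pageSize: Int) =
  hc.sql(s"""
    SELECT id, event_time
    FROM (
      SELECT id, event_time,
             row_number() OVER (ORDER BY event_time) AS rn
      FROM events
    ) numbered
    WHERE rn > $offset AND rn <= ${offset + pageSize}
  """)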
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi &l
You'll get most of that information from the Mesos interface. You may not get
data-transfer information in particular.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Jul 3, 2014 at 6:28 AM, Tobias Pfei
The application server doesn't provide a JSON API, unlike the cluster
interface (8080).
If you are okay with patching Spark, you can use our patch to create a JSON API,
or you can use the SparkListener interface in your application to get that info
out.
Mayur Rustagi
Ph: +1 (760) 203 3257
http
The key idea is to simulate your app time as you enter data. So you can
connect Spark Streaming to a queue and insert data into it spaced by time.
Easier said than done :). What are the parallelism issues you are hitting
with your static approach?
On Friday, July 4, 2014, alessandro finamore
wrote:
>
Hi,
We have fixed many major issues around Spork & deploying it with some
customers. We would be happy to provide a working version for you to try out.
We are looking for more folks to try it out & submit bugs.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanaly
That version is old :).
We are not forking Pig but cleanly separating out the Pig execution engine. Let
me know if you are willing to give it a go.
Also, I would love to know what features of Pig you are using.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
If you receive data through multiple receivers across the cluster, I don't
think any order can be guaranteed. Ordering in distributed systems is tough.
On Tuesday, July 8, 2014, Yan Fang wrote:
> I know the order of processing DStream is guaranteed. Wondering if the
> order of messages in one DStre
Hi,
Spark does that out of the box for you :)
It compresses the execution steps down as much as possible.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jul 9, 2014 at 3:15 PM, Konstantin Kudry
Also, it's far from bug-free :)
Let me know if you need any help trying it out.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Jul 9, 2014 at 12:58 PM, Akhil Das
wrote:
> Hi Bertrand,
>
> We've
An RDD can only hold objects. How do you plan to encode these images so that
they can be loaded? Keeping a whole image as a single object in one RDD
would perhaps not be super optimized.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <ht
// sc is your SparkContext; SparkListener lives in org.apache.spark.scheduler
var sem = 0   // must be a var (or an AtomicInteger) to be incremented
sc.addSparkListener(new SparkListener {
  override def onTaskStart(taskStart: SparkListenerTaskStart) {
    sem += 1   // bump the counter every time a task starts
  }
})
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rust
Yes, you lose the data.
You can add machines, but it will require you to restart the cluster. Also,
adding is manual; you add the nodes yourself.
Regards
Mayur
On Wednesday, July 23, 2014, durga wrote:
> Hi All,
> I have a question,
>
> For my company , we are planning to use spark-ec2 scripts to create cluster
>
Have a look at broadcast variables.
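A minimal sketch of the broadcast pattern for a large read-only lookup (paths and the parsing are placeholders):

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

// Ship the big lookup table to every executor once instead of once per task.
def enrichWithLookup(sc: SparkContext): Unit = {
  val lookup: Map[String, Int] = sc.textFile("hdfs:///ref/lookup.csv")
    .map { line => val Array(k, v) = line.split(","); (k, v.toInt) }
    .collectAsMap()
    .toMap

  val lookupBc = sc.broadcast(lookup)   // read-only copy cached on each executor

  sc.textFile("hdfs:///data/events")
    .map(key => lookupBc.value.getOrElse(key, -1))
    .saveAsTextFile("hdfs:///out/enriched")
}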
On Tuesday, July 22, 2014, Parthus wrote:
> Hi there,
>
> I was wondering if anybody could help me find an efficient way to make a
> MapReduce program like this:
>
> 1) For each map function, it need access some huge files, which is around
> 6GB
>
> 2) These
Based on some discussions with my application users, I have been trying to come
up with a standard way to deploy applications built on Spark:
1. Bundle the version of Spark with your application and ask users to store it in
HDFS before referring to it in YARN to boot your application.
2. Provide ways to
reference objects inside the class, so you may want to send across those
objects but not the whole parent class.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Jul 28, 2014 at 8:28 PM, Wang, Jensen wrote:
Hi
Yes, data would be cached again in each Spark context.
Regards
Mayur
On Friday, August 1, 2014, Sujee Maniyam wrote:
> Hi all,
> I have a scenario of a web application submitting multiple jobs to Spark.
> These jobs may be operating on the same RDD.
>
> It is possible to cache() the RDD durin
Have you tried this?
https://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923
There is also a 0.9.1 version they talked about in one of the meetups.
Check out the S3 bucket in the guide; it should have a 0.9.1 version as
well.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
What is the use case you are looking at?
TSDB is not designed for you to query data directly from HBase; ideally you
should use the REST API if you are looking to do thin analysis. Are you looking
to do whole reprocessing of TSDB?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
sed back into the folder; it's a hack but much less of a headache.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Aug 1, 2014 at 10:21 AM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:
> Hi ever
The only blocker is that an accumulator can only be "added" to from the slaves & only read
on the master. If that constraint fits you well, you can fire away.
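A small sketch of that constraint in practice (the path and the record check are placeholders):

import org.apache.spark.SparkContext

// Tasks may only add to the accumulator; only the driver may read .value.
def countBadRecords(sc: SparkContext): Long = {
  val badRecords = sc.accumulator(0L, "bad records")
  sc.textFile("hdfs:///data/input").foreach { line =>
    if (line.split(",").length != 3) badRecords += 1L   // adding is fine on the workers
  }
  badRecords.value   // reading is only valid back on the driver
}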
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Au
The HTTP API would be the best bet. I assume by graph you mean the charts
created by the TSDB frontends.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Fri, Aug 1, 2014 at 4:48 PM, bumble123 wrote:
> I'm trying
You can design a receiver to receive data every 5 sec (the batch size) & pull
the last 5 sec of data from the HTTP API; you can further shard the data by time
within those 5 sec to distribute it further.
You can also bind TSDB nodes to each receiver to translate the HBase data to
improve performance.
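A hedged sketch of such a polling receiver (the endpoint, interval and line-based parsing are all placeholders):

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Pull from an HTTP endpoint roughly once per batch and hand the rows to Spark.
class HttpPollingReceiver(endpoint: String, pollMillis: Long)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK) {

  @volatile private var running = true

  override def onStart(): Unit = {
    new Thread("http-poller") {
      override def run(): Unit = {
        while (running && !isStopped()) {
          val body = scala.io.Source.fromURL(endpoint).mkString
          body.split("\n").foreach(store)   // store() pushes records into Spark
          Thread.sleep(pollMillis)
        }
      }
    }.start()
  }

  override def onStop(): Unit = { running = false }
}

It would be plugged in with ssc.receiverStream(new HttpPollingReceiver(...)), one receiver per shard if you split by time.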
Mayur Rus
port/ip that
spark cannot access
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Aug 5, 2014 at 11:33 PM, alamin.ishak
wrote:
> Hi,
> Anyone? Any input would be much appreciated
>
> Thanks,
> Ami
Then don't specify hdfs:// when you read the file.
Also, the community is quite active in responding in general; just be a little
patient.
Also, if possible, look at the Spark training videos from Spark Summit 2014
and/or the AMPLab training on the Spark website.
Mayur Rustagi
Ph: +1 (760) 203 3257
http
Spark breaks data across machines at the partition level, so the realistic limit
is on the partition size.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Aug 7, 2014 at 8:41 AM, Daniel, Ronald (ELS-SDG) &
You can also use the updateStateByKey interface to store this shared variable. As
for the count, you can use foreachRDD to run counts on the RDD & then store that as
another RDD or put it in updateStateByKey.
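A minimal sketch of the updateStateByKey + foreachRDD combination (function and variable names are illustrative; it assumes checkpointing is enabled on the StreamingContext):

import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.dstream.DStream

// Keep a running count per key across batches; foreachRDD can then inspect
// or persist the latest state.
def runningCounts(words: DStream[String]): DStream[(String, Long)] = {
  val counts = words.map((_, 1L)).updateStateByKey[Long] {
    (newValues: Seq[Long], state: Option[Long]) => Some(state.getOrElse(0L) + newValues.sum)
  }
  counts.foreachRDD { rdd =>
    println(s"distinct keys so far: ${rdd.count()}")
  }
  counts
}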
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twit
push down predicates smartly and hence get
better performance (similar to Impala).
2. Cache data at a partition level from Hive & operate on those instead.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
We have a version that is submitted as a PR:
https://github.com/sigmoidanalytics/spark_gce/tree/for_spark
We are working on a more generic implementation based on lib_cloud; we would
love to collaborate if you are interested.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
hence the
task overhead of scheduling so many tasks mostly kills the performance.
import org.apache.spark.RangePartitioner
val file = sc.textFile("")   // input path left blank in the original
val partitionedFile = file.map(x => (x, 1))   // key the records so they can be range partitioned
val data = partitionedFile.partitionBy(new RangePartitioner(3, partitionedFile))
Mayur Rustagi
def factorial(number: Int): Int = {
  def factorialWithAccumulator(accumulator: Int, number: Int): Int = {
    if (number == 1)
      accumulator
    else
      factorialWithAccumulator(accumulator * number, number - 1)
  }
  factorialWithAccumulator(1, number)
}
MyRDD.map(factorial)   // apply the tail-recursive factorial to every element
Mayur Rustagi
Ph: +1 (760) 203 3257
http
Provide the full path of where to write (like hdfs:// etc.).
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Aug 21, 2014 at 8:29 AM, cuongpham92 wrote:
> Hi,
> I tried to write to text file from DStr
transform your way :)
MyDStream.transform(RDD => RDD.map(wordChanger))
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Wed, Aug 20, 2014 at 1:25 PM, cuongpham92 wrote:
> Hi,
> I am a newbie to Spark Stre
val teenagers = sqc.sql("SELECT * FROM data")
teenagers.saveAsParquetFile("people.parquet")
})
You can also try the insertInto API instead of registerAsTable, but I haven't used
it myself.
Also, you need to dynamically change the Parquet file name for every DStream...
Mayur Rusta
MyDStreamVariable.saveAsTextFiles("hdfs://localhost:50075/data", "output")
This should work. Is it throwing an exception?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Aug 21, 2014 at 12
Is your HDFS running? Can Spark access it?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Aug 21, 2014 at 1:15 PM, cuongpham92 wrote:
> I'm sorry, I just forgot "/data" after "hd
Why don't you directly use the DStream created as the output of the windowing process?
Any reason?
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Aug 21, 2014 at 8:38 PM, Josh J wrote:
> Hi,
>
>
I would suggest you use the JDBC connector in mapPartitions instead of map,
as JDBC connections are costly & can really impact your performance.
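A minimal sketch of the same idea, using foreachPartition for the write path so each partition opens one connection (URL, table and schema are placeholders; the JDBC driver jar must be on the executor classpath):

import java.sql.DriverManager
import org.apache.spark.rdd.RDD

// One connection per partition instead of one per record.
def writeToDb(records: RDD[(Int, String)], jdbcUrl: String): Unit = {
  records.foreachPartition { partition =>
    val conn = DriverManager.getConnection(jdbcUrl)
    val stmt = conn.prepareStatement("INSERT INTO events (id, payload) VALUES (?, ?)")
    try {
      partition.foreach { case (id, payload) =>
        stmt.setInt(1, id)
        stmt.setString(2, payload)
        stmt.executeUpdate()
      }
    } finally {
      stmt.close()
      conn.close()
    }
  }
}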
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Aug 26, 2014
hesh Kalakoti (Sigmoid Analytics)
Not to mention the Spark & Pig communities.
Regards
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
will replay updates to MySQL & may cause data corruption.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Sun, Sep 7, 2014 at 11:54 AM, jchen wrote:
> Hi,
>
> Has someone tried using Spark S
The transformation is not
expected to alter anything apart from the RDD it is created upon, hence
Spark may not realize this dependency & may try to parallelize the two
operations, causing an error. Bottom line: as long as you make all your
dependencies explicit in the RDD, Spark will take care of the magic.
Mayur Ru
Further, you can also create a (node,
bytearray) combo & join the two RDDs together.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Sat, Sep 6, 2014 at 10:51 AM, Deep Pradhan
wrote:
> Hi,
> I have an input
to go with 5
sec processing; the alternative is to process the data in two pipelines (0.5 sec & 5 sec)
in two Spark Streaming jobs & overwrite the results of one with the other.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On S
What do you mean by "control your input"? Are you trying to pace your Spark
Streaming by the number of words? If so, that is not supported as of now; you can
only control time & consume all files within that time period.
--
Regards,
Mayur Rustagi
Ph: +1 (760) 203
Cached RDDs do not survive SparkContext deletion (they are scoped on a per-SparkContext
basis).
I am not sure what you mean by disk-based cache eviction; if you cache more
RDDs than you have disk space, the result will not be very pretty :)
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
I think she is checking for blanks?
But if the RDD is blank then nothing will happen: no DB connections, etc.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Mon, Sep 8, 2014 at 1:32 PM, Tobias Pfeiffer wrote:
Another aspect to keep in mind is that the JVM starts to misbehave above 8-10 GB.
It's typically better to split it up at ~15 GB intervals.
If you are choosing machines, 10 GB/core is an approximate ratio to maintain.
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.
into the embedded driver model
in YARN, where the driver will also run inside the cluster & hence reliability &
connectivity are a given.
--
Regards,
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi
On Fri, Sep 12, 2014 at 6:46 PM, Jim Carroll
wrote:
> Hi
You can cache data in memory & query it using the Spark Job Server.
Most folks dump the data down to a queue/DB for retrieval.
You can batch up the data & store it into Parquet partitions as well, & query it using
another Spark SQL shell; the JDBC driver in Spark SQL is part of 1.1, I believe.
--
Re
For 2, you can use the fair scheduler so that application tasks can be
scheduled more fairly.
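A minimal sketch of turning on fair scheduling (the pool name is a placeholder and assumes a matching entry in fairscheduler.xml):

import org.apache.spark.{SparkConf, SparkContext}

// Switch from FIFO to fair scheduling so concurrent jobs share the cluster.
val conf = new SparkConf()
  .setAppName("shared-context")
  .set("spark.scheduler.mode", "FAIR")
val sc = new SparkContext(conf)

// Optionally place jobs into a named pool.
sc.setLocalProperty("spark.scheduler.pool", "interactive")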
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Thu, Sep 25, 2014 at 12:32 PM, Akhil Das
wrote:
> You can
ct on it.
Regards
Mayur
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>
On Tue, Sep 30, 2014 at 3:22 PM, Eko Susilo
wrote:
> Hi All,
>
> I have a problem that i would like to consult about spark streaming.
>