Re: RepartitionByKey Behavior

2018-06-26 Thread Chawla,Sumit
Thanks everyone. As Nathan suggested, I ended up collecting the distinct keys first and then assigning IDs to each key explicitly. Regards Sumit Chawla On Fri, Jun 22, 2018 at 7:29 AM, Nathan Kronenfeld <nkronenfeld@uncharted.software> wrote: > On Thu, Jun 21, 2018 at 4:51 PM, Cha
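
A minimal sketch of that approach, assuming example data and names (not the exact code from the thread): collect the distinct keys, assign each an explicit integer ID, and use that ID as the partition number so every key gets a partition of its own.

    from pyspark import SparkContext

    sc = SparkContext(appName="explicit-key-partitioning")
    rdd = sc.parallelize([('a', 5), ('d', 8), ('b', 6), ('a', 8), ('c', 3)])  # example data

    # 1. Collect the distinct keys and assign each an explicit integer ID.
    keys = rdd.keys().distinct().collect()
    key_to_id = {k: i for i, k in enumerate(keys)}

    # 2. One partition per key, routed by the explicit IDs instead of the hash.
    partitioned = rdd.partitionBy(len(key_to_id), lambda k: key_to_id[k])

    # Each partition now holds exactly one key.
    print(partitioned.glom().map(lambda part: sorted({k for k, _ in part})).collect())

The key_to_id dict lives on the driver and is shipped to executors inside the lambda's closure, which is fine as long as the number of distinct keys stays small.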

Re: RepartitionByKey Behavior

2018-06-21 Thread Chawla,Sumit
Based on a code read, it looks like Spark does a modulo of the key to pick the partition. Keys c and b end up mapping to the same value. What's the best partitioning scheme to deal with this? Regards Sumit Chawla On Thu, Jun 21, 2018 at 4:51 PM, Chawla,Sumit wrote: > Hi > > I have been trying to th
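
For context on the modulo behavior: PySpark's default partitionFunc is portable_hash, and a key lands in partition portable_hash(key) % numPartitions, so any two keys whose hashes are congruent modulo the partition count share a partition. A small sketch (with an assumed partition count) to check which keys collide:

    import os
    os.environ.setdefault("PYTHONHASHSEED", "0")  # portable_hash insists this is set;
    # for truly reproducible hashes, export it before starting Python
    from pyspark.rdd import portable_hash

    num_partitions = 4  # assumed for illustration
    for key in ['a', 'b', 'c', 'd']:
        print(key, portable_hash(key) % num_partitions)
    # Keys that print the same number end up in the same partition.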

RepartitionByKey Behavior

2018-06-21 Thread Chawla,Sumit
Hi I have been trying to do this simple operation. I want to land all values with one key in the same partition, and not have any other key in the same partition. Is this possible? b and c always end up mixed together in the same partition. rdd = sc.parallelize([('a', 5), ('d', 8), ('b
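
A quick way to see the mixing (a sketch with assumed example data) is to inspect each partition with glom after the default hash partitioning; the explicit partitionFunc shown under the 2018-06-26 reply above avoids it:

    from pyspark import SparkContext

    sc = SparkContext(appName="inspect-partitions")
    rdd = sc.parallelize([('a', 5), ('d', 8), ('b', 6), ('c', 3)]).partitionBy(4)

    for i, part in enumerate(rdd.glom().collect()):
        print("partition", i, "->", sorted({k for k, _ in part}))
    # With the default hash partitioner, two keys can hash into the same
    # bucket, which is why 'b' and 'c' keep landing together.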

OutOfDirectMemoryError for Spark 2.2

2018-03-05 Thread Chawla,Sumit
Hi All I have a job which processes a large dataset. All items in the dataset are unrelated. To save on cluster resources, I process these items in chunks. Since chunks are independent of each other, I start and shut down the Spark context for each chunk. This allows me to keep the DAG smaller a
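
A minimal sketch of the per-chunk pattern described here (chunk names and the per-chunk work are placeholders):

    from pyspark import SparkConf, SparkContext

    chunks = ["chunk-001", "chunk-002"]  # placeholder chunk identifiers

    for chunk in chunks:
        sc = SparkContext(conf=SparkConf().setAppName("process-" + chunk))
        try:
            # Placeholder for the real per-chunk processing; because each chunk is
            # independent, the DAG never grows beyond a single chunk.
            sc.parallelize(range(10)).map(lambda x: x * x).count()
        finally:
            sc.stop()  # release cluster resources before starting the next chunk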

Re: What is correct behavior for spark.task.maxFailures?

2017-04-24 Thread Chawla,Sumit
ly the same thing. The default for that setting is >> 1h instead of 0. It’s better to have a non-zero default to avoid what >> you’re seeing. >> >> rb >> >> >> On Fri, Apr 21, 2017 at 1:32 PM, Chawla,Sumit >> wrote: >> >>> I am seeing a
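
The property being discussed is not named in this excerpt; in Spark 2.1+ the blacklist timeout looks roughly like the sketch below (hedged, not necessarily the exact setting from the reply):

    from pyspark import SparkConf

    conf = (SparkConf()
            .set("spark.blacklist.enabled", "true")
            .set("spark.blacklist.timeout", "1h"))  # how long a blacklisted executor/node stays excluded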

What is correct behavior for spark.task.maxFailures?

2017-04-21 Thread Chawla,Sumit
I am seeing a strange issue. I had a misbehaving slave that failed the entire job. I have set spark.task.maxFailures to 8 for my job. It seems like all task retries happen on the same slave in case of failure. My expectation was that the task would be retried on a different slave in case of failure, and
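
One way to push retries onto a different slave in Spark 2.1+ is the (then experimental) blacklist mechanism; a hedged sketch of the relevant settings, with illustrative values:

    from pyspark import SparkConf

    conf = (SparkConf()
            .set("spark.task.maxFailures", "8")
            .set("spark.blacklist.enabled", "true")
            # Cap how many attempts of one task may run on a single executor or node,
            # so later retries are forced onto other slaves.
            .set("spark.blacklist.task.maxTaskAttemptsPerExecutor", "1")
            .set("spark.blacklist.task.maxTaskAttemptsPerNode", "2"))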

Unique Partition Id per partition

2017-01-31 Thread Chawla,Sumit
Hi All I have an RDD, which I partition based on some key, and then call sc.runJob for each partition. Inside this function, I assign each partition a unique key using the following: "%s_%s" % (id(part), int(round(time.time()))) This is to make sure that each partition produces separate bookkeeping st
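
Note that id(part) plus a timestamp is not guaranteed unique (object IDs can be reused and clocks can collide across executors). Spark already exposes a stable, unique partition index; a sketch using mapPartitionsWithIndex (the tagging logic is assumed):

    from pyspark import SparkContext

    sc = SparkContext(appName="partition-tags")
    rdd = sc.parallelize(range(100), 8)  # any RDD works; 8 partitions here

    def tag_partition(index, iterator):
        # 'index' is Spark's own partition number: unique and stable within this RDD.
        tag = "part_%05d" % index
        for record in iterator:
            yield (tag, record)

    print(rdd.mapPartitionsWithIndex(tag_partition).take(3))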

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-26 Thread Chawla,Sumit
will also go through >> mesos, it's better to tune it lower, otherwise mesos could become the >> bottleneck. >> >> spark.task.maxDirectResultSize >> >> On Mon, Dec 19, 2016 at 3:23 PM, Chawla,Sumit >> wrote: >> > Tim, >> > >> &
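
For reference, the property mentioned above can be set like this (the value is purely illustrative):

    from pyspark import SparkConf

    conf = (SparkConf()
            # Task results below this size are returned directly in the task status
            # update; per the advice above, keeping it low avoids pushing large
            # results through Mesos.
            .set("spark.task.maxDirectResultSize", "131072"))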

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
g around solved in Dynamic > >> Resource Allocation? Is there some timeout after which Idle executors > can > >> just shutdown and cleanup its resources. > > > > Yes, that's exactly what dynamic allocation does. But again I have no > idea > > what t
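
The dynamic allocation knobs being referred to look roughly like this (a sketch; values are illustrative):

    from pyspark import SparkConf

    conf = (SparkConf()
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.shuffle.service.enabled", "true")  # required by dynamic allocation
            # Executors idle for longer than this are released back to the cluster.
            .set("spark.dynamicAllocation.executorIdleTimeout", "60s"))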

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
lob/v1.6.3/docs/running-on-mesos.md) > and spark.task.cpus (https://github.com/apache/spark/blob/v1.6.3/docs/ > configuration.md) > > On Mon, Dec 19, 2016 at 12:09 PM, Chawla,Sumit > wrote: > >> Ah thanks. looks like i skipped reading this *"Neither will executors >>
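
The two properties referenced above combine roughly like this (illustrative values):

    from pyspark import SparkConf

    conf = (SparkConf()
            # Cores reserved for the fine-grained Mesos executor itself on each host.
            .set("spark.mesos.mesosExecutor.cores", "0.1")
            # Cores requested per task, on top of the executor's own share.
            .set("spark.task.cpus", "1"))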

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
Dec 19, 2016 at 11:26 AM, Timothy Chen wrote: > >> Hi Chawla, >> >> One possible reason is that Mesos fine grain mode also takes up cores >> to run the executor per host, so if you have 20 agents running Fine >> grained executor it will take up 20 cores while it

Re: Mesos Spark Fine Grained Execution - CPU count

2016-12-19 Thread Chawla,Sumit
ng the > fine-grained scheduler, and no one seemed too dead-set on keeping it. I'd > recommend you move over to coarse-grained. > > On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit > wrote: > >> Hi >> >> I am using Spark 1.6. I have one query about Fine Grained
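
Switching a Spark 1.6 job to coarse-grained mode is essentially one property, plus the usual sizing knobs; a hedged sketch with illustrative values:

    from pyspark import SparkConf

    conf = (SparkConf()
            .set("spark.mesos.coarse", "true")    # use the coarse-grained Mesos scheduler
            .set("spark.cores.max", "48")         # cap total cores across the cluster
            .set("spark.executor.memory", "4g"))  # per-executor memory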

Mesos Spark Fine Grained Execution - CPU count

2016-12-16 Thread Chawla,Sumit
Hi I am using Spark 1.6. I have one query about the Fine Grained mode in Spark. I have a simple Spark application which transforms A -> B. It's a single-stage application. To begin, the program starts with 48 partitions. When the program starts running, the Mesos UI shows 48 tasks and 48 CPUs a

Re: Output Side Effects for different chain of operations

2016-12-15 Thread Chawla,Sumit
n that you can run > arbitrary code after all. > > > On Thu, Dec 15, 2016 at 11:33 AM, Chawla,Sumit > wrote: > >> Any suggestions on this one? >> >> Regards >> Sumit Chawla >> >> >> On Tue, Dec 13, 2016 at 8:31 AM, Chawla,Sumit >> w

Re: Output Side Effects for different chain of operations

2016-12-15 Thread Chawla,Sumit
Any suggestions on this one? Regards Sumit Chawla On Tue, Dec 13, 2016 at 8:31 AM, Chawla,Sumit wrote: > Hi All > > I have a workflow with different steps in my program. Let's say these are > steps A, B, C, D. Step B produces some temp files on each executor node. > How can I a

Output Side Effects for different chain of operations

2016-12-13 Thread Chawla,Sumit
Hi All I have a workflow with different steps in my program. Let's say these are steps A, B, C, D. Step B produces some temp files on each executor node. How can I add another step E which consumes these files? I understand the easiest choice is to copy all these temp files to any shared locatio
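
One pattern (a sketch only, with placeholder step logic) is to run B and E inside the same mapPartitions call, so each partition's temp files are written and consumed on the same executor and never have to reach a shared location:

    import os
    import tempfile
    from pyspark import SparkContext

    sc = SparkContext(appName="temp-file-side-effects")
    rdd = sc.parallelize(range(100), 4)  # stand-in for the output of step A

    def step_b_then_e(records):
        # Step B: write this partition's temp file on the executor's local disk.
        path = os.path.join(tempfile.mkdtemp(prefix="step_b_"), "data.txt")
        with open(path, "w") as out:
            for r in records:
                out.write("%s\n" % r)
        # Step E: consume the file immediately, while still on the same executor.
        with open(path) as f:
            yield (path, sum(1 for _ in f))

    print(rdd.mapPartitions(step_b_then_e).collect())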