Thanks everyone. As Nathan suggested, I ended up collecting the distinct
keys first and then assigning an ID to each key explicitly.
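A minimal sketch of that approach (the sample data and variable names below
are illustrative, not the code from the actual job):

    from pyspark import SparkContext

    sc = SparkContext()
    rdd = sc.parallelize([('a', 5), ('d', 8), ('b', 6), ('c', 3)])  # illustrative data

    # Collect the distinct keys and assign each one an explicit id.
    distinct_keys = rdd.keys().distinct().collect()
    key_to_id = {k: i for i, k in enumerate(distinct_keys)}

    # One partition per key, routed by the explicit id instead of the default
    # hash, so no two keys can collide into the same partition.
    partitioned = rdd.partitionBy(len(key_to_id), lambda k: key_to_id[k])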
Regards
Sumit Chawla
On Fri, Jun 22, 2018 at 7:29 AM, Nathan Kronenfeld <
nkronenfeld@uncharted.software> wrote:
> On Thu, Jun 21, 2018 at 4:51 PM, Cha
Based on a read of the code, it looks like Spark takes a modulo of the key's
hash to pick the partition. The keys c and b end up pointing to the same
value. What's the best partitioning scheme to deal with this?
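For reference, PySpark's default partitioner buckets a pair by
portable_hash(key) % numPartitions, so unrelated keys can land in the same
bucket. A quick diagnostic sketch (not from the original thread) to see where
each key would go:

    from pyspark.rdd import portable_hash

    num_partitions = 3  # illustrative value
    for key in ['a', 'b', 'c', 'd']:
        # On Python 3, PYTHONHASHSEED must be set for portable_hash on strings.
        print(key, portable_hash(key) % num_partitions)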
Regards
Sumit Chawla
On Thu, Jun 21, 2018 at 4:51 PM, Chawla,Sumit
wrote:
> Hi
>
> I have been trying to th
Hi
I have been trying to do this simple operation. I want to land all values
with one key in the same partition, and not have any other key in the same
partition. Is this possible? Keys b and c always end up mixed together in
the same partition.
rdd = sc.parallelize([('a', 5), ('d', 8), ('b
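Assuming a similar small pair RDD and an active SparkContext sc, glom() is a
handy way to see which keys actually end up in the same partition (a
diagnostic sketch, not part of the original message):

    rdd = sc.parallelize([('a', 5), ('d', 8), ('b', 6), ('c', 3)])  # illustrative data
    # glom() returns each partition's contents as a list, so the grouping is visible.
    print(rdd.partitionBy(4).glom().collect())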
Hi All
I have a job which processes a large dataset. All items in the dataset are
unrelated. To save on cluster resources, I process these items in
chunks. Since the chunks are independent of each other, I start and shut down
the Spark context for each chunk. This allows me to keep the DAG smaller a
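A rough sketch of that per-chunk pattern (the paths, the transform, and the
chunk list below are placeholders, not the real job):

    from pyspark import SparkConf, SparkContext

    def process_chunk(input_path, output_path):
        # A fresh context per chunk keeps each chunk's DAG small and releases
        # cluster resources between chunks.
        sc = SparkContext(conf=SparkConf().setAppName("chunk-job"))
        try:
            sc.textFile(input_path) \
              .map(lambda line: line.upper()) \
              .saveAsTextFile(output_path)  # stand-in transform and sink
        finally:
            sc.stop()

    # Illustrative chunk list; the real paths come from the job's own bookkeeping.
    chunks = [("hdfs:///data/chunk-000", "hdfs:///out/chunk-000"),
              ("hdfs:///data/chunk-001", "hdfs:///out/chunk-001")]
    for inp, out in chunks:
        process_chunk(inp, out)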
ly the same thing. The default for that setting is
>> 1h instead of 0. It’s better to have a non-zero default to avoid what
>> you’re seeing.
>>
>> rb
>>
>>
>> On Fri, Apr 21, 2017 at 1:32 PM, Chawla,Sumit
>> wrote:
>>
>>> I am seeing a
I am seeing a strange issue. I had a badly behaving slave that failed the
entire job. I have set spark.task.maxFailures to 8 for my job. It seems like
all task retries happen on the same slave after a failure. My expectation was
that a task would be retried on a different slave after a failure, and
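For what it's worth, later Spark releases expose blacklist settings that keep
a retry from being scheduled back onto the executor or host where it already
failed. A sketch of how those might be combined with maxFailures (the
blacklist config names are from Spark 2.1+, so this is an assumption about
the version in use, and the values are illustrative):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("retry-on-a-different-host")
            .set("spark.task.maxFailures", "8")
            # Blacklist settings (Spark 2.1+): after one failure of a task on an
            # executor/node, schedule its retries elsewhere.
            .set("spark.blacklist.enabled", "true")
            .set("spark.blacklist.task.maxTaskAttemptsPerExecutor", "1")
            .set("spark.blacklist.task.maxTaskAttemptsPerNode", "1"))
    sc = SparkContext(conf=conf)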
Hi All
I have an RDD, which I partition based on some key, and then call sc.runJob
for each partition.
Inside this function, I assign each partition a unique key using the following:
"%s_%s" % (id(part), int(round(time.time())))
This is to make sure that each partition produces separate bookkeeping st
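If the goal is just a stable, unique bookkeeping key per partition, the
partition index is available directly via mapPartitionsWithIndex; a hedged
alternative sketch (not the original code, and rdd here stands in for the
partitioned RDD in the job):

    import time

    def tag_partition(index, iterator):
        # The partition index is unique within the job; the timestamp separates
        # runs, mirroring the "%s_%s" bookkeeping key above.
        partition_key = "%s_%s" % (index, int(round(time.time())))
        for record in iterator:
            yield (partition_key, record)

    tagged = rdd.mapPartitionsWithIndex(tag_partition)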
will also go through
>> mesos, it's better to tune it lower, otherwise mesos could become the
>> bottleneck.
>>
>> spark.task.maxDirectResultSize
>>
>> On Mon, Dec 19, 2016 at 3:23 PM, Chawla,Sumit
>> wrote:
>> > Tim,
>> >
>> >
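For context, spark.task.maxDirectResultSize bounds how large a task result
can be before Spark returns it through the block manager instead of inline
with the task status update (the part that travels through Mesos in
fine-grained mode). A sketch of tuning it lower, in versions that expose this
setting, with an illustrative value:

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            # Results under this size are sent back inline with the status update;
            # larger ones go through the block manager. 128 KB is illustrative.
            .set("spark.task.maxDirectResultSize", str(128 * 1024)))
    sc = SparkContext(conf=conf)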
g around solved in Dynamic
> >> Resource Allocation? Is there some timeout after which idle executors can
> >> just shut down and clean up their resources.
> >
> > Yes, that's exactly what dynamic allocation does. But again I have no idea
> > what t
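The idle timeout the reply alludes to is configurable; a sketch of the
dynamic allocation settings involved (values are illustrative, and the
external shuffle service must be available on the cluster):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.shuffle.service.enabled", "true")  # required by dynamic allocation
            # Executors with no running tasks are released after this timeout.
            .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
            .set("spark.dynamicAllocation.minExecutors", "0"))
    sc = SparkContext(conf=conf)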
lob/v1.6.3/docs/running-on-mesos.md)
> and spark.task.cpus (https://github.com/apache/spark/blob/v1.6.3/docs/
> configuration.md)
>
> On Mon, Dec 19, 2016 at 12:09 PM, Chawla,Sumit
> wrote:
>
>> Ah, thanks. Looks like I skipped reading this: *"Neither will executors
>>
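Those two settings together bound concurrency: roughly, concurrent tasks =
granted cores / spark.task.cpus. An illustrative sketch (the values are made
up):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .set("spark.cores.max", "48")  # cap on total cores the app may acquire
            .set("spark.task.cpus", "1"))  # cores reserved per task
    sc = SparkContext(conf=conf)
    # With these values, at most 48 / 1 = 48 tasks can run at once.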
Dec 19, 2016 at 11:26 AM, Timothy Chen wrote:
>
>> Hi Chawla,
>>
>> One possible reason is that Mesos fine-grained mode also takes up cores
>> to run the executor on each host, so if you have 20 agents running
>> fine-grained executors, that alone takes up 20 cores while it
ng the
> fine-grained scheduler, and no one seemed too dead-set on keeping it. I'd
> recommend you move over to coarse-grained.
>
> On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit
> wrote:
>
>> Hi
>>
>> I am using Spark 1.6. I have one query about Fine Grained
Hi
I am using Spark 1.6. I have one query about the fine-grained model in Spark.
I have a simple Spark application which transforms A -> B. It's a
single-stage application. The program starts with 48 partitions.
When it starts running, the Mesos UI shows 48 tasks and 48 CPUs
a
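Following the earlier recommendation to move off the fine-grained scheduler,
the switch is a single setting; a sketch (the master URL is a placeholder):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("mesos://zk://host:2181/mesos")  # placeholder master URL
            # Coarse-grained mode: long-running executors per node instead of
            # one Mesos task per Spark task.
            .set("spark.mesos.coarse", "true")
            .set("spark.cores.max", "48"))  # illustrative cap on total cores
    sc = SparkContext(conf=conf)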
n that you can run
> arbitrary code after all.
>
>
> On Thu, Dec 15, 2016 at 11:33 AM, Chawla,Sumit
> wrote:
>
>> Any suggestions on this one?
>>
>> Regards
>> Sumit Chawla
>>
>>
>> On Tue, Dec 13, 2016 at 8:31 AM, Chawla,Sumit
>> w
Any suggestions on this one?
Regards
Sumit Chawla
On Tue, Dec 13, 2016 at 8:31 AM, Chawla,Sumit
wrote:
> Hi All
>
> I have a workflow with different steps in my program. Let's say these are
> steps A, B, C, D. Step B produces some temp files on each executor node.
> How can I a
Hi All
I have a workflow with different steps in my program. Let's say these are
steps A, B, C, D. Step B produces some temp files on each executor node.
How can I add another step E which consumes these files?
I understand the easiest choice is to copy all these temp files to any
shared locatio
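A rough sketch of the shared-location idea, assuming the intermediate data
can go to something like HDFS rather than each executor's local disk (rdd_a,
transform_b, and the paths are placeholders):

    # Step B: write each partition's intermediate output to shared storage,
    # keyed by partition index, instead of leaving temp files on local disk.
    def step_b(index, iterator):
        produced = [transform_b(rec) for rec in iterator]  # transform_b is a placeholder
        yield (index, produced)

    rdd_a.mapPartitionsWithIndex(step_b) \
         .saveAsPickleFile("hdfs:///tmp/job/step_b_output")  # illustrative path

    # Step E: any executor can read the intermediate data back from the shared path.
    step_e_input = sc.pickleFile("hdfs:///tmp/job/step_b_output")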