Re: [Spark][Scheduler] Spark DAGScheduler scheduling performance hindered on JobSubmitted Event

2018-03-06 Thread Reynold Xin
It's mostly just hash maps from some ids to some state, and those can be replaced just with concurrent hash maps? (I haven't actually looked at code and am just guessing based on recollection.) On Tue, Mar 6, 2018 at 10:42 AM, Shivaram Venkataraman < shiva...@eecs.berkeley.edu> wrote: > The prob…
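The idea sketched in this reply can be illustrated with a small example. This is a hypothetical sketch only: the class and field names (`SchedulerState`, `jobIdToState`) are invented for illustration and are not actual DAGScheduler fields; it just shows how id-to-state maps backed by `ConcurrentHashMap` tolerate access from multiple threads without the event loop's single-thread guarantee.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Spark code: id -> state bookkeeping kept in a
// ConcurrentHashMap, so it can be read and updated safely from threads
// other than the scheduler's event-loop thread.
class SchedulerState {
    private final Map<Integer, String> jobIdToState = new ConcurrentHashMap<>();

    void submit(int jobId) {
        // putIfAbsent is atomic: concurrent submitters cannot clobber each other.
        jobIdToState.putIfAbsent(jobId, "SUBMITTED");
    }

    void markRunning(int jobId) {
        // replace(k, old, new) only succeeds if the job is still in the expected state.
        jobIdToState.replace(jobId, "SUBMITTED", "RUNNING");
    }

    String stateOf(int jobId) {
        return jobIdToState.getOrDefault(jobId, "UNKNOWN");
    }

    public static void main(String[] args) {
        SchedulerState s = new SchedulerState();
        s.submit(1);
        s.markRunning(1);
        System.out.println(s.stateOf(1)); // prints RUNNING
    }
}
```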

Re: [Spark][Scheduler] Spark DAGScheduler scheduling performance hindered on JobSubmitted Event

2018-03-06 Thread Shivaram Venkataraman
The problem with doing work in the callsite thread is that there are a number of data structures that are updated during job submission, and these data structures are guarded by the event loop ensuring only one thread accesses them. I don't think there is a very easy fix for this given the structure…
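The invariant described here can be sketched with a minimal example. This is an illustrative pattern, not Spark's actual `EventLoop` implementation: plain non-thread-safe maps stay correct only because every access is funneled through a single-threaded executor, which is exactly what moving work onto the call-site thread would bypass.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch of the invariant described in this thread: the map is
// NOT thread-safe on its own; it is guarded by routing every access through
// one single-threaded event loop, so exactly one thread ever touches it.
class EventLoopGuard {
    private final Map<Integer, String> jobIdToState = new HashMap<>();
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();

    void post(int jobId, String state) {
        // All writes are posted to the one event-loop thread.
        eventLoop.submit(() -> jobIdToState.put(jobId, state));
    }

    String stateOf(int jobId) {
        // Reads go through the same thread; single-thread FIFO ordering
        // guarantees this sees any previously posted write.
        try {
            return eventLoop.submit(() -> jobIdToState.get(jobId)).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    void shutdown() {
        eventLoop.shutdown();
    }

    public static void main(String[] args) {
        EventLoopGuard g = new EventLoopGuard();
        g.post(1, "SUBMITTED");
        System.out.println(g.stateOf(1)); // prints SUBMITTED
        g.shutdown();
    }
}
```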

Re: Silencing messages from Ivy when calling spark-submit

2018-03-06 Thread Bryan Cutler
Cool, hopefully it will work. I don't know what setting that would be though, but it seems like it might be somewhere under here http://ant.apache.org/ivy/history/latest-milestone/settings/outputters.html. It's pretty difficult to sort through the docs, and I often found myself looking at the sour…

Re: [Spark][Scheduler] Spark DAGScheduler scheduling performance hindered on JobSubmitted Event

2018-03-06 Thread Ryan Blue
I agree with Reynold. We don't need to use a separate pool, which would have the problem you raised about FIFO. We just need to do the planning outside of the scheduler loop. The call site thread sounds like a reasonable place to me. On Mon, Mar 5, 2018 at 12:56 PM, Reynold Xin wrote: > Rather t…
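The proposal in this reply can be sketched as follows. All names (`PlanOutsideLoop`, `PlannedJob`, `expensivePlanning`) are invented for illustration and are not Spark APIs: the expensive planning runs on the caller's thread, and only a lightweight, already-planned event is handed to the scheduler loop, which then does cheap bookkeeping.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch of "do the planning outside the scheduler loop":
// the heavy work happens on the call-site thread, and the event loop
// receives only a small, pre-computed event.
class PlanOutsideLoop {
    static class PlannedJob {
        final int jobId;
        final String plan;
        PlannedJob(int jobId, String plan) {
            this.jobId = jobId;
            this.plan = plan;
        }
    }

    private final Queue<PlannedJob> events = new ConcurrentLinkedQueue<>();

    // Runs on the call-site thread: the slow part.
    void submitJob(int jobId) {
        String plan = expensivePlanning(jobId); // heavy work, off the loop
        events.add(new PlannedJob(jobId, plan)); // cheap handoff to the loop
    }

    private String expensivePlanning(int jobId) {
        return "stages-for-job-" + jobId; // stand-in for real stage planning
    }

    // Runs on the event-loop thread: now just bookkeeping.
    PlannedJob takeNextEvent() {
        return events.poll();
    }

    public static void main(String[] args) {
        PlanOutsideLoop p = new PlanOutsideLoop();
        p.submitJob(42);
        System.out.println(p.takeNextEvent().plan); // prints stages-for-job-42
    }
}
```

One caveat, raised earlier in the thread and answered here: a separate thread pool would reorder submissions (breaking FIFO), whereas planning on the call-site thread keeps each caller's submission order intact.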