On Tue, Jun 9, 2015 at 11:31 AM, Matt Kapilevich <matve...@gmail.com> wrote:
> Like I mentioned earlier, I'm able to execute Hadoop jobs fine even now -
> this problem is specific to Spark.

That doesn't necessarily mean anything. Spark apps have different
resource requirements than Hadoop apps.

Check your RM logs for any line that mentions your Spark app id. That
may give you some insight into what's happening or not.

-- 
Marcelo
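
P.S. As a rough sketch of that log check (the log path and application
id below are just placeholders for your cluster, not anything official
from YARN or Spark), something along these lines would pull out the
matching lines from the ResourceManager log:

    import sys

    # Placeholder path to the ResourceManager log; adjust for your cluster.
    RM_LOG = "/var/log/hadoop-yarn/yarn-yarn-resourcemanager.log"
    # Placeholder Spark application id as reported by YARN.
    APP_ID = "application_1433865536065_0001"

    def main():
        try:
            with open(RM_LOG, "r", errors="replace") as f:
                for line in f:
                    # Print every RM log line that mentions the app id.
                    if APP_ID in line:
                        print(line.rstrip())
        except FileNotFoundError:
            sys.exit("Log file not found: " + RM_LOG)

    if __name__ == "__main__":
        main()

A plain grep for the app id over the RM log would do the same thing.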