Normally, you should be able to execute your Beam program directly from within your IDE. Running it that way automatically starts a local embedded cluster with the resources needed for the job.
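For reference, a minimal sketch of what that looks like with the Beam Flink runner is below. The class and option names (FlinkPipelineOptions, FlinkRunner, setFlinkMaster) come from the Beam Flink runner module but have shifted between Beam versions, so treat this as an illustration rather than a drop-in snippet:

    import org.apache.beam.runners.flink.FlinkPipelineOptions;
    import org.apache.beam.runners.flink.FlinkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;

    public class LocalFlinkRunnerExample {
      public static void main(String[] args) {
        FlinkPipelineOptions options = PipelineOptionsFactory.as(FlinkPipelineOptions.class);
        options.setRunner(FlinkRunner.class);
        // "[local]" runs against an embedded Flink mini cluster instead of
        // connecting to a remote JobManager.
        options.setFlinkMaster("[local]");
        options.setParallelism(1);

        Pipeline pipeline = Pipeline.create(options);
        // Trivial stand-in for a real source, just to have something to run.
        pipeline.apply(Create.of("hello", "beam"));
        pipeline.run();
      }
    }

Running the main method of such a class from the IDE should submit the job to the embedded cluster without packaging a jar first.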
Which Beam version are you using? Could you post some of the code you're executing?

-Max

On Sat, Oct 8, 2016 at 7:51 PM, Dayong <will...@gmail.com> wrote:
> You should be able to run it directly in the IDE. Make sure you have the right jar and the port open. You can try the examples shipped with the Flink source.
>
> Thanks,
> Dayong
>
> On Oct 8, 2016, at 1:35 PM, Abdul Salam Shaikh <abd.salam.sha...@gmail.com> wrote:
>
> Is it necessary to compile a Flink program into a jar and deploy it using the command line, or can we directly execute the main class in an IDE like NetBeans?
>
> While trying to execute from the IDE I always get the following exception:
>
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply$mcV$sp(JobManager.scala:563)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>     at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
> Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (Source: Custom Source (1/1)) @ (unassigned) - [SCHEDULED] > with groupID < 95b239d1777b2baf728645df9a1c4232 > in sharing group < SlotSharingGroup [772c9ff1cf0b6cb3a361e3352f75fcee, d4f856f13654f424d7c49d0f00f6ecca, 81bb8c4310faefe32f97ebd6baa4c04f, 95b239d1777b2baf728645df9a1c4232] >.
> Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0
>     at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
>     at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
>     at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
>     at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
>     at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
>     at org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:982)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
>     ... 8 more
>
>
> --
> Thanks & Regards,
>
> Abdul Salam Shaikh
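Regarding the NoResourceAvailableException in the quoted trace: when a job runs inside the IDE, the embedded mini cluster has to offer at least as many task slots as the highest operator parallelism in the job, and the trace shows zero instances and zero slots available. A hedged, plain-Flink sketch of making both values explicit is below; the configuration key and methods are Flink's, but the parallelism of 4 is just an example value, not something taken from your job:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LocalSlotsExample {
      public static void main(String[] args) throws Exception {
        // Give the embedded TaskManager enough slots for the chosen parallelism.
        Configuration conf = new Configuration();
        conf.setInteger("taskmanager.numberOfTaskSlots", 4);

        // createLocalEnvironment(parallelism, configuration) starts an embedded
        // cluster using the supplied configuration instead of the defaults.
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.createLocalEnvironment(4, conf);

        // Placeholder pipeline; replace with your actual sources and operators.
        env.fromElements(1, 2, 3).print();

        env.execute("local-slots-example");
      }
    }

Alternatively, lowering the job's parallelism to fit the default single slot (as the exception message suggests) should have the same effect.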