Re: Data processing with HDFS local or remote

2019-10-21 Thread Pritam Sadhukhan
> …y is
> supported by default.
>
> Thanks,
> Zhu Zhu
>
> Pritam Sadhukhan wrote on Mon, Oct 21, 2019 at 10:17 AM:
>
>> Hi Zhu Zhu,
>>
>> Thanks for your detailed answer.
>> Can you please help me to understand how the flink task processes the data
>> locally on the data nod…

Re: Submitting jobs via REST

2019-10-21 Thread Pritam Sadhukhan
> …file /tmp/flink-web-/flink-web-upload/ does not exist".
>
> It is looking for the jar in the tmp folder. Wonder if there is a way to
> change that so that it looks in the right folder.
>
> Thanks
>
> Tim
>
> On Sun, Oct 20, 2019, 7:55 AM Pritam Sadhukhan wrote: …
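For reference, the directory used for REST jar uploads is controlled by Flink's web.upload.dir option; when it is unset, Flink creates a flink-web-<uuid> directory under the system temp dir, which matches the path in the error above. A minimal flink-conf.yaml sketch (the path shown is an arbitrary example, not taken from the thread):

# flink-conf.yaml: where jars uploaded via the REST API are stored
web.upload.dir: /opt/flink/uploads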

Re: Unable to change job manager port when launching session cluster on Docker

2019-10-21 Thread Pritam Sadhukhan
The problem, as I understand it, is that port 8081 on your system is already in use, so you want to bind a different local port to the container's 8081. Please use "<host-port>:8081" in your docker compose file to map a free local port to the container port. Else, you may edit your /opt/flink/conf/flink-conf.yaml to ch…
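A minimal docker-compose sketch of that mapping (host port 8082 is an arbitrary example):

# docker-compose.yml fragment: bind host port 8082 to the container's 8081
services:
  jobmanager:
    image: flink:1.9
    command: jobmanager
    ports:
      - "8082:8081"

The Flink web UI would then be reachable at http://localhost:8082 on the host while the container keeps listening on 8081 internally.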

Re: Data processing with HDFS local or remote

2019-10-20 Thread Pritam Sadhukhan
> …st data can be processed locally.
>
> Thanks,
> Zhu Zhu
>
> Pritam Sadhukhan wrote on Fri, Oct 18, 2019 at 10:59 AM:
>
>> Hi,
>>
>> I am trying to process data stored on HDFS using flink batch jobs.
>> Our data is split across 16 data nodes.
>>
>> I am curious…

Re: Submitting jobs via REST

2019-10-20 Thread Pritam Sadhukhan
Hi Tim,

I have a similar scenario where I have embedded my jar within the image. I used the following command to submit the job:

curl -X POST http://localhost:8081/jars/<jar-id>.jar/run

with the request…
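A slightly fuller sketch of that call with a JSON request body; the jar id, entry class, and arguments below are placeholders (the jar id is the filename returned by POST /jars/upload, and GET /jars lists the uploaded jars):

curl -X POST http://localhost:8081/jars/<jar-id>.jar/run \
  -H "Content-Type: application/json" \
  -d '{
        "entryClass": "com.example.MyJob",
        "parallelism": 4,
        "programArgs": "--input hdfs:///data/in --output hdfs:///data/out"
      }'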

Data processing with HDFS local or remote

2019-10-17 Thread Pritam Sadhukhan
Hi,

I am trying to process data stored on HDFS using flink batch jobs. Our data is split across 16 data nodes.

I am curious to know how the data will be pulled from the data nodes with the parallelism set to the same number as the data splits on HDFS, i.e. 16. Is the flink task being executed locally on…
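A minimal sketch in Java of the batch setup being described, with the parallelism matched to the 16 HDFS splits; the paths and class name are hypothetical. Flink's batch input split assignment is locality-aware, so each split is preferentially handed to a subtask running on the node that holds the block:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class HdfsBatchSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // One subtask per HDFS split (16 here), so each split can be
        // read by a task on (or near) the data node storing it.
        env.setParallelism(16);

        DataSet<String> lines = env.readTextFile("hdfs://namenode:8020/data/input");

        lines.map(line -> line.toUpperCase())
             .returns(Types.STRING) // type hint for the lambda
             .writeAsText("hdfs://namenode:8020/data/output");

        env.execute("hdfs-batch-sketch");
    }
}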