Hi Eric,
> does the socket only get opened on the master node and then the stream is
> partitioned out to the worker nodes?
No, the sockets are opened on the workers. In the socket example, Flink
starts a source task on a worker node to ingest the data.
What's more, you can use the setParallelism() method to control how many
parallel instances of an operator are run.
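If you want to ingest several streams of the same type, one common pattern is to create one source per socket and merge them with union(). Here is a minimal sketch of that idea (the class name, hostnames, and ports are placeholders I made up, not details from this thread):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MultiSocketIngest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Each socketTextStream() creates its own source task, which runs
        // on a worker (TaskManager), not on the master (JobManager).
        DataStream<String> s1 = env.socketTextStream("host1", 9000);
        DataStream<String> s2 = env.socketTextStream("host2", 9000);

        // union() merges streams of the same type into a single DataStream.
        DataStream<String> all = s1.union(s2);

        // Downstream operators can run in parallel; the merged stream is
        // partitioned across their parallel instances.
        all.map(String::toUpperCase).setParallelism(4).print();

        env.execute("multi-socket ingest");
    }
}
```

Note that each socket source itself is non-parallel; setParallelism() applies to the downstream operators that process the merged stream.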
> If I have a standalone cluster running Flink, what is the best way to
> ingest multiple streams of the same type of data?
> For example, if I open a socket text stream, does the socket only get
> opened on the master node and then the stream is partitioned out to the
> worker nodes?
> DataStream<String> text = env.socketTextStream(hostname, port);