Hi,
the operation “stream.union(stream.map(id))” is equivalent to
“stream.union(stream)”, isn’t it? So it might also duplicate the data.
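To make the equivalence concrete with plain-Java stand-ins (this is a sketch of the semantics, not the Flink API): if union concatenates and map(id) leaves elements unchanged, both expressions yield every element twice.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class UnionDemo {
    // Stand-in for stream union: simple concatenation of both inputs.
    public static <T> List<T> union(List<T> a, List<T> b) {
        List<T> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    // Stand-in for stream map: apply f element-wise.
    public static <T, R> List<R> map(List<T> in, Function<T, R> f) {
        List<R> out = new ArrayList<>();
        for (T t : in) {
            out.add(f.apply(t));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> stream = List.of(1, 2, 3);
        // union(stream, map(stream, id)) duplicates the data,
        // exactly like union(stream, stream):
        System.out.println(union(stream, map(stream, Function.identity()))); // [1, 2, 3, 1, 2, 3]
        System.out.println(union(stream, stream));                           // [1, 2, 3, 1, 2, 3]
    }
}
```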
- Christoph
> On 25 Nov 2015, at 11:24, Stephan Ewen wrote:
>
> "stream.union(stream.map(..))" should definitely be possible. Not sure why
> this is not per
@Till: Yes, I’m running the job on cloud-11, or rather, I’m using the YARN
cluster and the flink-yarn package. I’m using flink-0.9-SNAPSHOT from the
following commit [1] together with Timo’s patch [2]. I’ll send you a separate
email with instructions on where to find the jars on cloud-11.
I might add that the error only occurs when running with the RemoteExecutor,
regardless of the number of TMs. Starting the job in IntelliJ with the
LocalExecutor with DOP 1 works just fine.
Best,
Christoph
On 28 Jan 2015, at 12:17, Bruecke, Christoph wrote:
> Hi Robert,
>
> thank
> JobManager?
> If your YARN cluster has log aggregation activated, you can retrieve the
> logs of a stopped YARN session using:
> yarn logs -applicationId
>
> Watch out for the jobmanager-main.log file (or similar).
>
> I suspect that there has been an exception on the JobManager.
>
>
Hi,
I have written a job that reads a SequenceFile from HDFS using the
Hadoop-Compatibility add-on. Doing so results in a TimeoutException. I’m using
flink-0.9-SNAPSHOT with PR 342 ( https://github.com/apache/flink/pull/342 ).
Furthermore, I’m running Flink on YARN with two TMs using
flink-yarn-