Github user rehevkor5 commented on the issue:
https://github.com/apache/flink/pull/2051
Hi, it's great to see that someone is working on this stuff!
I just wanted to put in my two cents, to provide a different perspective
that might change how you are thinking about this.
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2051
Good from my side.
Github user uce commented on the issue:
https://github.com/apache/flink/pull/2051
Now that we have forked off the 1.1 release branch, I would like to merge
this if there are no objections. There are not many changes to our current code
base and the follow-ups can be addressed until then.
Github user uce commented on the issue:
https://github.com/apache/flink/pull/2051
@soniclavier I think this is not configurable in Flink at the moment. The
client uses the `LeaderRetrievalService` to retrieve the job manager path.
@soumyasd I hope to merge this after the 1.1 release.
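For context, a rough sketch (not part of this PR) of how a client can obtain the JobManager path through the `LeaderRetrievalService`; the `JobManagerPathPrinter` and `RetrieveJobManagerPath` names are made up, and the `LeaderRetrievalUtils` factory method may differ between Flink versions:
```scala
import java.util.UUID

import org.apache.flink.configuration.Configuration
import org.apache.flink.runtime.leaderretrieval.{LeaderRetrievalListener, LeaderRetrievalService}
import org.apache.flink.runtime.util.LeaderRetrievalUtils

// Hypothetical listener that prints the JobManager actor path once a leader is known.
class JobManagerPathPrinter extends LeaderRetrievalListener {
  override def notifyLeaderAddress(leaderAddress: String, leaderSessionID: UUID): Unit = {
    // e.g. akka.tcp://flink@host:6123/user/jobmanager
    println(s"JobManager leader: $leaderAddress")
  }

  override def handleError(exception: Exception): Unit = exception.printStackTrace()
}

object RetrieveJobManagerPath {
  def main(args: Array[String]): Unit = {
    val config = new Configuration()
    // ZooKeeper-backed retrieval in HA setups, a standalone service otherwise.
    val service: LeaderRetrievalService = LeaderRetrievalUtils.createLeaderRetrievalService(config)
    service.start(new JobManagerPathPrinter)
  }
}
```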
Github user soumyasd commented on the issue:
https://github.com/apache/flink/pull/2051
Any idea which Flink version this feature is going live with?
Github user soniclavier commented on the issue:
https://github.com/apache/flink/pull/2051
One more question: is it possible to configure the JobManager actor path
that the client connects to? It looks like it defaults to
`akka://flink/user/jobmanager`.
In that way I can create a m
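For reference, a small sketch of how the local and remote JobManager actor paths are typically composed; it only assumes the standard `jobmanager.rpc.address`/`jobmanager.rpc.port` configuration keys and the usual Akka path layout, nothing specific to this PR:
```scala
import org.apache.flink.configuration.Configuration

object JobManagerActorPath {
  def main(args: Array[String]): Unit = {
    val config = new Configuration()
    // Standard Flink configuration keys for the JobManager RPC endpoint.
    val host = config.getString("jobmanager.rpc.address", "localhost")
    val port = config.getInteger("jobmanager.rpc.port", 6123)

    // Path within the JobManager's own ActorSystem vs. the remote path a client would use.
    val localPath  = "akka://flink/user/jobmanager"
    val remotePath = s"akka.tcp://flink@$host:$port/user/jobmanager"

    println(localPath)
    println(remotePath)
  }
}
```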
Github user soniclavier commented on the issue:
https://github.com/apache/flink/pull/2051
Sorry, the compilation error was because the Tuple2 was `scala.Tuple2`, not
Flink's Tuple2. Changing to `org.apache.flink.api.java.tuple.Tuple2` fixed the
issue.
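A minimal sketch of the corrected snippet, assuming the key type is `Tuple2[Long, String]` as in the earlier attempt; the `KeySerializerExample` object is only for illustration:
```scala
import org.apache.flink.api.common.typeinfo.{TypeHint, TypeInformation}
// Flink's Java Tuple2, not scala.Tuple2 -- importing the Scala tuple is what
// caused the compilation error.
import org.apache.flink.api.java.tuple.Tuple2

object KeySerializerExample {
  def main(args: Array[String]): Unit = {
    // java.lang.Long is used explicitly so the boxed field type survives in the
    // generic signature that Flink's type extraction reads.
    val typeHint = new TypeHint[Tuple2[java.lang.Long, String]]() {}
    // Passing null for the ExecutionConfig is fine for basic field types.
    val serializer = TypeInformation.of(typeHint).createSerializer(null)
    println(serializer)
  }
}
```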
Github user soniclavier commented on the issue:
https://github.com/apache/flink/pull/2051
Thanks Ufuk & Stephan for the reply,
I tried the serializers suggested by you:
```
val typeHint = new TypeHint[Tuple2[Long, String]]() {}
val serializer = TypeInformation.of(typeHint).createSerializer(null)
```
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/2051
A simpler way to get the serializer may be
```java
TypeInformation.of(new TypeHint<Tuple2<Long, String>>(){}).createSerializer(null);
```
Github user uce commented on the issue:
https://github.com/apache/flink/pull/2051
Regarding local vs. cluster mode: that's on purpose, but we can certainly
change that behaviour. For now, you would have to run in cluster mode.
Regarding the serializer: assuming that it is a F
Github user soniclavier commented on the issue:
https://github.com/apache/flink/pull/2051
Never mind, I was hitting it with the wrong key, it works now! Cheers.
Github user soniclavier commented on the issue:
https://github.com/apache/flink/pull/2051
Hi,
Continuing the discussion from the mailing list, I was able to go past the
NettyConfig problem once I ran Flink in cluster mode (I would still like to
know if there is a way to run it in local mode).