Hi Jürgen,
You can set the timeout in the configuration via the key
"akka.ask.timeout"; the current default value is 10 s. Hope this helps.
Cheers,
Zhijiang
---- Original message ----
From: Jürgen Thomann
Sent: Wednesday, 12 April 2017, 19:04
To: user
Subject:
Thanks Chesnay, this will work.
Best,
Tarandeep
On Wed, Apr 12, 2017 at 2:42 AM, Chesnay Schepler
wrote:
> Hello,
>
> What I can do is add a hook, like we do for the ClusterBuilder, with which you
> can provide a set of options that will
> be used for every call to the mapper. This would provide you access to all
> options that are listed on the page you linked.
Greetings,
Is there a means of maintaining a stream's partitioning after running it
through an operation such as map or filter?
I have a pipeline stage S that operates on a stream partitioned by an ID
field. S flat maps objects of type A to type B, which both have an "ID"
field, and where each in
I'm not sure whether it needs to be absolute, but you apparently gave it an
absolute file path (starting with "/") so that's where it is looking for the
file.
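If you do want a relative path instead, note that it is resolved against the JVM's working directory, which typically differs between an IDE run and a cluster deployment. A minimal, Flink-independent Java sketch (the file name is made up):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {
    public static void main(String[] args) {
        // "data/input.txt" is a hypothetical relative path.
        Path relative = Paths.get("data/input.txt");
        System.out.println(relative.isAbsolute()); // false

        // Resolved against the current working directory at runtime,
        // so the result depends on where the JVM was started.
        System.out.println(relative.toAbsolutePath());
    }
}
```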
Nico
On Wednesday, 12 April 2017 18:04:36 CEST Kaepke, Marc wrote:
> Hi Nico,
>
> so I need an absolute file path? Because I moved the
Hi Nico,
so I need an absolute file path? Because I moved the file into my project and
IntelliJ auto-completed the file path.
How can I use a relative file path? Because I work on two different systems.
Marc
> On 11 April 2017 at 10:19, Nico Kruber wrote:
>
> Hi Marc,
> the file path d
Niels, are you still facing this issue?
As far as I understood it, the security changes in Flink 1.2.0 use a new
Kerberos mechanism that allows infinite token renewal.
On Thu, Mar 17, 2016 at 7:30 AM, Maximilian Michels wrote:
> Hi Niels,
>
> Thanks for the feedback. As far as I know, Hadoop de
Hi,
We currently get the following exception if we cancel a job which writes
to Hadoop:
ERROR org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink
- Error while trying to hflushOrSync! java.io.InterruptedIOException:
Interrupted while waiting for data to be acknowledged by pipelin
Hello,
What I can do is add a hook, like we do for the ClusterBuilder, with which
you can provide a set of options that will
be used for every call to the mapper. This would give you access to all
options that are listed on the page
you linked.
You can find an implementation of this here:
h
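The hook idea described above might look roughly like this (a sketch only; `OptionsProvider` and `saveWithOptions` are made-up names to illustrate the pattern, not the actual connector API):

```java
import java.util.Arrays;
import java.util.List;

public class MapperHookSketch {

    /** User-supplied hook, analogous to the connector's ClusterBuilder. */
    interface OptionsProvider {
        List<String> getOptions();
    }

    /**
     * Stand-in for the sink's write path: in a real connector this would
     * forward the options to every mapper save call. Here it simply
     * returns the options it would have applied.
     */
    static List<String> saveWithOptions(Object entity, OptionsProvider hook) {
        return hook.getOptions();
    }

    public static void main(String[] args) {
        // The same fixed set of options is applied to every write,
        // without per-record wiring in user code.
        OptionsProvider hook = () -> Arrays.asList("saveNullFields(false)", "ttl(60)");
        System.out.println(saveWithOptions("record-1", hook));
        // prints [saveNullFields(false), ttl(60)]
    }
}
```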