On 2.02.2018 18:37, Ken Krugler wrote:
Hi Jürgen,
On Feb 2, 2018, at 6:24 AM, Jürgen Thomann <juergen.thom...@innogames.com> wrote:
Hi,
I'm currently using a ProcessFunction after a keyBy() and can't find
a way to get the key.
Doesn't your keyBy() take a field (position…
Hi,
I'm currently using a ProcessFunction after a keyBy() and can't find a
way to get the key. I'm currently storing it in a ValueState within
processElement and setting it on every element, so that I can access it
in onTimer(). Is there a better way to get the key? We are using Flink
1.3 at the moment…
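
A minimal sketch of the workaround described above, with illustrative
names (KeyCachingFunction, the Tuple2 input type, and the timer interval
are my assumptions, not from the thread): the key is cached in keyed
ValueState inside processElement(), and onTimer(), which fires in the
same keyed context, reads it back. As far as I know, Flink 1.3 does not
expose the current key on the ProcessFunction context (that only arrived
later with KeyedProcessFunction), so caching it like this is a common
pattern.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Caches the current key in keyed state so that onTimer() can read it.
    public class KeyCachingFunction
            extends ProcessFunction<Tuple2<String, Long>, String> {

        private transient ValueState<String> lastKey;

        @Override
        public void open(Configuration parameters) {
            lastKey = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("last-key", String.class));
        }

        @Override
        public void processElement(Tuple2<String, Long> value, Context ctx,
                                   Collector<String> out) throws Exception {
            // f0 is assumed to be the field used in the upstream keyBy(0).
            lastKey.update(value.f0);
            long now = ctx.timerService().currentProcessingTime();
            ctx.timerService().registerProcessingTimeTimer(now + 60_000L);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx,
                            Collector<String> out) throws Exception {
            // The timer fires in the keyed context it was registered in,
            // so this state holds exactly the key the timer belongs to.
            out.collect("timer fired for key " + lastKey.value());
        }
    }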
Can you use wget (curl will work as well)? You can find the taskmanagers with
    wget -O - http://localhost:8081/taskmanagers
and then
    wget -O - http://localhost:8081/taskmanagers/<id from the first request>
to see detailed JVM memory stats. In my example, localhost:8081 is the jobmanager.
Hi,
You can switch the JVM to G1GC with the following setting. In my example
it applies only to the taskmanager, but env.java.opts should work in the
same way.
env.java.opts.taskmanager: -XX:+UseG1GC
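
For reference, a sketch of the related flink-conf.yaml keys
(env.java.opts and env.java.opts.taskmanager are the documented keys; I
assume the jobmanager-specific variant behaves analogously). In practice
you would pick one of these, not all three:

    env.java.opts: -XX:+UseG1GC                # passed to all Flink JVMs
    env.java.opts.jobmanager: -XX:+UseG1GC     # jobmanager only
    env.java.opts.taskmanager: -XX:+UseG1GC    # taskmanager only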
…/streaming/connectors/fs/bucketing/BucketingSink.java
On 27. Apr 2017, at 10:27, Jürgen Thomann <juergen.thom...@innogames.com> wrote:
Hi,
I had some time ago problems with writing data to Hadoop with the
BucketingSink and losing data in case of cancel with savepoint because
the flush/sync command was interrupted. I tried changing Hadoop settings
as suggested but had no luck in the end and looked into the Flink code.
If I understand…
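
For context, a minimal sketch of how the sink under discussion is
typically wired up (the path, the source, and the bucketer choice are
illustrative, not from the thread); the data loss described above
concerns the flush/sync the sink performs on its in-progress part files
when the job is cancelled:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;

    public class BucketingSinkJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<String> events =
                    env.socketTextStream("localhost", 9999); // placeholder source

            BucketingSink<String> sink = new BucketingSink<>("hdfs:///data/events");
            sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"));
            sink.setBatchSize(128L * 1024L * 1024L); // roll part files at ~128 MB

            events.addSink(sink);
            env.execute("bucketing sink example");
        }
    }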
On …, 07:05, Zhijiang (wangzhijiang999) wrote:
Hi Jürgen,
You can set the timeout in the configuration with the key
"akka.ask.timeout"; the current default value is 10 s. Hope it can
help you.
cheers,
zhijiang
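
For example, in flink-conf.yaml (the value 60 s is arbitrary; the
space-separated unit matches the "10 s" default quoted above):

    akka.ask.timeout: 60 s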
Hi,
We currently get the following exception if we cancel a job which
writes to Hadoop:
ERROR org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink
  - Error while trying to hflushOrSync!
java.io.InterruptedIOException: Interrupted while waiting for data to be
acknowledged by pipeline…
Hi Robert,
Have you already had a chance to look at it? If you need more
information, just let me know.
Regards,
Jürgen
On 12.10.2016 21:12, Jürgen Thomann wrote:
Hi Robert,
Thanks for your suggestions. We are using the DataStream API and I
tried it with disabling it completely, but that…
…more serialization overhead and network traffic).
If my suggestions don't help, can you post a screenshot of your job
plan (from the web interface) here, so that we can see what operations
you are performing?
Regards,
Robert
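
The truncated lines above hide the exact suggestions, so the following
is only a guess at the DataStream knobs such threads usually discuss:
disabling operator chaining and putting a heavy operator into its own
slot sharing group, both of which spread work at the cost of extra
serialization and network traffic. All names are illustrative:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SpreadLoadJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.disableOperatorChaining(); // option 1: no chaining anywhere

            env.socketTextStream("localhost", 9999) // placeholder source
                    .map(new MapFunction<String, String>() {
                        @Override
                        public String map(String s) {
                            return s.toUpperCase();
                        }
                    })
                    .disableChaining()          // option 2: per-operator only
                    .slotSharingGroup("heavy")  // isolate into its own slots
                    .print();

            env.execute("spread load example");
        }
    }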
On Wed, Oct 12, 2016 at 12:52 PM, Jürgen Thomann wrote:
Hi,
we currently have an issue with Flink: it allocates many tasks to the
same task manager and as a result overloads it. I reduced the number of
task slots per task manager (keeping the CPU count) and added some more
servers, but that did not help to distribute the load.
Is there some…
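
For reference, the slots-per-taskmanager knob mentioned above is set in
flink-conf.yaml (the value 4 is an arbitrary example):

    taskmanager.numberOfTaskSlots: 4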