Thanks for opening this ticket; I will watch it.
Flink does not handle OOM errors specially. I remember we discussed a
similar issue before, but I forget what the conclusion was, or whether
there were other concerns about it.
I am not sure whether it is worth fixing atm; maybe Till or Chesnay could
give a…
Hi Zhijiang
Thank you for your analysis; I agree with it. The solution may be to let the
TM exit, as you mentioned, when any type of OOM occurs, because Flink has
no control over a TM once an OOM occurs.
I filed a JIRA before: https://issues.apache.org/jira/browse/FLINK-12889.
Don't know if it is worth…
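For reference, a minimal sketch of the "let the TM exit on OOM" workaround,
assuming a HotSpot JVM (the flag exists since JDK 8u92) and a deployment
where the resource manager restarts failed containers:

    # flink-conf.yaml: pass a JVM option to the JobManager and TaskManager
    # processes so the whole JVM exits immediately on any OutOfMemoryError,
    # letting the cluster framework restart the TaskManager in a clean state.
    env.java.opts: "-XX:+ExitOnOutOfMemoryError"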
Hi Syed
You could use 'mvn clean package -pl :flink-streaming-java_2.11 -DskipTests
-am' to build the flink-streaming-java and flink-runtime modules. If the
'already built binary' means the flink-dist-*.jar package, the former mvn
command would not update that dist jar. As far as I know, a q…
You need to specify flink-dist in -pl; the flink-dist module builds the
Flink binary distribution. A combined command is sketched below.
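Putting the two replies together, a single command along these lines should
rebuild the modified module and refresh the distribution jar; the exact
module names and the _2.11 Scala suffix are assumptions based on the build
layout of that era:

    mvn clean package -pl :flink-streaming-java_2.11,:flink-dist_2.11 -DskipTests -am

The -am (also-make) flag builds the required upstream modules, so
flink-runtime and the modified flink-streaming-java are compiled before
flink-dist repackages the binary distribution.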
syed wrote on Tue, Jun 25, 2019 at 9:14 AM:
> Hi;
> I am trying to modify some core functionalities of Flink for my thorough
> understanding of Flink. I have already built Flink from source; now I am
> looking…
Hi,
I am using Flink version 1.7.2 and am trying to use an S3-like object
storage, EMC ECS
(https://www.emc.com/techpubs/ecs/ecs_s3_supported_features-1.htm). Not
all S3 APIs are supported by EMC ECS according to this document. Here
is my config:
s3.endpoint: SU73ECSG1P1d.***.COM
s3.access-key: vdna_n
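A fuller flink-conf.yaml sketch for a custom endpoint with the
flink-s3-fs-hadoop filesystem might look like the following; the secret-key
entry and the placeholder values are assumptions:

    s3.endpoint: SU73ECSG1P1d.***.COM
    s3.access-key: vdna_n...             # truncated in the original message
    s3.secret-key: <your-secret-key>     # assumed: required alongside the access key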
Hi;
I am trying to modify some core functionalities of Flink for my thorough
understanding of Flink. I have already built Flink from source; now I am
looking to build only the few modules I have modified. Is this possible,
or do I have to build Flink in full (all modules) every time? As it t…
Hi Ken,
Thanks for reaching out. I created a compliant bucket with the name
aip-featuretoolkit. I now get the exception "Unable to execute HTTP
request: aip-featuretoolkit.SU73ECSG1P1d.***.COM: Name or service not
known" from
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.Invoker…
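The hostname in that exception (the bucket name prepended to the endpoint)
suggests the S3A client is using virtual-host-style addressing, which many
S3-compatible stores such as ECS do not resolve. A common fix, offered here
as an assumption about this particular setup, is to force path-style access
in flink-conf.yaml:

    # forwarded by flink-s3-fs-hadoop to the underlying fs.s3a.path.style.access
    # setting, so requests go to endpoint/bucket/key instead of bucket.endpoint/key
    s3.path.style.access: true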
Dear Community,
I am using Flink (processing-time) timers along with a ProcessFunction.
What I would like to do is "postpone" previously registered timers for a
given key: I would like to do this because I might process plenty of events
in a row (think of it as a session), so that I will be able t…
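A minimal sketch of the timer-postponing pattern described above, assuming a
KeyedProcessFunction; the 30-second gap, the state name, and the Event type
are illustrative assumptions:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Event stands in for your own event type.
    public class PostponingTimers extends KeyedProcessFunction<String, Event, Event> {

        private transient ValueState<Long> pendingTimer;

        @Override
        public void open(Configuration parameters) {
            pendingTimer = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("pending-timer", Long.class));
        }

        @Override
        public void processElement(Event event, Context ctx, Collector<Event> out)
                throws Exception {
            Long previous = pendingTimer.value();
            if (previous != null) {
                // drop the previously registered timer for this key ...
                ctx.timerService().deleteProcessingTimeTimer(previous);
            }
            // ... and re-register it 30 seconds from now, effectively postponing it
            long postponed = ctx.timerService().currentProcessingTime() + 30_000L;
            ctx.timerService().registerProcessingTimeTimer(postponed);
            pendingTimer.update(postponed);
            out.collect(event);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) {
            pendingTimer.clear(); // no event arrived within the gap: the session is over
        }
    }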
Hi Shu Su,
the first point exactly pinpointed the issue I bumped into. I forgot to set
that dependency to "provided". Thank you!
On Mon, Jun 24, 2019 at 05:35, Shu Su wrote:
> Hi Andrea
>
> Actually it's caused by Flink's ClassLoader. It's because Flink uses
> parent ClassLoade…
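For anyone hitting the same thing, the fix usually looks like this in the
job's pom.xml; the artifact and version below are illustrative, adjust them
to your project:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.8.0</version>
        <!-- provided: these classes are already on Flink's parent classpath
             at runtime, so they must not be bundled into the user jar -->
        <scope>provided</scope>
    </dependency>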
I posted my related observation in a separate thread:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Does-making-synchronize-call-might-choke-the-whole-pipeline-tc28383.html
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
private static void doWork(long tid) throws InterruptedException {
    if (!sortedTid.contains(tid)) {
        sortedTid.add(tid);
    }
    // simulate a straggler: make the thread with the lowest tid a slow processor
    // (the original snippet is cut off here; the condition and sleep below are
    //  reconstructed from the comment above)
    if (tid == sortedTid.first()) {
        Thread.sleep(60_000L);
    }
}
Hi,
What kind of function do you use to implement the operator that has the
blocking call?
Did you have a look at the AsyncIO operator? It was designed for exactly
such use cases.
It issues multiple asynchronous requests to an external service and waits
for the response.
Best, Fabian
On Mon, Jun 24…
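A minimal sketch of the Async I/O operator Fabian mentions; the lookup call,
the types, and the timeout/capacity values are illustrative assumptions:

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class AsyncLookup extends RichAsyncFunction<Long, String> {

        @Override
        public void asyncInvoke(Long key, ResultFuture<String> resultFuture) {
            // run the blocking call on a separate pool so the task thread never blocks
            CompletableFuture
                    .supplyAsync(() -> blockingLookup(key))
                    .thenAccept(v -> resultFuture.complete(Collections.singleton(v)));
        }

        private String blockingLookup(Long key) {
            return "value-for-" + key; // stand-in for the real blocking client call
        }
    }

    // wiring, inside your pipeline: at most 100 requests in flight, 60 s timeout each
    DataStream<String> enriched =
            AsyncDataStream.unorderedWait(input, new AsyncLookup(), 60, TimeUnit.SECONDS, 100);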
Fabian,
Does the above stack trace look like a deadlock?
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539)
- locked <0x0007baf84040> (a java.util.ArrayDeque)
at org.apache.flink.runtime.io.netwo…
Fabian,
Thank you for replying.
If I understand your previous comment correctly, I set up a consumer with
parallelism 1 and connected a worker task with parallelism 2.
If worker thread one makes a blocking call and is stuck for 60s, the
consumer thread should continue fetching from the partition…
Ah, that's great!
Thanks for letting us know :-)
On Mon, Jun 24, 2019 at 11:33 AM, Wouter Zorgdrager
<w.d.zorgdra...@tudelft.nl> wrote:
> Hi Fabian,
>
> Thanks for your reply. I managed to resolve this issue. Actually, this
> behavior was not so unexpected; I messed up by using xStream as a 'base…
Hi Fabian,
Thanks for your reply. I managed to resolve this issue. Actually, this
behavior was not so unexpected; I messed up by using xStream as a 'base'
while I needed to use yStream as a 'base', i.e. yStream.element - 60 min <=
xStream.element <= yStream.element + 30 min. Interchanging both datastr…
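In code, the corrected relation maps onto Flink's interval join with yStream
as the base; the key selectors and element types below are illustrative
assumptions:

    // for each y, join every x with:
    //   y.timestamp - 60 min <= x.timestamp <= y.timestamp + 30 min
    yStream
            .keyBy(y -> y.key)
            .intervalJoin(xStream.keyBy(x -> x.key))
            .between(Time.minutes(-60), Time.minutes(30))
            .process(new ProcessJoinFunction<YEvent, XEvent, Joined>() {
                @Override
                public void processElement(YEvent left, XEvent right, Context ctx,
                        Collector<Joined> out) {
                    out.collect(new Joined(left, right)); // assumed result type
                }
            });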
Hi Wouter,
Not sure what is going wrong there, but something you could try is to
use a custom watermark assigner that always returns a watermark of 0.
When the source has finished serving events, it emits a final
Long.MAX_VALUE watermark.
Hence the join should consume all events and store th…
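A minimal sketch of such an assigner, on the pre-1.11 periodic-watermark API
used by the Flink versions in this thread; the element type is an assumption:

    import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
    import org.apache.flink.streaming.api.watermark.Watermark;

    public class ZeroWatermarkAssigner<T> implements AssignerWithPeriodicWatermarks<T> {

        @Override
        public Watermark getCurrentWatermark() {
            return new Watermark(0L); // pin the watermark at 0 for the whole run
        }

        @Override
        public long extractTimestamp(T element, long previousElementTimestamp) {
            return previousElementTimestamp; // keep already-assigned timestamps
        }
    }

    // Flink itself emits the final Long.MAX_VALUE watermark when the source
    // finishes, which then releases everything the join has buffered.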
Hi Ben,
Flink's Kafka consumers track their progress independently of any worker.
They keep track of the reading offsets themselves (committing progress
to Kafka is optional and only necessary to have progress monitoring in
Kafka's metrics).
As soon as a consumer reads and forwards an event, it i…
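A minimal sketch of the optional committing described above, assuming a
StreamExecutionEnvironment env; the topic, schema, and properties are
illustrative assumptions:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    Properties kafkaProps = new Properties();
    kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
    kafkaProps.setProperty("group.id", "my-job"); // used only for committed offsets/metrics

    FlinkKafkaConsumer<String> consumer =
            new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), kafkaProps);
    // offsets live in Flink's checkpointed state either way; this flag only controls
    // whether they are additionally committed back to Kafka on each checkpoint
    consumer.setCommitOffsetsOnCheckpoints(true);

    DataStream<String> stream = env.addSource(consumer);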