Hi Oytun,
I think there is a regression introduced in 1.8 in how we handle output
tags. The problem is that we do not call the ClosureCleaner on OutputTag.
There are two ways to work around this issue:
1. Declare the OutputTag static
2. Clean the closure explicitly as Guowei suggested:
StreamExe
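The failure mode can be reproduced without Flink at all. Below is a plain-JDK sketch (the `Tag` and `JobBuilder` classes are hypothetical stand-ins, not Flink types) showing why an anonymous subclass created inside a non-static context drags a reference to its non-serializable enclosing instance along, while one created in a static context serializes fine:

```java
import java.io.*;

public class ClosureCaptureDemo {
    // Stand-in for Flink's OutputTag: serializable and meant to be
    // subclassed anonymously (OutputTag does this to capture generic types).
    static class Tag implements Serializable {}

    // Non-serializable "enclosing" object, like a job-building class.
    static class JobBuilder {
        Tag makeTag() {
            // The anonymous subclass silently captures `this`,
            // i.e. the non-serializable JobBuilder instance.
            return new Tag() {};
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Fails: the tag's hidden outer reference is not serializable.
        try {
            serialize(new JobBuilder().makeTag());
            System.out.println("unexpected: serialized");
        } catch (NotSerializableException e) {
            System.out.println("failed as expected: " + e.getMessage());
        }
        // Works: an instance with no captured enclosing object.
        serialize(new Tag());
        System.out.println("plain tag serialized OK");
    }
}
```

In Flink terms, option 1 avoids the hidden outer reference entirely, and option 2 removes it explicitly via the ClosureCleaner before the tag is shipped to the cluster.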
Hi, Ning
From the log message you gave, the two operators share the same directory, and
when taking a snapshot, the directory will be deleted first if it
exists (RocksIncrementalSnapshotStrategy#prepareLocalSnapshotDirectory).
I did not find a JIRA issue for this problem, and I don't think this is a problem
Hi Kant,
I'm afraid Konstantin is right. Unfortunately AFAIK there is no active
development on that issue.
Best,
Dawid
On 22/04/2019 18:20, Konstantin Knauf wrote:
> Hi Kant,
>
> as far as I know, no one is currently working on this. Dawid (cc)
> maybe knows more.
>
> Cheers,
>
> Konstantin
>
>
We have a Flink job using the RocksDB state backend. We found that the state of
one of the RichMapFunctions was not being saved in checkpoints or savepoints.
After some digging, it seems that two operators in the same operator chain are
colliding with each other during checkpoints or savepoints, resulting in one
Hi Xixi,
I'm not sure I have understood correctly: do you mean you would like to set
the "kinesis consumer" name?
Does the API "StreamExecutionEnvironment.addSource($KinesisConsumer,
$sourceName)" satisfy this?
Xixi Li wrote on Tue, Apr 23, 2019 at 3:25 AM:
>
> Hi
>
> I have a question about how we can set up
Thanks. We found the relevant NodeManager log and figured out that the lost task
manager was killed by YARN due to the memory limit. @zhijiang
@Biao Liu, thanks for your
help.
On Sun, Apr 21, 2019 at 11:45 PM zhijiang wrote:
> Hi Wenrui,
>
> I think you could trace the log of node manager which contains
Thanks, I feel I'm getting closer to the truth.
So parallelism is the cause? Say my parallelism is 2. Does that mean I get 2
tasks running after `keyBy` even if all elements have the same key and so go to
one downstream task (say task 1)? And is it the other task (task 2), with no
incoming data, that caused
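For illustration: keyBy routes each record by a hash of its key, so identical keys always land on the same downstream subtask, and with parallelism 2 and a single key one subtask receives everything while the other sits idle. Flink's real assignment goes through murmur-hashed key groups; this plain-Java sketch only illustrates the routing idea, not Flink's actual function:

```java
import java.util.Arrays;
import java.util.List;

public class KeyByRoutingSketch {
    // Simplified stand-in for Flink's key-to-subtask assignment (Flink actually
    // hashes keys into key groups; this only captures the deterministic routing).
    static int subtaskFor(Object key, int parallelism) {
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 2;
        // Every element carries the same key "a".
        List<String> elements = Arrays.asList("a", "a", "a", "a");
        int[] counts = new int[parallelism];
        for (String e : elements) {
            counts[subtaskFor(e, parallelism)]++;
        }
        // Both subtasks exist, but one receives all records and the other is idle.
        System.out.println(Arrays.toString(counts));
    }
}
```

The same key always hashes to the same subtask index, so the skew is deterministic, not a scheduling accident.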
Hi
I have a question about how we can set up a kinesis consumer with a
specified applicationName, we are currently using flink-connector-kinesis
version 1.5, but we can't find anywhere to set up the applicationName. Thank
you very much!
Regards,
Xixi
Hi Kant,
as far as I know, no one is currently working on this. Dawid (cc) maybe
knows more.
Cheers,
Konstantin
On Sat, Apr 20, 2019 at 12:12 PM kant kodali wrote:
> Hi All,
>
> There seems to be a lot of interest for
> https://issues.apache.org/jira/browse/FLINK-7129
>
> Any rough idea on th
Hello,
We are experiencing regular backpressure (at least once a week). I would like
to ask how we can fix it.
Currently our configurations are:
vi /usr/lib/flink/conf/flink-conf.yaml
# Settings applied by Cloud Dataproc initialization action
jobmanager.rpc.address: bonusengine-prod-m-0
jobmanag
Hi,
If it helps, we're using Lightbend's Config for that:
* https://github.com/lightbend/config
*
https://www.stubbornjava.com/posts/environment-aware-configuration-with-typesafe-config
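Lightbend Config pulls in an external dependency, so for illustration here is a minimal sketch of the same base-plus-environment-override pattern using only java.util.Properties (the file names `application.properties` / `application-<env>.properties` are my own convention for this sketch, not something from the thread):

```java
import java.io.*;
import java.nio.file.*;
import java.util.Properties;

public class EnvAwareConfig {
    // Load base properties, then overlay environment-specific overrides,
    // mirroring Lightbend Config's fallback chain with plain Properties.
    static Properties load(Path dir, String env) throws IOException {
        Properties merged = new Properties();
        Path basePath = dir.resolve("application.properties");
        if (Files.exists(basePath)) {
            try (Reader r = Files.newBufferedReader(basePath)) {
                merged.load(r);
            }
        }
        Path envPath = dir.resolve("application-" + env + ".properties");
        if (Files.exists(envPath)) {
            try (Reader r = Files.newBufferedReader(envPath)) {
                merged.load(r); // later load wins, overriding base keys
            }
        }
        return merged;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cfg");
        Files.writeString(dir.resolve("application.properties"),
                "db.host=localhost\ndb.port=5432\n");
        Files.writeString(dir.resolve("application-test.properties"),
                "db.host=test-db\n");
        Properties cfg = load(dir, "test");
        System.out.println(cfg.getProperty("db.host")); // test-db (overridden)
        System.out.println(cfg.getProperty("db.port")); // 5432 (from base)
    }
}
```

Lightbend Config adds HOCON syntax, includes, and `withFallback` composition on top of this basic idea, which is why it is worth the dependency for anything non-trivial.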
Thanks,
Rafi
On Wed, Apr 17, 2019 at 7:07 AM Andy Hoang wrote:
> I have 3 different files for env: test, s
Congxian,
Thanks for the reply. I will try to get a minimum reproducer and post it to this
thread soon.
Ning
On Sun, 21 Apr 2019 09:27:12 -0400,
Congxian Qiu wrote:
>
> Hi,
> From the given error message, it seems Flink can't open RocksDB because
> the number of column families mismatches, do
Hi,
Answering my own question in case someone else is interested as well. As per
https://aws.amazon.com/blogs/big-data/build-and-run-streaming-applications-with-apache-flink-and-amazon-kinesis-data-analytics-for-java-applications/
it does autoscaling itself, but in order to change parallelism it takes
s