Hi Rong, thanks for the hint - that solved the issue.
On 20 Aug 2019, 0:06 +0300, Rong Rong wrote:
> Hi Itamar,
>
> The problem you described sounds similar to this ticket[1].
> Can you try to see if following the solution listed resolves your issue?
>
> --
> Rong
>
> [1] https://issues.apache.or
Hi Paul & Jark,
Thanks for your feedback!
I have also thought of putting the content in the email, but I hesitate
over where it should be sent (user-zh only, IMO), what kind of thread
it should be (an [ANNOUNCE] thread or just a normal one), and how to
format the content to fit the email form.
It is reasonable to hav
Hi Vishwas,
If you mean to have your application log to its configured appenders on
the client side, I think you could use your own FLINK_CONF_DIR environment
variable. Otherwise, we cannot update the log4j/logback configuration. [1]
[1]
https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin
Hi Zili,
+1 for the Chinese Weekly Community Update.
I think this will certainly attract more Chinese users.
Btw, could you also put the content of Chinese Weekly Updates in the email?
I think this will be more aligned with the Apache Way.
So that we can help respond to users who have interesting
Hi Min,
I guess you use standalone high availability, so when a TM fails, the
JM can recover the job from an in-memory checkpoint store.
However, when the JM fails, since you don't persist state in an HA backend
such as ZooKeeper, even if the JM is relaunched by the YARN RM or superseded
by a standby, the new one knows not
Thanks, Gordon, will do tomorrow.
---
Oytun Tez
*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com
On Mon, Aug 19, 2019 at 7:21 PM Tzu-Li (Gordon) Tai
wrote:
> Hi!
>
> Voting on RC3 for Apache Flink 1.9.0 has started:
>
> http://apache-flin
Hi!
Voting on RC3 for Apache Flink 1.9.0 has started:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-1-9-0-release-candidate-3-td31988.html
Please check this out if you want to verify your applications against this
new Flink release.
Cheers,
Gordon
-- F
Hi Itamar,
The problem you described sounds similar to this ticket[1].
Can you try to see if following the solution listed resolves your issue?
--
Rong
[1] https://issues.apache.org/jira/browse/FLINK-12399
On Mon, Aug 19, 2019 at 8:56 AM Itamar Ravid wrote:
> Hi, I’m facing a strange issue wi
Thanks for the answer, Congxian!
On Sun, Aug 18, 2019 at 10:43 PM Congxian Qiu
wrote:
> Hi
>
> Currently, we can't change a running job's checkpoint timeout, but there
> is an issue [1] which proposes setting a separate timeout for savepoints.
>
> [1] https://issues.apache.org/jira/browse/FLINK-9465
>
You can use the Timestamp Assigner / Watermark Generator in two different
ways: Per Kafka Partition or per parallel source.
I would usually recommend per Kafka Partition, because if the read position
in the partitions drifts apart (for example some partitions are read at the
tail, some are read a
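The per-partition recommendation above can be illustrated with a plain-Java sketch (no Flink APIs; the class name, the 1-second out-of-orderness bound, and the merge logic are all illustrative assumptions): each partition tracks its own maximum timestamp, and the source's watermark is the minimum across partitions, so one lagging partition holds the watermark back instead of being blended into a single per-source estimate.

```java
import java.util.*;

// Hypothetical sketch of per-partition watermarking (not Flink's actual
// implementation): track the max timestamp seen per partition, derive a
// per-partition watermark as maxTs - bound, and take the minimum.
public class PerPartitionWatermarks {
    static final long OUT_OF_ORDERNESS = 1_000L; // assumed bound, in ms

    final Map<Integer, Long> maxTsPerPartition = new HashMap<>();

    // Called for every record read from Kafka.
    void onRecord(int partition, long timestamp) {
        maxTsPerPartition.merge(partition, timestamp, Math::max);
    }

    // Source watermark = min over partitions of (maxTs - bound), so a
    // partition that lags far behind dominates the result.
    long currentWatermark() {
        return maxTsPerPartition.values().stream()
                .mapToLong(ts -> ts - OUT_OF_ORDERNESS)
                .min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        PerPartitionWatermarks wm = new PerPartitionWatermarks();
        wm.onRecord(0, 100_000); // partition 0 read at the tail
        wm.onRecord(1, 5_000);   // partition 1 lagging far behind
        System.out.println(wm.currentWatermark()); // prints 4000
    }
}
```

With a single per-source watermark, the fast partition's timestamps would pull the watermark forward and the lagging partition's records would arrive late; tracking per partition avoids that.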
Hi there,
We've started to see ConsumerCancelledException errors from our
RabbitMQ source. We've dug into everything, yet couldn't come up with a
good explanation.
This is the exception:
com.rabbitmq.client.ConsumerCancelledException
at
com.rabbitmq.client.QueueingConsumer.handle(Q
We are on 1.8 as of now; we will give "stop with savepoint" a try once we
upgrade.
I am trying to cancel the job with a savepoint and restore it again.
I think there is an issue with how our S3 lifecycle is configured. Thank
you for your help.
On Sun, Aug 18, 2019 at 8:10 AM Stephan Ewen wrote:
Thank you Taher. We are not on EMR, but it's great to know that S3 and the
streaming sink should work fine based on your explanation.
On Sun, Aug 18, 2019 at 8:23 AM taher koitawala wrote:
> Hi Swapnil,
> We faced this problem once. I think changing the checkpoint dir to HDFS
> and keeping sink d
Hello Rafi,
Thank you for getting back. We have a lifecycle rule set up for the sink
and not for the S3 bucket for savepoints. This was my initial hunch too,
but we tried restarting the job immediately after canceling it and it
failed.
Best,
Swapnil Kumar
On Sat, Aug 17, 2019 at 2:23 PM Rafi Aroch wr
Hi, I’m facing a strange issue with Flink 1.8.1. I’ve implemented a
StreamTableSource that implements FilterableTableSource and
ProjectableTableSource. However, I’m seeing that during the logical plan
optimization (TableEnvironment.scala:288), the applyPredicates method is called
but the result
Hi Min,
> Do I need to set up ZooKeeper to keep the state when a job manager
> crashes?
I guess you need to set up HA [1] properly. Besides that, I would
suggest you also check the state backend.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availabilit
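A minimal flink-conf.yaml sketch of such a setup (the ZooKeeper quorum addresses and HDFS paths below are placeholders for your environment; the keys are the standard Flink HA and state-backend options):

```yaml
# Hypothetical flink-conf.yaml fragment: ZooKeeper-based JobManager HA
# plus a durable state backend, so a restarted JM can recover jobs.
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/

state.backend: filesystem
state.checkpoints.dir: hdfs:///flink/checkpoints/
```

Without the HA storage directory on a durable filesystem, a freshly started JobManager has no way to find the completed checkpoints of the failed one.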
Which kind of deployment system are you using:
standalone, YARN, or other?
On Mon, Aug 19, 2019, 18:28 wrote:
> Hi,
>
> I can use checkpoints to recover Flink state when a task manager crashes.
>
> I cannot use checkpoints to recover Flink state when a job manager
> crashes.
>
> D
Flink's sliding window didn't work well for our use case at SAP, as
checkpointing freezes with 288 sliding windows per tenant. Implementing
the sliding window through a tumbling window / process function reduces
the checkpointing time to a few seconds. We will see how that scales with
1000 or more tenan
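One way to read the trick described above (a hypothetical plain-Java sketch, not the author's actual implementation): instead of assigning each element to 288 overlapping sliding windows (e.g. a 24h window with a 5-minute slide), aggregate elements once into non-overlapping 5-minute tumbling panes and compute each sliding result by combining the last 288 pane aggregates. State then grows with the number of panes rather than elements times windows.

```java
import java.util.*;

// Hypothetical "panes" sketch: one tumbling-pane aggregate per 5-minute
// bucket; a sliding-window result is the sum of the panes it covers.
public class PanesSketch {
    static final long PANE_MS = 5 * 60 * 1000L;   // 5-minute pane
    static final int PANES_PER_WINDOW = 288;      // 24h / 5min

    final TreeMap<Long, Long> paneCounts = new TreeMap<>(); // pane start -> count

    // Each element updates exactly one pane, not 288 windows.
    void onElement(long timestamp) {
        long paneStart = timestamp - (timestamp % PANE_MS);
        paneCounts.merge(paneStart, 1L, Long::sum);
    }

    // Sliding-window result ending at windowEnd: combine covered panes.
    long slidingCount(long windowEnd) {
        long windowStart = windowEnd - PANES_PER_WINDOW * PANE_MS;
        return paneCounts.subMap(windowStart, true, windowEnd, false)
                .values().stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        PanesSketch sketch = new PanesSketch();
        sketch.onElement(10_000);      // pane [0, 5min)
        sketch.onElement(20_000);      // same pane
        sketch.onElement(PANE_MS + 1); // next pane
        System.out.println(sketch.slidingCount(2 * PANE_MS)); // prints 3
    }
}
```

This only works for combinable aggregates (count, sum, min, max); for arbitrary window functions the panes would need to keep raw elements.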
Hi,
I can use checkpoints to recover Flink state when a task manager crashes.
I cannot use checkpoints to recover Flink state when a job manager crashes.
Do I need to set up ZooKeeper to keep the state when a job manager crashes?
Regards
Min
Great!
Thanks for the feedback.
Cheers, Fabian
On Mon, 19 Aug 2019 at 17:12, Ahmad Hassan <
ahmad.has...@gmail.com> wrote:
>
> Thank you Fabian. This works really well.
>
> Best Regards,
>
> On Fri, 16 Aug 2019 at 09:22, Fabian Hueske wrote:
>
>> Hi Ahmad,
>>
>> The ProcessFunction shou
Thank you Fabian. This works really well.
Best Regards,
On Fri, 16 Aug 2019 at 09:22, Fabian Hueske wrote:
> Hi Ahmad,
>
> The ProcessFunction should not rely on new records coming in (i.e., do the
> processing in the onElement() method) but rather register a timer every 5
> minutes and perform
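The timer-driven pattern quoted above can be sketched outside Flink. This plain-Java simulation of an event-time timer service is illustrative only (the class and method names and the bookkeeping are assumptions, not Flink's ProcessFunction API): elements only update state, while a re-registered timer emits results, so output is produced even when no new records arrive.

```java
import java.util.*;

// Hypothetical sketch of "register a timer, don't emit in onElement":
// elements update state; a timer fires every 5 minutes of event time
// (driven here by an explicit watermark) and emits the current result.
public class TimerDrivenSketch {
    static final long INTERVAL = 5 * 60 * 1000L; // 5 minutes in ms

    long nextTimer = INTERVAL;              // first registered timer
    long runningCount = 0;                  // state updated by elements
    final List<String> emitted = new ArrayList<>();

    // Called for every element: only update state, never emit here.
    void onElement(long timestamp) {
        runningCount++;
    }

    // Called when event time passes the registered timer: emit and
    // re-register the next timer.
    void onTimer(long timerTimestamp) {
        emitted.add("t=" + timerTimestamp + " count=" + runningCount);
        nextTimer = timerTimestamp + INTERVAL;
    }

    // Advance the watermark, firing any due timers (possibly several).
    void advanceWatermark(long watermark) {
        while (nextTimer <= watermark) {
            onTimer(nextTimer);
        }
    }

    public static void main(String[] args) {
        TimerDrivenSketch op = new TimerDrivenSketch();
        op.onElement(1_000);
        op.onElement(2_000);
        // Event time advances past two 5-minute boundaries with no new
        // elements, yet two results are still emitted:
        op.advanceWatermark(2 * TimerDrivenSketch.INTERVAL);
        System.out.println(op.emitted);
        // prints [t=300000 count=2, t=600000 count=2]
    }
}
```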
Hi,
I have a logback configuration for my Flink application, which is packaged
with the application fat jar. However, when I submit my job from the Flink
command line tool, I see that logback is set to
-Dlogback.configurationFile=file:/data/flink-1.7.2/conf/logback.xml in
the client log.
As a result my application
The DataStream API should fully subsume the DataSet API (through bounded
streams) in the long run [1].
You can also consider using the Table/SQL API in your project.
[1]
https://flink.apache.org/roadmap.html#analytics-applications-and-the-roles-of-datastream-dataset-and-table-api
*Best Regards,*
*Zhenghu
I'm late to the party... Welcome and congrats! :-)
– Ufuk
On Mon, Aug 19, 2019 at 9:26 AM Andrey Zagrebin
wrote:
> Hi Everybody!
>
> Thanks a lot for the warm welcome!
> I am really happy about joining the Flink committer team and hope to help
> the project grow more.
>
> Cheers,
> Andrey
>
> O
Hi Everybody!
Thanks a lot for the warm welcome!
I am really happy about joining the Flink committer team and hope to help
the project grow more.
Cheers,
Andrey
On Fri, Aug 16, 2019 at 11:10 AM Terry Wang wrote:
> Congratulations Andrey!
>
> Best,
> Terry Wang
>
>
>
> On 15 Aug 2019, at 9:27 PM, Hequn