Thanks for the response.
It is a bit surprising that it is always a new instance, given the various
API signatures that take in a Configuration instance. The best practices
docs (
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/best_practices.html#using-the-parameters-in-your-flin
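The pattern that page describes boils down to roughly this (a Scala sketch;
the parameter names are just placeholders):

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._

val params = ParameterTool.fromArgs(args)
val env = StreamExecutionEnvironment.getExecutionEnvironment
// Register the parameters globally so every rich function can read them via
// getRuntimeContext.getExecutionConfig.getGlobalJobParameters.
env.getConfig.setGlobalJobParameters(params)
val input = params.getRequired("input") // e.g. --input hdfs:///some/path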
Hi,
The passing in of a Configuration instance in the open() method is actually a
leftover artifact of the DataStream API that remains only for API backwards
compatibility reasons.
There's actually no way to modify what configuration is retrieved there (it
is always a new, empty Configuration).
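In practice that means any settings a rich function needs should be passed in
some other way, e.g. through its constructor. A minimal sketch (the class and
field names here are made up for illustration):

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

class ThresholdMapper(threshold: Int) extends RichMapFunction[Int, Int] {
  override def open(parameters: Configuration): Unit = {
    // `parameters` is always a fresh, empty Configuration in the
    // DataStream API, so don't try to read settings from it here.
  }
  override def map(value: Int): Int = if (value > threshold) value else 0
}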
Hi Chesnay,
I didn't try it on 1.4, so I have no idea whether this also occurs there.
As for my logging settings, the level is already set to INFO, but there
weren't any errors or warnings in the log file either.
Best Regards,
Tony Wei
2017-09-22 22:07 GMT+08:00 Chesnay Schepler :
> The Prometheus reporter should work with 1.3.2.
Thanks for the reply. Well, tracing back to the root cause, I see the
following:
1. At the JobManager, the checkpoint times are getting progressively worse:
2017-09-16 05:05:50,813 INFO
org.apache.flink.runtime.checkpoint.CheckpointCoordinator
The Prometheus reporter should work with 1.3.2.
Does this also occur with the reporter that currently exists in 1.4? (to
rule out new bugs from the PR).
To investigate this further, please set the logging level to WARN and
try again, as all errors in the metric system are logged at that level.
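For example, in the conf/log4j.properties that ships with Flink, that amounts
to changing the root logger line (the appender name may differ in your setup):

log4j.rootLogger=WARN, file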
Hi Rahul!
1. Will the Flink-Kafka consumer 0.8x run on multiple task slots or a single
task slot? Basically, I mean: is it going to be a parallel operation or a
non-parallel operation?
Yes, the FlinkKafkaConsumer is a parallel consumer.
2. If it's a parallel operation, then do multiple task slots rea
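To make the first answer concrete, a minimal Scala sketch (broker, topic and
group names are placeholders) where the consumer runs with parallelism 4, so
the topic's partitions are split across four consumer subtasks:

import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val props = new Properties()
props.setProperty("bootstrap.servers", "localhost:9092")
props.setProperty("zookeeper.connect", "localhost:2181") // required by the 0.8 consumer
props.setProperty("group.id", "my-group")

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream = env
  .addSource(new FlinkKafkaConsumer08[String]("my-topic", new SimpleStringSchema(), props))
  .setParallelism(4) // four parallel source subtasks share the partitions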
Hello everyone,
I'd like to use the HiveBolt from storm-hive inside a Flink job using the
Flink-Storm compatibility layer, but I'm not sure how to integrate it. Let
me explain: I would have the following:
val mapper = ...
val hiveOptions = ...
streamByID
  .transform[OUT]("hive-sink", new BoltWrapper(new HiveBolt(hiveOptions)))
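One way this could be wired up, assuming flink-storm's BoltWrapper and the
usual storm-hive option builders; the metastore URI, database, table and field
names are placeholders, and the field names must line up between the mapper
and the wrapper's input schema:

import org.apache.flink.storm.wrappers.BoltWrapper
import org.apache.storm.hive.bolt.HiveBolt
import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper
import org.apache.storm.hive.common.HiveOptions
import org.apache.storm.tuple.Fields

val mapper = new DelimitedRecordHiveMapper()
  .withColumnFields(new Fields("line"))
val hiveOptions = new HiveOptions("thrift://localhost:9083", "default", "events", mapper)
  .withTxnsPerBatch(10)
  .withBatchSize(1000)

// HiveBolt only writes to Hive and emits nothing downstream, hence Unit.
streamByID
  .transform[Unit](
    "hive-sink",
    new BoltWrapper[String, Unit](new HiveBolt(hiveOptions), new Fields("line")))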
@Eron Yes, that would be the difference in characterisation. I think
technically all sources could be transformed that way: push data into a
(blocking) queue and have the "getElement()" method pull from it.
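A minimal sketch of that queue-based adapter (getElement() is just the name
from this discussion, not an existing Flink API):

import java.util.concurrent.ArrayBlockingQueue

class PullAdapter[T](capacity: Int = 1024) {
  private val queue = new ArrayBlockingQueue[T](capacity)
  // Called by the push-based source; blocks while the buffer is full.
  def push(element: T): Unit = queue.put(element)
  // Called by the pull side; blocks until an element is available.
  def getElement(): T = queue.take()
}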
> On 15. Sep 2017, at 20:17, Elias Levy wrote:
>
> On Fri, Sep 15, 2017 at 10:0
Thanks for letting us know!
> On 18. Sep 2017, at 11:36, PedroMrChaves wrote:
>
> Hello,
>
> Sorry for the delay.
>
> The buffer memory of the Kafka consumer was piling up. Once I updated to the
> 1.3.2 version the problem no longer occurred.
>
> Pedro.
>
>
>
> -
> Best Regards,
> Pedro
Hi,
I have just started working with Flink and I am working on a project which
involves reading Kafka data and processing it. The following questions came
to mind:
1. Will the Flink-Kafka consumer 0.8x run on multiple task slots or a single
task slot? Basically, I mean: is it going to be a parallel operation or a
non-parallel operation?
Hi,
I have built the Prometheus reporter package from this PR
https://github.com/apache/flink/pull/4586, and used it on Flink 1.3.2 to
record all the default metrics and those from `FlinkKafkaConsumer`.
Originally, everything was fine: I could get those metrics from the TM in
Prometheus just like I saw
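For context, the reporter from that PR is typically enabled with something
like the following in flink-conf.yaml (the port value here is an assumption):

metrics.reporters: prom
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.port: 9249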