Nice work Peter! Looking forward to the fix.
@ChangZhuo Kafka metrics are emitted from the source, and the process
function would be a different operator. For the DataStream API, you can set
`KafkaSourceOptions.REGISTER_KAFKA_CONSUMER_METRICS.key()` to `false` in
your consumer properties.
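For example (a minimal sketch, not tested against your setup: the broker
address, topic, and group id are placeholders, and it assumes the new
DataStream Kafka source from flink-connector-kafka):

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.KafkaSourceOptions;

    public class DisableKafkaConsumerMetrics {
        public static KafkaSource<String> buildSource() {
            // Tell the Kafka source not to register the Kafka consumer's
            // own metrics with Flink's metric system.
            Properties props = new Properties();
            props.setProperty(
                    KafkaSourceOptions.REGISTER_KAFKA_CONSUMER_METRICS.key(),
                    "false");

            return KafkaSource.<String>builder()
                    .setBootstrapServers("broker:9092") // placeholder
                    .setTopics("input-topic")           // placeholder
                    .setGroupId("my-group")             // placeholder
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .setProperties(props)
                    .build();
        }
    }

Note this only stops the consumer's own metrics from being forwarded; the
source operator's regular Flink metrics are unaffected.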
Best,
Ma
Thank you Yun Gao, Till and Joe for driving this release. Your efforts are
greatly appreciated!
To everyone who has opened Jira tickets, provided PRs, reviewed code,
written documentation, or contributed in any other way: this release was
(once again) made possible by you! Thank you.
Best
The Apache Flink community is very happy to announce the release of
Apache Flink 1.15.0, which is the first release for the Apache Flink
1.15 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
On Wed, May 04, 2022 at 01:53:01PM +0200, Chesnay Schepler wrote:
> Disabling the Kafka metrics _should_ work.
Is there any way to disable Kafka metrics when using a low-level process
function?
--
ChangZhuo Chen (陳昌倬) czchen@{czchen,debian}.org
http://czchen.info/
Key fingerprint = BA04 346D C2E1
PyFlink uses connectors to send data to external storage. It should be
noted that the connector implementations are shared between the Java API
and the Python API, so if you can find a Java connector, it can usually
also be used in PyFlink.
For Firehose, it has provided a Firehose sink connector in F
Hi Dhavan,
The asyncio operator is still not supported in PyFlink.
Regards,
Dian
On Tue, May 3, 2022 at 3:48 PM Dhavan Vaidya
wrote:
> Hey Francis!
>
> Thanks for the insights! I am thinking of using Java / Scala for this
> scenario given your input. Introducing a new language to the team, however
Thanks Meissner Dylan for the suggestion. I have created a ticket [1] to
track this requirement.
[1]. https://issues.apache.org/jira/browse/FLINK-27491
Best,
Yang
On Thu, May 5, 2022 at 06:06, Francis Conroy wrote:
Hi all,
Thanks for looking into this. Yeah, I kept trying different variations of
the replacement fields with no success. I'm trying to use the .getenv()
technique now but our cluster is having problems and I haven't been able to
reinstall the operator.
I'll reply once it's all working.
Thanks,
F
Have you tried MirrorMaker 2's consumer offset translation feature? I have
not used this myself, but it sounds like what you are looking for!
https://issues.apache.org/jira/browse/KAFKA-9076
https://kafka.apache.org/26/javadoc/org/apache/kafka/connect/mirror/Checkpoint.html
https://strimzi.io/blog
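For reference, a rough sketch of how the translated offsets could be
fetched programmatically via connect-mirror-client (hedged: the bootstrap
address, the MM2 source-cluster alias "source", and the group id are all
placeholders, and it assumes MM2 checkpointing is enabled):

    import java.time.Duration;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.connect.mirror.RemoteClusterUtils;

    public class TranslateOffsetsSketch {
        public static void main(String[] args) throws Exception {
            // Client config pointing at the *target* cluster, where MM2
            // has written its checkpoints topic.
            Map<String, Object> props = new HashMap<>();
            props.put("bootstrap.servers", "target-cluster:9092");

            // Translate the committed offsets of consumer group "my-group"
            // from the source cluster (MM2 alias "source") into
            // target-cluster offsets.
            Map<TopicPartition, OffsetAndMetadata> translated =
                    RemoteClusterUtils.translateOffsets(
                            props, "source", "my-group",
                            Duration.ofSeconds(30));

            translated.forEach((tp, om) ->
                    System.out.println(tp + " -> " + om.offset()));
        }
    }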
Thank you for the suggestions, guys!
@Austin Cawley-Edwards
Your idea is spot on! This approach would surely work. We could take a
savepoint of each of our apps, load it using the State Processor API,
create another savepoint accounting for the delta on the offsets, and start
the app on the new cluster.
Hi John,
In an ideal scenario you would be able to leverage Flink's backpressure
mechanism. That would effectively slow down the processing until the reason
for the backpressure has been resolved. However, given that indexing
happens after your results have been written by the sink, from a Flink
perspective the actio
So I know specifically that it's the indexing, and I have set
setQueryTimeout, so the job fails and goes into retry. That's fine.
But I'm just wondering: is there a way to pause the stream at a specified
time/checkpoint and then resume after a specified time?
On Wed, May 4, 2022 at 10:23 AM Martijn Visser
wro
Kubernetes resources support env interpolation natively using the $()
syntax. I expected this to "just work" for FlinkDeployment resources when
using the operator, like it does for other resources, but it does not.
https://kubernetes.io/docs/tasks/inject-data-application/_print/#use-environment-variables-to-define-arguments
job:
jar
Hi Kevin,
I'm hoping that @Thomas Weise could help with the issue
regarding the recovery from the savepoint.
Best regards,
Martijn
On Wed, 4 May 2022 at 17:05, Kevin Lam wrote:
Following up on this, is there a good way to debug restoring from
savepoints locally? We currently have a set-up where we use IntelliJ to run
and test our pipelines locally, but would like an API to be able to specify
the savepoint to restore from, without needing to spin up a full cluster.
In int
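One possible way (a minimal sketch, assuming the execution.savepoint.path
option is picked up by the local MiniCluster, which it should be in recent
Flink versions; the savepoint path and job name are placeholders):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LocalSavepointRestoreTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Restore the job from an existing savepoint when the local
            // MiniCluster starts (path is a placeholder).
            conf.setString("execution.savepoint.path",
                    "file:///tmp/savepoints/savepoint-0000-000000000000");

            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.createLocalEnvironment(conf);

            // ... build the same pipeline you normally run ...

            env.execute("restore-from-savepoint-locally");
        }
    }

This runs entirely inside the IDE, so no standalone cluster is needed.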
Hi John,
It is generic, but each database has its own dialect implementation,
because they all have their differences, unfortunately :)
I wish I knew how I could help you out here. Perhaps some of the JDBC
maintainers could chip in.
Best regards,
Martijn
On Sun, 1 May 2022 at 04:06, John Smith
Ah, that's unfortunate. Yeah, the feature freeze was quite a bit earlier
than I remembered :(
On 04/05/2022 15:31, Peter Schrott wrote:
Hi Chesnay,
Thanks again for the hints.
Unfortunately the metrics filtering feature is not part of 1.15.0. It seems
to be part of 1.16.0: https://issues.apache.org/jira/browse/FLINK-21585
I was already wondering why I could not find the feature in the docs you
linked.
Disabling the Kafka metrics _should_ work.
Alternatively you could use the new generic feature to filter metrics:
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/metric_reporters/#filter-excludes
metrics.reporter.<reporter>.filter.excludes:
*KafkaProducer*;*KafkaConsumer*
Th
Alright! Thanks!
I tried to dig a bit deeper and see if there is any workaround for the
problem. I tried to switch off reporting the Kafka metrics, but I was not
quite successful. I am using the Table API Kafka connector.
Do you have any suggestions on how to overcome this?
Could you also prov
https://issues.apache.org/jira/browse/FLINK-27487
On 04/05/2022 13:22, Chesnay Schepler wrote:
Unsubscribe
Yes, that looks like a new bug in 1.15.
The migration to the new, non-deprecated Kafka API in the
KafkaMetricMutableWrapper was done incorrectly.
This should affect every job that uses the new Kafka connector.
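For illustration, a sketch of the kind of defensive handling that would
avoid the cast failure (hypothetical, my own naming, and not the actual
fix that will land in the ticket):

    import org.apache.flink.metrics.Gauge;
    import org.apache.kafka.common.metrics.KafkaMetric;

    // The non-deprecated KafkaMetric#metricValue() returns Object, which
    // is not always a Double, so a blind (double) cast can throw
    // ClassCastException.
    public class SafeKafkaMetricWrapper implements Gauge<Double> {

        private volatile KafkaMetric kafkaMetric;

        public SafeKafkaMetricWrapper(KafkaMetric kafkaMetric) {
            this.kafkaMetric = kafkaMetric;
        }

        @Override
        public Double getValue() {
            Object value = kafkaMetric.metricValue();
            // Only numeric metric values can be exposed as a double gauge.
            return value instanceof Number
                    ? ((Number) value).doubleValue()
                    : Double.NaN;
        }
    }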
Thank you for debugging the issue!
I will create a ticket.
On 04/05/2022 12:24, Pet
As the stack trace says, the ClassCastException occurs here:
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/metrics/KafkaMetricMutableWrapper.java#L37
I found the following metrics to be affected
Sorry for the spamming!
Just after jumping into the debug session, I noticed that there are indeed
exceptions thrown when fetching the metrics on port 9200:
13657 INFO [ScalaTest-run] com.sun.net.httpserver - HttpServer
created http 0.0.0.0/0.0.0.0:9200
13658 INFO [ScalaTest-run] com.sun.net.ht
Hi Chesnay,
Thanks for the support! Just for confirmation: running the "Problem-Job"
locally as a test in IntelliJ (as Chesnay suggested above) reproduces the
described problem:
➜ ~ curl localhost:9200
curl: (52) Empty reply from server
Doing the same with other jobs, metrics are available on local
Hello Hemanga,
MirrorMaker can cause havoc in many respects; for one, it does not have
strict exactly-once semantics.
The way I would tackle this problem (and have done in similar situations):
* For the source topics that need to have exactly-once semantics and
that are not intrinsica