Hi,
I am experimenting with Flink SQL by following
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sqlClient.html.
I want to set up an environment yaml to query Kafka data (JSON in Avro
format). Where can I find the information below?
1. use GenericInMemoryCatalog (e.g. type, se
Hi, Flink Users
Assuming a Flink job turns on incremental checkpointing and restores from a
savepoint: it runs fine for a while, commits one checkpoint, and then it
fully restarts because of an error. At this point, is it possible that the
job still needs files in the original savepoint for recov
First off, thanks for your reply!
I have an assumption that I should probably verify first:
When determining the source of the backpressure, we look (in the WebUI) for the
first operator in our pipeline that is not showing backpressure. That's what we
consider to be the source of the backpressure.
Hi,
I did not find an out-of-the-box Flink sink connector for HTTP or SQS.
Has anyone implemented one?
I wanted to know: if we write a custom sink function, would it
affect exactly-once semantic guarantees?
Thanks,
Prasanna
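For what it's worth, a minimal sketch of what such a custom sink could look like
in the DataStream API (HttpPostSink and its endpoint are made-up names, not an
official connector). On its own a sink like this only gives at-least-once
behaviour, since records replayed after a failure are sent again; exactly-once
would additionally need an idempotent endpoint or a two-phase-commit style sink:

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Hypothetical sketch: POSTs each record to an HTTP endpoint.
public class HttpPostSink extends RichSinkFunction<String> {

    private final String endpoint;

    public HttpPostSink(String endpoint) {
        this.endpoint = endpoint;
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.getOutputStream().write(value.getBytes(StandardCharsets.UTF_8));
        if (conn.getResponseCode() >= 300) {
            // Failing the task makes Flink restart from the last checkpoint,
            // which is what gives the at-least-once replay.
            throw new RuntimeException("HTTP sink failed with " + conn.getResponseCode());
        }
        conn.disconnect();
    }
}

It would then be attached with stream.addSink(new HttpPostSink("https://example.com/ingest")),
where the URL is again just a placeholder.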
Hi,
I have two questions:
1. In the first part of my pipeline, which uses Flink DataStreams to process graph
edges, I'm filling up a HashMap. Into it go a vertex id and the partition this
vertex is assigned to. Later in my pipeline I want to query this HashMap
again, to see in which partition exactly I c
Thanks Chesnay.
The Flink UI port 8081 conflicts with the Confluent Schema Registry, causing
the cluster to fail to start. After changing the port, the cluster can start
and the job can be submitted. Thanks.
On Fri, Jul 17, 2020 at 12:40 AM Chesnay Schepler
wrote:
> Please double-check that the client and
Hi to all,
is there a way to write out Parquet-Avro data using BatchTableEnvironment
with Flink 1.11?
At the moment I'm using the Hadoop ParquetOutputFormat, but I hope to be
able to get rid of it sooner or later. I saw that there's the
AvroOutputFormat but no support for using it with Parquet.
Best,
Fl
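Not an answer for BatchTableEnvironment itself, but for comparison: on the
DataStream side, Flink 1.11 ships ParquetAvroWriters for the StreamingFileSink,
so a rough sketch (schema and output path are placeholders) could look like:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class ParquetAvroSinkSketch {
    // Assumes flink-parquet and the Avro dependencies are on the classpath.
    public static void attachSink(DataStream<GenericRecord> records, Schema schema, String outputPath) {
        StreamingFileSink<GenericRecord> sink = StreamingFileSink
                .forBulkFormat(new Path(outputPath), ParquetAvroWriters.forGenericRecord(schema))
                .build();
        records.addSink(sink);
    }
}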
Hi
I was wondering whether there is any reasonably optimised DynamoDB sink.
I am surprised that I only found some old, partial discussions about
implementing your own.
Am I the only one with the requirement of sending output to DynamoDB?
Am I missing something obvious?
I am obviously looking for an
I have created a custom annotation to log the time consumed by a method, using
the AspectJ library.
I tested it in a Spring Boot application for one of the REST endpoints and
it works fine.
But when I annotate a map method in my Flink job class, it doesn't work.
Does anyone have any input on this?
With regards
Backpressure is typically caused by something like one of these things:
* problems relating to i/o to external services (e.g., enrichment via an
API or database lookup, or a sink)
* data skew (e.g., a hot key)
* under-provisioning, or competition for resources
* spikes in traffic
* timer storms
I
Hi
While doing scale testing, we observed that the FsStateBackend is
outperforming RocksDB.
When using RocksDB, off-heap memory keeps growing over a period of time,
and after a day the pod got terminated with an OOM.
Whereas with the same data pattern, the FsStateBackend has been running for
days without any memory spike an
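For reference, a minimal sketch of the kind of RocksDB state backend setup
being compared here (the checkpoint URI is a placeholder); whether RocksDB's
native memory stays bounded is governed by the managed-memory settings
(taskmanager.memory.managed.*), not by this code:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Placeholder checkpoint URI; the second argument enables incremental checkpoints.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
        env.enableCheckpointing(60_000);
        // ... build the pipeline and call env.execute() ...
    }
}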
You could approach testing this in the same way that Flink has implemented
its unit tests for KeyedBroadcastProcessFunctions, which is to use
a KeyedTwoInputStreamOperatorTestHarness with
a CoBroadcastWithKeyedOperator. To learn how to use Flink's test harnesses,
see [1], and for an example of test
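To make that concrete, here is a hedged sketch; TagWithRule is a toy
KeyedBroadcastProcessFunction made up for illustration, not code from this
thread:

import java.util.Collections;

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.streaming.api.operators.co.CoBroadcastWithKeyedOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.KeyedTwoInputStreamOperatorTestHarness;
import org.apache.flink.util.Collector;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class KeyedBroadcastHarnessSketchTest {

    // Broadcast state descriptor shared by the function and the operator.
    private static final MapStateDescriptor<String, String> RULES =
            new MapStateDescriptor<>("rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

    // Toy function under test: tags each keyed element with the latest broadcast rule.
    static class TagWithRule extends KeyedBroadcastProcessFunction<String, String, String, String> {
        @Override
        public void processElement(String value, ReadOnlyContext ctx, Collector<String> out) throws Exception {
            out.collect(value + "@" + ctx.getBroadcastState(RULES).get("current"));
        }

        @Override
        public void processBroadcastElement(String rule, Context ctx, Collector<String> out) throws Exception {
            ctx.getBroadcastState(RULES).put("current", rule);
        }
    }

    @Test
    public void broadcastRuleIsAppliedToKeyedElements() throws Exception {
        CoBroadcastWithKeyedOperator<String, String, String, String> operator =
                new CoBroadcastWithKeyedOperator<>(
                        new TagWithRule(), Collections.<MapStateDescriptor<?, ?>>singletonList(RULES));

        KeyedTwoInputStreamOperatorTestHarness<String, String, String, String> harness =
                new KeyedTwoInputStreamOperatorTestHarness<>(
                        operator, element -> element, null, BasicTypeInfo.STRING_TYPE_INFO);
        harness.open();

        // Input 2 is the broadcast side, input 1 the keyed side.
        harness.processElement2(new StreamRecord<>("ruleA", 1L));
        harness.processElement1(new StreamRecord<>("event-1", 2L));

        assertEquals(1, harness.getOutput().size());
        harness.close();
    }
}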
Hi Prasanna,
From my experience, there is a ton of stuff which can slow down even a simple
pipeline heavily. One thing directly coming to my mind: "object reuse" is not
enabled. Even if you have a very simple pipeline with just 2 map steps or so,
this can lead to a ton of unnecessary deep
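For reference, enabling object reuse is a one-liner on the execution config;
whether it is safe depends on your functions not caching or mutating the
records they receive:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ObjectReuseSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Avoids defensive copies between chained operators; only safe if
        // user functions do not hold on to or mutate the objects they receive.
        env.getConfig().enableObjectReuse();
        // ... define the pipeline, then env.execute() ...
    }
}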
If the rules can be implemented by examining events in isolation (e.g.,
temperature > 25), then the DataStream API is all you need. But if you want
rules that are looking for temporal patterns that play across multiple
events, then CEP or MATCH_RECOGNIZE (part of Flink SQL) will simplify
the implem
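As a hedged sketch of that contrast (SensorReading and its fields are made-up
names): a rule on events in isolation is just a filter, while a temporal rule
such as "two readings above 25 from the same sensor within 10 seconds" is
where CEP helps:

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

public class RuleSketch {
    // Made-up event type for illustration.
    public static class SensorReading {
        public String sensorId;
        public double temperature;
    }

    public static void applyRules(DataStream<SensorReading> readings) {
        // Rule on events in isolation: a plain filter is enough.
        DataStream<SensorReading> hot = readings.filter(r -> r.temperature > 25);

        // Temporal rule across multiple events: two hot readings from the
        // same sensor within 10 seconds, expressed as a CEP pattern.
        Pattern<SensorReading, ?> pattern = Pattern.<SensorReading>begin("first")
                .where(new SimpleCondition<SensorReading>() {
                    @Override
                    public boolean filter(SensorReading r) {
                        return r.temperature > 25;
                    }
                })
                .next("second")
                .where(new SimpleCondition<SensorReading>() {
                    @Override
                    public boolean filter(SensorReading r) {
                        return r.temperature > 25;
                    }
                })
                .within(Time.seconds(10));

        PatternStream<SensorReading> matches =
                CEP.pattern(readings.keyBy(r -> r.sensorId), pattern);
        // matches.select(...) / matches.process(...) would turn matches into alerts.
    }
}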
Hi to all,
I was trying to port another job we have that uses the DataSet API to the DataStream API.
The legacy program was basically doing a dataset.mapPartition().reduce(), so
I tried to replicate this with a
final BasicTypeInfo columnType = BasicTypeInfo.DOUBLE_TYPE_INFO;
final DataStream input = en
Please double-check that the client and server are using the same Flink
version.
On 17/07/2020 02:42, Lian Jiang wrote:
Hi,
I am using Java 1.8 and Flink 1.11 by following
https://ci.apache.org/projects/flink/flink-docs-release-1.11/try-flink/local_installation.html
on my Mac (macOS Mojave 10.14.6).