Thank you Fabian! We will try the approach that you suggest.
On Thu, Jun 6, 2019 at 1:03 AM Fabian Hueske wrote:
> Hi Yu,
>
> When you register a DataStream as a Table, you can create a new attribute
> that contains the event timestamp of the DataStream records.
> For that, you would need to assign timestamps and generate watermarks
> before registering the stream.
Hi,
I have a class defined:

public class MGroupingWindowAggregate implements AggregateFunction.. {
    private final Map keyHistMap = new TreeMap<>();
}

In the constructor, I initialize it:

public MGroupingWindowAggregate() {
    Histogram minHist = new Histogram(new SlidingTimeWindowReservoir(
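For reference, a minimal compilable sketch of this pattern, assuming the Histogram and SlidingTimeWindowReservoir come from Dropwizard Metrics (com.codahale.metrics), and that the 5-minute window and the "min" key are placeholders:

import com.codahale.metrics.Histogram;
import com.codahale.metrics.SlidingTimeWindowReservoir;

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.TimeUnit;

public class MGroupingWindowAggregate /* implements AggregateFunction<IN, ACC, OUT> */ {

    // one Dropwizard histogram per key, kept in key order
    private final Map<String, Histogram> keyHistMap = new TreeMap<>();

    public MGroupingWindowAggregate() {
        // reservoir covering the last 5 minutes (window size is an assumption)
        Histogram minHist =
                new Histogram(new SlidingTimeWindowReservoir(5, TimeUnit.MINUTES));
        keyHistMap.put("min", minHist);
    }
}

Keep in mind that Flink serializes user functions when a job is submitted, so non-serializable instance fields such as Dropwizard histograms can cause submission failures.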
Hi,
Hi,
I am using 1.7.1; we store checkpoints in Ceph and use
flink-s3-fs-hadoop-1.7.1 to connect to Ceph. I have only 1 checkpoint
retained. The issue I see is that previous/old chk- directories are still
around. I verified that those older directories don't contain any checkpoint
data, but the directories
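For context, a sketch of the configuration this setup implies; both keys are real Flink options, the bucket path is a placeholder:

# flink-conf.yaml
state.checkpoints.dir: s3://some-bucket/checkpoints   # Ceph, via flink-s3-fs-hadoop
state.checkpoints.num-retained: 1                     # keep only the newest checkpoint

With num-retained set to 1, Flink deletes the data files of superseded checkpoints, which matches the observation above that the old chk- directories are empty even though the directories themselves are left behind.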
Hi guys,
May I know, does Flink support IPv6?
Thanks
Yow
Thanks Fabian. This is really helpful.
Great, thanks!
From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: Thursday, June 6, 2019 3:07 PM
To: Smirnov Sergey Vladimirovich
Cc: user@flink.apache.org
Subject: Re: Change sink topology
Hi Sergey,
I would not consider this to be a topology change (the sink operator would
still be a Kafka producer).
It seems that dynamic topic selection is possible with a
KeyedSerializationSchema (see "Advanced Serialization Schema" [1]).
Best,
Fabian
[1]
https://ci.apache.org/projects/flink/f
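To make the "Advanced Serialization Schema" approach in [1] concrete, here is a hedged sketch of per-record topic routing; the Message class and its fields are assumptions, but getTargetTopic() is the hook that enables dynamic topic selection:

import java.nio.charset.StandardCharsets;

import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

// assumed record type, for illustration only
class Message {
    String property; // decides the target topic
    String payload;
}

public class TopicRoutingSchema implements KeyedSerializationSchema<Message> {

    @Override
    public byte[] serializeKey(Message element) {
        return null; // records are written without a key
    }

    @Override
    public byte[] serializeValue(Message element) {
        return element.payload.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String getTargetTopic(Message element) {
        // chosen_topic = function(message.property)
        return "topic-" + element.property;
    }
}

The schema is then passed to the FlinkKafkaProducer constructor together with a default topic, which is used whenever getTargetTopic() returns null.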
Hi flink,
I wonder, is it possible to dynamically (while the job is running) change the
sink topology* - by adding a new sink on the fly?
Say, we have an input stream, and by analyzing a message property we decide to
put the message into some Kafka topic, i.e. chosen_topic =
function(message.property).
Simpli
Hi Ben,
Flink correctly maintains the offsets of all partitions that are read by a
Kafka consumer.
A checkpoint is only complete when all functions have successfully checkpointed
their state. For a Kafka consumer, this state is the current reading offset.
In case of a failure, the offsets and the state of a
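As an illustration, a minimal job in which the consumer's partition offsets become part of every checkpoint; the topic, servers, and 60s interval are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCheckpointingJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // with checkpointing enabled, the Kafka source stores its reading
        // offsets in each checkpoint and restores them after a failure
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "example-group");

        env.addSource(new FlinkKafkaConsumer<>(
                        "example-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("offset-checkpointing-example");
    }
}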
Hi,
There are a few things to point out about your example:
1. The CoFlatMapFunction is probably executed in parallel. The
configuration is only applied to one of the parallel function instances.
You probably want to broadcast the configuration changes to all function
instances. Have a look a
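A hedged, self-contained sketch of the broadcast approach; the Config and Event classes stand in for the poster's ControlEvent/JsValue types, and the filtering logic is an assumption:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class BroadcastConfigJob {

    public static class Config { public String allowedType = "A"; }
    public static class Event  { public String type = "A"; public String payload = "hello"; }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Config> configStream = env.fromElements(new Config());
        DataStream<Event>  eventStream  = env.fromElements(new Event());

        eventStream
                // broadcast() sends every config update to ALL parallel
                // instances of the co-function, not just one of them
                .connect(configStream.broadcast())
                .flatMap(new CoFlatMapFunction<Event, Config, Event>() {
                    private Config current; // latest config on this instance

                    @Override
                    public void flatMap1(Event event, Collector<Event> out) {
                        if (current == null || event.type.equals(current.allowedType)) {
                            out.collect(event);
                        }
                    }

                    @Override
                    public void flatMap2(Config config, Collector<Event> out) {
                        current = config;
                    }
                })
                .print();

        env.execute("broadcast-config-example");
    }
}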
Hi guys,
I want to merge 2 different streams: one is a config stream and the other is
the value JSON, to check against that config. It seems like CoFlatMapFunction
should be used.
Here is my sample:

val filterStream: ConnectedStreams[ControlEvent, JsValue] =
  specificControlStream.connect(eventS
Hi Yu,
When you register a DataStream as a Table, you can create a new attribute
that contains the event timestamp of the DataStream records.
For that, you would need to assign timestamps and generate watermarks
before registering the stream:
FlinkKafkaConsumer kafkaConsumer =
    new FlinkKafkaConsumer<>(
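A fuller, hedged reconstruction of this snippet, assuming Flink 1.8's Java Table API, a record layout of "<epochMillis>,<payload>", and a 5-second out-of-orderness bound:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class EventTimeTableJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        FlinkKafkaConsumer<String> kafkaConsumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);

        DataStream<Tuple2<Long, String>> stream = env
                .addSource(kafkaConsumer)
                // assumed record layout: "<epochMillis>,<payload>"
                .map(value -> {
                    String[] parts = value.split(",", 2);
                    return Tuple2.of(Long.parseLong(parts[0]), parts[1]);
                })
                .returns(Types.TUPLE(Types.LONG, Types.STRING))
                // assign timestamps and generate watermarks BEFORE registering
                .assignTimestampsAndWatermarks(
                        new BoundedOutOfOrdernessTimestampExtractor<Tuple2<Long, String>>(
                                Time.seconds(5)) {
                            @Override
                            public long extractTimestamp(Tuple2<Long, String> element) {
                                return element.f0;
                            }
                        });

        // appending "rowtime.rowtime" exposes the record timestamp
        // as an event-time attribute of the registered table
        tEnv.registerDataStream("events", stream, "ts, payload, rowtime.rowtime");
    }
}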