Cool, thanks! Those are great options and just what I was looking for.
On Tue, Apr 9, 2019 at 12:42 AM Timothy Victor wrote:
> One approach I use is to write the git commit sha to the jar's manifest
> while compiling it (I don't use semantic versioning, but rather the commit
> sha).
>
> Then at runtime
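A minimal sketch of the runtime half of that approach, reading the sha back
out of the jar's manifest (the attribute name Git-Commit-Sha is an assumption
and must match whatever key the build writes into the manifest):

import java.io.InputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public final class BuildInfo {
    public static String commitSha() throws Exception {
        // Note: on a multi-jar classpath this may need to iterate
        // ClassLoader.getResources("META-INF/MANIFEST.MF") to find the right jar.
        try (InputStream in =
                 BuildInfo.class.getResourceAsStream("/META-INF/MANIFEST.MF")) {
            Attributes attrs = new Manifest(in).getMainAttributes();
            return attrs.getValue("Git-Commit-Sha"); // hypothetical attribute name
        }
    }
}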
Hi,
1. From the doc[1], a Watermark(t) declares that event time has reached time t
in that stream, meaning that there should be no more elements from the
stream with a timestamp t' <= t (i.e. events with timestamps older than or
equal to the watermark). So I think it might be counterintuitive that generating
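For reference, a minimal sketch of a periodic watermark generator under those
semantics, using the pre-1.11 API (the MyEvent type, its getTimestamp()
accessor, and the 3.5 s bound are assumptions):

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class BoundedLatenessAssigner implements AssignerWithPeriodicWatermarks<MyEvent> {
    private static final long MAX_OUT_OF_ORDERNESS = 3_500L; // assumed 3.5 s bound
    private long currentMaxTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS;

    @Override
    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
        long ts = element.getTimestamp(); // hypothetical accessor
        currentMaxTimestamp = Math.max(ts, currentMaxTimestamp);
        return ts;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // Declares: no more elements with timestamp <= (max seen - bound).
        return new Watermark(currentMaxTimestamp - MAX_OUT_OF_ORDERNESS);
    }
}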
Hi,
Could you jstack the downstream Task (the Window) and have a look at what
the window operator is doing?
Best,
Guowei
On Wed, Apr 10, 2019 at 1:04 PM Rahul Jain wrote:
> We are also seeing something very similar. Looks like a bug.
>
> It seems to get stuck in LocalBufferPool forever and the job has to be
>
We are also seeing something very similar. Looks like a bug.
It seems to get stuck in LocalBufferPool forever and the job has to be
restarted.
Is anyone else facing this too?
On Tue, Apr 9, 2019 at 9:04 PM Indraneel R wrote:
> Hi,
>
> We are trying to run a very simple flink pipeline, which is
Any input on this UI behavior?
Thanks,
Jins
From: Timothy Victor
Date: Monday, April 8, 2019 at 10:47 AM
To: Jins George
Cc: user
Subject: Re: Flink 1.7.2 UI : Jobs removed from Completed Jobs section
I face the same issue in Flink 1.7.1.
Would be good to know a solution.
Tim
On Mon, Apr
Hi,
I have created a TimestampAssigner as follows.
I want to use monitoring.getEventTimestamp() with event-time processing
and collect aggregated stats over time window intervals of 5 secs, 5 mins,
etc. Is this the right way to create the TimeWaterMarkAssigner with a bound?
I want to collect t
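(The assigner code itself is cut off in the archive; a bounded-out-of-orderness
version along the lines described might look like the sketch below, where the
Monitoring type and the 10-second bound are assumptions.)

import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class MonitoringTimestampAssigner
        extends BoundedOutOfOrdernessTimestampExtractor<Monitoring> {

    public MonitoringTimestampAssigner() {
        super(Time.seconds(10)); // assumed out-of-orderness bound
    }

    @Override
    public long extractTimestamp(Monitoring monitoring) {
        // Event time in epoch millis, as in the question above.
        return monitoring.getEventTimestamp();
    }
}

It would then be attached via
stream.assignTimestampsAndWatermarks(new MonitoringTimestampAssigner())
ahead of the 5-second / 5-minute windows.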
Hi Konstantin,
Thank you! This is exactly what I was looking for.
Thanks
Kevin
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Kevin,
there are no performance downsides to using Flink POJOs. You are just
limited in the types you can use (e.g. you cannot use Collections).
In Flink 1.7 you might want to use Avro (SpecificRecord) for your state
objects to benefit from Flink's built-in state schema evolution
capabilities.
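For reference, a minimal sketch of a type that satisfies Flink's POJO rules
(public class, public no-arg constructor, fields that are public or have
getters/setters); the field names are illustrative:

public class SensorReading {
    public String sensorId;
    public long timestamp;
    public double value;

    // Flink's POJO serializer requires a public no-argument constructor.
    public SensorReading() {}
}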
Hi all,
I was looking at
https://ci.apache.org/projects/flink/flink-docs-stable/dev/types_serialization.html,
and https://www.youtube.com/watch?v=euFMWFDThiE to improve our state
management and back pressure.
Both of these resources mention ensuring objects used in the flow are valid
POJOs to avoid
Hi, Andrey
I think TTL state has another scenario: simulating a sliding window with a
process function. A user can define a state that stores the data of the
latest 1 day, and trigger a calculation on that state every 5 min. It is an
operator similar to a sliding window, but I think it is more efficient than t
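A rough sketch of the state-declaration side of that idea (the Event type and
names are illustrative), letting TTL expire entries older than one day instead
of a window retaining them:

import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

// Keyed state used inside a ProcessFunction; a processing-time timer firing
// every 5 minutes would then run the calculation over whatever has not expired.
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(1))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    .build();

ListStateDescriptor<Event> lastDay =
    new ListStateDescriptor<>("lastDayEvents", Event.class); // Event is hypothetical
lastDay.enableTimeToLive(ttlConfig);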
Thanks Till, I will start separate threads for the two issues we are
experiencing.
Cheers,
Bruno
On Mon, 8 Apr 2019 at 15:27, Till Rohrmann wrote:
> Hi Bruno,
>
> first of all good to hear that you could resolve some of the problems.
>
> Slots get removed if a TaskManager gets unregistered from
Many thanks for your quick reply.
1) My implementation has no commits. All commits are done in the
FlinkKafkaProducer class, I envisage.
KeyedSerializationSchemaWrapper<String> keyedSerializationSchemaWrapper =
    new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema());
new FlinkKafkaProducer("t
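(The constructor call is cut off above; completed with an assumed topic name,
properties, and delivery semantic, it might look like this sketch for the
universal Kafka connector:)

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // assumption

// With EXACTLY_ONCE, the Kafka transaction commits are indeed handled
// inside FlinkKafkaProducer, tied to Flink's checkpoints.
FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
    "topic", // assumed topic name
    new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
    props,
    FlinkKafkaProducer.Semantic.EXACTLY_ONCE);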
Hello There,
We are using Flink SQL to build a streaming pipeline which reads data from
Kafka, aggregates the data, and finally sinks to Elasticsearch.
For the table sink to Elasticsearch, we would like to create one index per day
(e.g. index1-2019-04-08, index1-2019-04-09…). Is this supported?
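One way this is commonly handled, assuming a drop from the Table API to the
DataStream Elasticsearch connector (I am not certain the SQL sink supports
index patterns here), is to compute the index name per record in an
ElasticsearchSinkFunction; the Event type, its timestamp field, and the
toJsonMap() helper below are hypothetical:

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class DailyIndexSink implements ElasticsearchSinkFunction<Event> {
    private static final DateTimeFormatter DAY =
        DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    @Override
    public void process(Event event, RuntimeContext ctx, RequestIndexer indexer) {
        // e.g. index1-2019-04-08, derived from the record's own event time
        String index = "index1-" + DAY.format(Instant.ofEpochMilli(event.timestamp));
        IndexRequest request = Requests.indexRequest()
            .index(index)
            .source(event.toJsonMap());
        indexer.add(request);
    }
}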
Hi Shahar,
Thanks!
The UDAGG approach would be quite manual; you could not reuse the
built-in functions.
There are several ways to achieve this. One approach could be to have a
map-based UDAGG for each type of aggregation that you'd like to support
(SUM, COUNT, ...).
Let's say we have a sumM
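A minimal sketch of where that truncated example seems to be heading, i.e. a
map-based SUM UDAGG (the class name and key/value types are assumptions):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.table.functions.AggregateFunction;

// Sums Long values per String key; the accumulator doubles as the result.
public class SumMap extends AggregateFunction<Map<String, Long>, Map<String, Long>> {

    @Override
    public Map<String, Long> createAccumulator() {
        return new HashMap<>();
    }

    @Override
    public Map<String, Long> getValue(Map<String, Long> acc) {
        return acc;
    }

    // Discovered reflectively by the Table API.
    public void accumulate(Map<String, Long> acc, String key, Long value) {
        acc.merge(key, value, Long::sum);
    }
}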
Is there a way to add a gauge to a Flink serializer? I’d like to calculate
and expose the total time to process a given tuple, including the
serialisation/deserialisation time.
Or would it be a better idea to wrap the concrete sink function (e.g. the
Kafka producer) with an ‘instrumented sink’ adapter
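For the second option, a rough sketch of such an adapter (the metric name is
illustrative, and a production version would also forward the wrapped sink's
open/close and checkpointing interfaces):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class InstrumentedSink<T> extends RichSinkFunction<T> {
    private final SinkFunction<T> delegate;
    private transient volatile long lastInvokeNanos;

    public InstrumentedSink(SinkFunction<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Exposes the latency of the most recent invoke() call.
        getRuntimeContext().getMetricGroup()
            .gauge("lastInvokeNanos", (Gauge<Long>) () -> lastInvokeNanos);
    }

    @Override
    public void invoke(T value, Context context) throws Exception {
        long start = System.nanoTime();
        delegate.invoke(value, context);
        lastInvokeNanos = System.nanoTime() - start;
    }
}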
+1 to drop it.
On Sat, Apr 6, 2019 at 6:51 AM Stephan Ewen wrote:
> +1 to drop it
>
> Previously released versions are still available and compatible with newer
> Flink versions anyway.
>
> On Fri, Apr 5, 2019 at 2:12 PM Bowen Li wrote:
>
> > +1 for dropping elasticsearch 1 connector.
> >
> > On Wed, Apr 3,
I think so; I just wanted to bring it up again because the question was raised.
> On 8. Apr 2019, at 22:56, Elias Levy wrote:
>
> Hasn't this always been the end goal? It's certainly what we have been
> waiting on for jobs with very large TTLed state. Beyond timer storage,
> timer processing to