Hi,
I am using Flink 1.2 and reading a data stream from Kafka (using
FlinkKafkaConsumer08). I want to connect this data stream with another
stream (a control stream) so as to do some filtering on the fly. After
filtering, I am applying a window function (tumbling/sliding event window)
along with ...
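Roughly what I have so far; a sketch where ControlMessage, the topic name,
and the window size are placeholders, env/props/controlStream are assumed to
be set up already, and the rule set lives in plain (non-checkpointed)
operator memory:

    DataStream<String> events = env.addSource(
        new FlinkKafkaConsumer08<>("events", new SimpleStringSchema(), props));

    events
        .connect(controlStream.broadcast())   // controlStream carries filter rules
        .flatMap(new CoFlatMapFunction<String, ControlMessage, String>() {
            private final Set<String> blocked = new HashSet<>();

            @Override
            public void flatMap1(String event, Collector<String> out) {
                if (!blocked.contains(event)) {
                    out.collect(event);       // filtering on the fly
                }
            }

            @Override
            public void flatMap2(ControlMessage msg, Collector<String> out) {
                blocked.add(msg.getKey());    // update rules from the control stream
            }
        })
        .keyBy(value -> value)
        .timeWindow(Time.minutes(1))          // tumbling event-time window
        .apply(new MyWindowFunction());       // MyWindowFunction: my own function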
Thanks Fabian, I’m pretty sure you are correct here. I can see in the Metric
view that the currentLowWaterMark is set to MIN_VALUE by the looks of it, so
watermarks are not being emitted at all until the end. It stays that way all
the way through the job.
I'm not sure why this is the case. I've ...
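For reference, the kind of assigner I believe is required for the metric to
advance; a sketch with an illustrative event type and timestamp field:

    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    stream.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
            @Override
            public long extractTimestamp(MyEvent event) {
                // without an assigner like this in the job, the
                // currentLowWaterMark metric stays at Long.MIN_VALUE
                return event.getTimestampMillis();
            }
        });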
The documentation seems to indicate that there is a flatten method
available in the SQL language interface (in the table of available
methods), or, alternatively, via the '*' character somehow (in the text
above the table).
Yet I cannot flatten a POJO type, nor can I find any sufficient
documentation ...
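What I tried, based on my reading of those docs; the POJO column name is
simplified, and I have not been able to confirm this syntax on 1.2:

    // 'user' is a POJO column of the table
    Table orders = tableEnv.fromDataStream(stream, "user, amount");

    // the documented composite-type flattening
    Table flat = orders.select("user.flatten(), amount");

    // the '*' variant mentioned above the table would be the SQL side,
    // e.g. SELECT user.* FROM Orders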
Hi Max,
Belated response, but this looks to be the same problem I am working to
solve in Gelly with graph data in FLINK-3695 [0]. These arrays allow for
object reuse. The interface is here [1]. Additional Value types are easy to
add, but Long, Int, and String are the most common in Gelly.
Suggestions are welcome ...
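For anyone unfamiliar with the Value types, the object-reuse pattern they
(and these arrays) are built around looks roughly like this; the surrounding
method is illustrative:

    // one mutable instance per operator, reused for every record
    private final LongValue reuse = new LongValue();

    public void emit(long vertexId, Collector<LongValue> out) {
        reuse.setValue(vertexId); // mutate in place instead of boxing a new Long
        out.collect(reuse);       // safe as long as downstream copies or serializes
    }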
Hi Nico,
You might check Fabian's answer on a similar question I posted previously
on the mailing list; it can be helpful:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/BoundedOutOfOrdernessTimestampExtractor-and-allowedlateness-td9583.html
Best,
Yassine
On Mar 15, 2017 1...
Hi,
I struggle a bit to understand the difference between
BoundedOutOfOrdernessTimestampExtractor and the allowed lateness function
of a window...
As I understand it, when I use BoundedOutOfOrdernessTimestampExtractor, the
watermark lags behind the real event time of the stream by
maxOutOfOrderness ...
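In code, I picture the two composing like this; types, key, and durations
are made up:

    stream
        // (1) the watermark trails the highest event time seen by 10s, so
        //     events up to 10s out of order are handled before the window fires
        .assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
                @Override
                public long extractTimestamp(MyEvent e) {
                    return e.getTimestamp();
                }
            })
        .keyBy("userId")
        .timeWindow(Time.minutes(1))
        // (2) allowed lateness keeps the window state for 5 more minutes after
        //     the watermark passes the window end, re-firing on late arrivals
        .allowedLateness(Time.minutes(5))
        .apply(new MyWindowFunction());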
Thanks, Till.
It worked after updating the property in the flink-conf.yaml file.
Thanks,
Kasif
From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: Wednesday, March 15, 2017 9:08 PM
To: user@flink.apache.org
Subject: Re: flink akka OversizedPayloadException error
Hi Kasif,
using the akka.framesize ...
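For the archive, the change in question, in flink-conf.yaml; the value below
is the default and purely illustrative:

    # maximum Akka message size; raise it if job submission fails with
    # an OversizedPayloadException
    akka.framesize: 10485760b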
Hi Vinay!
Savepoints also call the same problematic RocksDB function, unfortunately.
We will have a fix next month: we either (1) get a patched RocksDB version
or (2) implement a different pattern for ListState in Flink.
(1) would be the better solution, so we are waiting for a response from ...
Hi Stephan,
Thank you for making me aware of this.
Yes, I am using a window without a reduce function (an apply function). The
discussion happening on JIRA is exactly what I am observing: consistent
failure of checkpoints after some time, after which the stream halts.
We want to go live next month; not sure ...
Hi,
I have a similar-sounding use case and just yesterday was
experimenting with this approach:
Use two separate streams: one for model events, one for data events.
Connect these two, key the resulting stream, and then use a
RichCoFlatMapFunction to ensure that each data event is enriched with
the latest ...
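Roughly like this, where Model, Event, and EnrichedEvent stand in for my own
types, and the latest model is kept per key in managed state:

    public static class EnrichFn
            extends RichCoFlatMapFunction<Event, Model, EnrichedEvent> {

        private transient ValueState<Model> latestModel;

        @Override
        public void open(Configuration parameters) {
            latestModel = getRuntimeContext().getState(
                new ValueStateDescriptor<>("latest-model", Model.class));
        }

        @Override
        public void flatMap1(Event e, Collector<EnrichedEvent> out) throws Exception {
            Model m = latestModel.value(); // null until the first model arrives
            if (m != null) {
                out.collect(new EnrichedEvent(e, m));
            }
        }

        @Override
        public void flatMap2(Model m, Collector<EnrichedEvent> out) throws Exception {
            latestModel.update(m);         // keep the latest model for this key
        }
    }

    // wiring:
    // dataEvents.keyBy("key").connect(modelEvents.keyBy("key")).flatMap(new EnrichFn());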
Hi Kasif,
using the akka.framesize configuration option is the right way to solve the
problem with large Akka messages. I've tested what you've described with
the latest 1.2 release branch, and it worked for me.
In order to track down your problem, I need some more information. First of
all, you can ...
Hi @all,
I came back to this issue today...
@Robert:
The "com/codahale/metrics/Metric" class was not available in the user code
jar. Even after adding the metrics class to the build-jar profile of the POM
file, more "class not found" errors occurred. So the only solution was to add
the whole dependency ...
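i.e. something along these lines in the pom, using the coordinates that ship
the com.codahale.metrics package (version illustrative):

    <dependency>
        <groupId>io.dropwizard.metrics</groupId>
        <artifactId>metrics-core</artifactId>
        <version>3.1.0</version>
    </dependency>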
I've put it also on our Twitter account:
https://twitter.com/ApacheFlink/status/842015062667755521
On Wed, Mar 15, 2017 at 2:19 PM, Martin Neumann wrote:
> I think this is easier done in a straw poll than in an email conversation.
> I created one at: http://www.strawpoll.me/12535073
> (Note that you ...
Hello all,
I've started thinking about online learning in Flink, and one of the issues
that has come up in other frameworks is the ability to prioritize "control"
events over "data" events in iterations.
To give an example, say we develop an ML model that ingests events in
parallel, performs an aggregation ...
Hello,
We have added serializer code which registers all the schemas with the
ExecutionEnvironment and is called before execute(). This is to make sure that
all the Avro schemas are pre-registered and cached in the executors before
actual execution begins. Now, while submitting the job, we are getting ...
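The registration code is essentially along these lines; the Avro class name
is simplified:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // called before execute(): register the Avro type up front so executors
    // do not have to build serializer state lazily during the run
    env.registerType(MyAvroRecord.class);

    // alternatively, pin a specific Kryo serializer for the type:
    // env.getConfig().registerTypeWithKryoSerializer(MyAvroRecord.class, MyKryoSerializer.class);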
Thank you, Fabian, for your answer.
Best,
Yassine
On Mar 14, 2017 09:31, "Fabian Hueske" wrote:
> Hi Yassine,
>
> as far as I know, the processElement() and onTimer() methods are not
> called concurrently.
> This is definitely true for event-time timers (they are triggered by
> watermarks, which are ...
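That guarantee is what makes the usual pattern safe without any locking,
e.g. (types simplified; on 1.2 the class to extend is RichProcessFunction,
the stream must be keyed for state, and event-time timestamps assigned):

    public class TimeoutFn extends ProcessFunction<Event, Alert> {

        private transient ValueState<Long> lastSeen;

        @Override
        public void open(Configuration parameters) {
            lastSeen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-seen", Long.class));
        }

        @Override
        public void processElement(Event e, Context ctx, Collector<Alert> out)
                throws Exception {
            // no synchronization around lastSeen: onTimer() is never
            // invoked concurrently with processElement()
            lastSeen.update(ctx.timestamp());
            ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60_000L);
        }

        @Override
        public void onTimer(long ts, OnTimerContext ctx, Collector<Alert> out)
                throws Exception {
            Long seen = lastSeen.value();
            if (seen != null && ts >= seen + 60_000L) {
                out.collect(new Alert(ts)); // no activity for a minute
            }
        }
    }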
Hi all!
I would like to get a feeling for how much Java 7 is still being used among
Flink users.
At some point it would be great to drop Java 7 support and make use of
Java 8's new features, but first we would need to get a feeling for how much
Java 7 is still used.
Would be happy if users on Java 7 ...