Hi everyone,
I have the following issue with Flink (0.10) and Kafka.
I am using a very simple TimestampExtractor like [1], which just
extracts a millis timestamp from a POJO. In my streaming job, I read in
these POJOs from Kafka using the FlinkKafkaConsumer082 like this:
stream = env.addSource(n
a dummy mapper after the Kafka source that just prints
> the element and forwards it? To see if the elements come with a good
> timestamp from Kafka.
>
> Cheers,
> Aljoscha
>> On 15 Nov 2015, at 22:55, Konstantin Knauf
>> wrote:
>>
>> Hi everyone,
>>
s, thereby
> finishing the whole program.
>
> Cheers,
> Aljoscha
>> On 16 Nov 2015, at 10:42, Gyula Fóra wrote:
>>
>> Could this part of the extractor be the problem Aljoscha?
>>
>> @Override
>> public long getCurrentWatermark() {
>>
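For readers following along: the extractor in question is only partially quoted above, but the failure mode being discussed can be sketched independently of the Flink API. The following plain-Python simulation (all names hypothetical, not Flink code) shows why a getCurrentWatermark() that never advances keeps event-time windows buffered forever:

```python
# Minimal simulation (not the Flink API): an event-time window fires only
# once the watermark passes its end timestamp. If getCurrentWatermark()
# never advances (e.g. always returns Long.MIN_VALUE), windows never fire.

LONG_MIN = -2**63

def fired_windows(events, window_size, current_watermark_fn):
    """Buffer event timestamps into tumbling windows; fire a window when
    the watermark (asked from current_watermark_fn after each event)
    reaches the window's end."""
    windows, fired = {}, []
    max_ts = LONG_MIN
    for ts in events:
        max_ts = max(max_ts, ts)
        start = ts - ts % window_size
        windows.setdefault(start, []).append(ts)
        watermark = current_watermark_fn(max_ts)
        for s in sorted(list(windows)):
            if s + window_size - 1 <= watermark:
                fired.append(windows.pop(s))
    return fired

events = [1000, 2500, 4200, 7100]

# A stalling extractor: the watermark never moves, so nothing ever fires.
assert fired_windows(events, 3000, lambda _max: LONG_MIN) == []

# A healthy extractor: the watermark trails the max timestamp slightly.
assert fired_windows(events, 3000, lambda max_ts: max_ts - 100) == [[1000, 2500], [4200]]
```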
a look at it? to aljoscha at
> apache.org.
>
> Cheers,
> Aljoscha
>> On 16 Nov 2015, at 13:05, Konstantin Knauf
>> wrote:
>>
>> Hi Aljoscha,
>>
>> ok, now I at least understand why it works with fromElements(...). For
>> the rest I am not so sur
on calls). In this
> setting the watermarks are directly forwarded to operators without going
> through the logic I mentioned above.
>
> Cheers,
> Aljoscha
>> On 16 Nov 2015, at 18:13, Konstantin Knauf
>> wrote:
>>
>> Hi Aljoscha,
>>
>> I
>> given number of events have been received.
>>
>> Is it currently possible to do this with the current combination of window
>> assigners and triggers? I am happy to write custom triggers etc, but
>> wanted to make sure it wasn't already available before going down th
it
>> triggered, but instead to create a new window for it and have the old window
>> to fire and purge on event time timeout.
>>
>> Take a look and see if it will be useful -
>> https://bitbucket.org/snippets/vstoyak/o9Rqp
>>
>> Vladimir
>>
>>
lem with picking up the Hadoop config. Can you look
> into the logs to check whether the configuration is picked up? Change the log
> settings to DEBUG in log/log4j.properties for this. And can you provide the
> complete stack trace?
>
> – Ufuk
>
>
--
Konstantin Knauf * konsta
on.
> Sorry, I know that's not very intuitive, but in Hadoop the settings for
> in different files (hdfs|yarn|core)-site.xml.
>
>
> On Sat, Nov 21, 2015 at 12:48 PM, Konstantin Knauf
> mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Ufuk,
>
>
recommend at least a Flink version for
> Hadoop 2.3.0
>
>
> On Sat, Nov 21, 2015 at 3:13 PM, Konstantin Knauf
> mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Robert,
>
> thanks a lot, it's working now. Actually, it also says "directory"
e an issue with starting Flink 0.10.0 for Hadoop
> 2.7.0. We'll fix it with Flink 0.10.1.
> But if everything is working fine ... it might make sense not to change
> it now ("never change a running system").
>
>
> On Sat, Nov 21, 2015 at 3:24 PM, Konstantin Kna
Hi everyone,
me again :) Let's say you have a stream, and for every window and key
you compute some aggregate value, like this:
DataStream.keyBy(..)
.timeWindow(..)
.apply(...)
Now I want to get the maximum aggregate value for every window over the
keys. This feels like a pr
e window start and end time from the TimeWindow
> parameter of the WindowFunction and key the stream either by start or
> end time and apply a ReduceFunction on the keyed stream.
>
> Best, Fabian
>
> 2015-11-23 8:41 GMT+01:00 Konstantin Knauf <mailto:konstantin.kn...@tngtech
should do what you are looking for:
>
> DataStream
> .keyBy(_._1) // key by original key
> .timeWindow(..)
> .apply(...) // extract window end time: (origKey, time, agg)
> .keyBy(_._2) // key by time field
> .maxBy(_._3) // value
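Fabian's two-step pipeline above can be simulated in plain Python (not the Flink API; the function name and the per-key sum aggregate are illustrative) to show the intended result, a single maximum per window across all keys:

```python
# Sketch of the two-step pipeline (not the Flink API):
# step 1 aggregates per (key, window), step 2 re-keys by window end time
# and keeps the maximum aggregate per window.

from collections import defaultdict

def per_window_max(events, window_size):
    """events: (key, timestamp, value) triples. Returns
    {window_end: (key, max_agg)} where agg is the per-key sum inside
    the tumbling window."""
    # Step 1: .keyBy(key).timeWindow(..).apply(sum) -> (key, window_end, agg)
    aggregates = defaultdict(int)
    for key, ts, value in events:
        window_end = ts - ts % window_size + window_size
        aggregates[(key, window_end)] += value
    # Step 2: .keyBy(window_end).maxBy(agg)
    result = {}
    for (key, window_end), agg in aggregates.items():
        if window_end not in result or agg > result[window_end][1]:
            result[window_end] = (key, agg)
    return result

events = [("a", 100, 3), ("b", 200, 5), ("a", 900, 4), ("b", 1200, 1)]
assert per_window_max(events, 1000) == {1000: ("a", 7), 2000: ("b", 1)}
```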
. Could you maybe
> send me example code (and example data if it is necessary to reproduce the
> problem.)? This would really help me pinpoint the problem.
>
> Cheers,
> Aljoscha
>> On 17 Nov 2015, at 21:42, Konstantin Knauf
>> wrote:
>>
>> Hi Aljoscha,
>>
only for
parallelism 1, with "TimestampExtractor2" it works regardless of the
parallelism. Run from the IDE.
Let me know if you need anything else.
Cheers,
Konstantin
[1] https://gist.github.com/knaufk/d57b5c3c7db576f3350d
On 25.11.2015 21:15, Konstantin Knauf wrote:
> Hi Aljoscha
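One plausible mechanism for "works only with parallelism 1" (an assumption here, not confirmed in the quoted thread): with several source subtasks, a downstream operator's event-time clock is the minimum of the watermarks received on its input channels, so a single channel whose extractor never advances holds the whole operator back. A minimal sketch:

```python
# Sketch of multi-input watermark propagation (not the Flink API): a
# downstream operator's event-time clock is the MINIMUM of the watermarks
# received on its input channels, so one idle/stalled channel stalls it.

LONG_MIN = -2**63

def operator_watermark(channel_watermarks):
    return min(channel_watermarks)

# Parallelism 1: a single input channel, the watermark advances freely.
assert operator_watermark([5000]) == 5000

# Parallelism 3 with one channel whose watermark never advances: the
# operator watermark stays at the initial value, so event-time windows
# downstream never fire.
assert operator_watermark([5000, 7200, LONG_MIN]) == LONG_MIN
```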
there a configuration option to disable this behaviour, such that
buffered events remaining in windows are just discarded?
In our application it is critical that only those events which were
explicitly fired are emitted from the windows.
Cheers and thank you,
Konstantin
--
Konstantin Knauf
Hi everyone,
if a DataStream is created with .fromElements(...) all windows emit all
buffered records at the end of the stream. I have two questions about this:
1) Is this only the case for streams created with .fromElements() or
does this happen in any streaming application on shutdown?
2) Is t
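For context on question 1): when a bounded source such as fromElements() reaches its end, Flink emits a final watermark of Long.MAX_VALUE, which fires every pending event-time window. A plain-Python sketch of that semantics (not the Flink API; the function name is illustrative):

```python
# Sketch (not the Flink API): a bounded source such as fromElements()
# sends a final watermark of Long.MAX_VALUE when it finishes, which makes
# every pending event-time window fire. This is why buffered records are
# emitted "at the end of the stream".

LONG_MAX = 2**63 - 1

def drain_on_final_watermark(pending_windows, final_watermark=LONG_MAX):
    """pending_windows: {window_end: [elements]}. Returns the windows
    fired by the given watermark, in window order."""
    return [elems for end, elems in sorted(pending_windows.items())
            if end <= final_watermark]

pending = {3000: ["a", "b"], 6000: ["c"]}
# An ordinary watermark fires only the windows it has passed ...
assert drain_on_final_watermark(pending, final_watermark=3500) == [["a", "b"]]
# ... but the end-of-stream watermark fires everything still buffered.
assert drain_on_final_watermark(pending) == [["a", "b"], ["c"]]
```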
This must be
> an oversight on our part. I’ll make sure that the 1.0 release will have the
> correct behavior.
>> On 17 Feb 2016, at 16:35, Konstantin Knauf
>> wrote:
>>
>> Hi everyone,
>>
>> if a DataStream is created with .fromElements(...) all win
would be a more natural behavior. This must be
> an oversight on our part. I’ll make sure that the 1.0 release will have the
> correct behavior.
>
>> On 17 Feb 2016, at 16:18, Konstantin Knauf
>> wrote:
>>
>> Hi everyone,
>>
>> if a DataStream is crea
t using collect.
> Eg:
> *DataSet> counts =* *data.flatMap(new
> Tokenizer());*
>
> I want a new DataSet containing 10 elements of *counts*.
>
> And, what would be the way to retrieve individual elements of DataSet
> without using list via collect?
>
>
> Best Rega
arting it with the configuration
> data for the individual job. Does this sound reasonable?
>
>
> If any of these questions are answered elsewhere I apologize. I couldn't
> find any of this being discussed elsewhere.
>
> Thanks for your help.
>
> David
--
Ko
perator. Do you know of any
> examples where it is doing something similar? Quickly looking I am not
> seeing it used anywhere outside of tests where it is largely just
> unifying the data coming in.
>
> I think accumulators will at least be a reasonable starting place for us
(basically just a timeout starting
with the first element in window instead of the last element in the window).
Cheers,
Konstantin
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges
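The trigger semantics described above (fire after a given number of elements, or after a timeout measured from the first element of the window rather than the last) can be sketched outside of Flink's Trigger API like this (all names hypothetical):

```python
# Sketch of a count-or-timeout trigger (not Flink's Trigger API): fire a
# window either when it has collected max_count elements or when a timeout
# measured from the FIRST element (rather than the last) has elapsed.

def run_trigger(events, max_count, timeout):
    """events: (processing_time, element) pairs in time order.
    Returns the list of fired batches."""
    fired, buffer, first_ts = [], [], None
    for now, element in events:
        if first_ts is not None and now - first_ts >= timeout:
            fired.append(buffer)           # timeout from the first element
            buffer, first_ts = [], None
        if first_ts is None:
            first_ts = now
        buffer.append(element)
        if len(buffer) >= max_count:
            fired.append(buffer)           # count trigger
            buffer, first_ts = [], None
    return fired

events = [(0, "a"), (1, "b"), (12, "c"), (13, "d"), (14, "e")]
# "a"/"b" are flushed by the timeout at t=12; "c"/"d"/"e" by the count.
assert run_trigger(events, max_count=3, timeout=10) == [["a", "b"], ["c", "d", "e"]]
```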
sands of timers
> for, say, time 15:30:03 it actually only saves one timer.
>
> I created a Jira Issue: https://issues.apache.org/jira/browse/FLINK-3669
>
> Cheers,
> Aljoscha
>
> On Thu, 24 Mar 2016 at 11:30 Konstantin Knauf
> mailto:konstantin.kn...@tngtech.com>> w
>>>>>
>>>>> From: Till Rohrmann [mailto:till.rohrm...@gmail.com
> <mailto:till.rohrm...@gmail.com>]
>>>>> Sent: mercredi 18 novembre 2015 18:01
>>>>> To: user@flink.apache.org <mailto:user@flink.apache.org>
>>>>> Subject: Re: YARN High Availabili
> wrote:
>
> Hey Konstantin,
>
> just looked at the logs and the cluster is started, but the job is
> indeed never submitted.
>
> I've forwarded this to Robert, because he is familiar with the YARN
> client. I will look into how t
es, the checkpoint files are
usually not cleaned up. So some housekeeping might be necessary.
> Thanks,
> Zach
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/internals/stream_checkpointing.html
> [2] http://arxiv.org/abs/1506.08603
>
--
Konstantin Kna
>
> On Tue, Apr 5, 2016 at 8:54 PM, Konstantin Knauf
> wrote:
>> To my knowledge Flink takes care of deleting old checkpoints (I think it
>> says so in the documentation about savepoints.). In my experience
>> though, if a job is cancelled or crashes, the checkpoint files
Hi everyone,
thanks to Robert, I found the problem.
I was setting "recovery.zookeeper.path.root" on the command line with
-yD. Apparently this is currently not supported. You need to set the
parameter in flink-conf.yaml instead.
Cheers,
Konstantin
On 05.04.2016 12:52, Konstantin Knauf w
Hi everyone,
my experience with the RocksDBStateBackend has left me a little bit
confused. Maybe you guys can confirm that my experience is the expected
behaviour ;):
I have run a "performance test" twice, once with the FsStateBackend and once
with the RocksDBStateBackend for comparison. In this particular test th
this in Flink?
>
>
> Thanks!
>
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Sitz: Unterföhring * Amtsgericht München * HRB 135082
can go into details, I'm just writing this quickly before calling it a
> day. :-)
>
> Cheers,
> Aljoscha
>
> On Tue, 12 Apr 2016 at 18:21 Konstantin Knauf
> mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi everyone,
>
> my experience with Ro
mode?
Cheers,
Konstantin
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Christoph Stock, Dr. Robert Dahlke
Sitz: Unterföhring * Amtsgericht München * HRB 135082
nt.max-retry-attempts: 5
So I would have expected a timeout of around 120,000ms. 50,000ms is our
configured akka.watch.heartbeat.interval. Is this value used instead here?
Cheers,
Konstantin
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH,
ve.2336050.n4.nabble.com/How-to-measure-Flink-performance-tp6741p6863.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive at
> Nabble.com.
>
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr.
.price, then show a message.
>
>
> Which is the (best) way of doing this? I am new using Flink and I am
> quite lost :)
>
>
> Thanks!
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Ge
Event 1 from website "Y" RIGHT NOW?
>
> JavaObjectY.price
>
>
> Compare both attributes
> Get a result depending on that comparison
>
> My java object doesn't have a timestamp, but I think I should use it right?
>
>
> Thanks!
>
>
>
>
make what I want clear, notice that the middle
> table doesn't need to be a table, it is just what I want and I don't
> have enough knowledge on Flink to know how to do it.
>
>
> Thanks for your time!
>
>
>
> 2016-05-26 20:33 GMT+02:00 K
d by the client when starting a yarn
session "./yarn-session.sh"
lo4j.properties: JobManager/Taskmanager Logs in Standalone-Mode. Not
used, when running on YARN.
Cheers,
Konstantin
--
Konstantin Knauf * konstantin.kn...@tngtech.com * +49-174-3413182
TNG Technology Consulting
e-flink-user-mailing-list-archive.2336050.n4.nabble.com/yarn-kill-container-due-to-running-beyond-physical-memory-limits-How-can-i-debug-memory-issue-tp7296p7317.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive at
> Nabble.com.
>
--
Konstantin Knauf * ko
And
> yes, I know that we had multiple discussions like this in the past but I'm
> trying to gauge the current sentiment.
>
> I'm cross-posting to the user-ml since this is important for both users
> and developers.
>
> Best,
> Aljoscha
>
> [1] https://issues.apache.org/jira/browse/FLINK-17260
>
>
>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
t;>>>
>>>> > enableCheckpointing()
>>>>
>>>> > isForceCheckpointing()
>>>>
>>>> >
>>>>
>>>> > readFile(FileInputFormat inputFormat,String
>>>>
>>>> > filePath,FileProcessingMode watchType,long interval, FilePathFilter
>>>>
>>>> > filter)
>>>>
>>>> > readFileStream(...)
>>>>
>>>> >
>>>>
>>>> > socketTextStream(String hostname, int port, char delimiter, long
>>>> maxRetry)
>>>>
>>>> > socketTextStream(String hostname, int port, char delimiter)
>>>>
>>>> >
>>>>
>>>> > There are more, like the (get)/setNumberOfExecutionRetries() that were
>>>>
>>>> > deprecated long ago, but I have not investigated to see if they are
>>>>
>>>> > actually easy to remove.
>>>>
>>>> >
>>>>
>>>> > Cheers,
>>>>
>>>> > Kostas
>>>>
>>>> >
>>>>
>>>> > On Mon, Aug 17, 2020 at 10:53 AM Dawid Wysakowicz
>>>>
>>>> > wrote:
>>>>
>>>> >
>>>>
>>>> > Hi devs and users,
>>>>
>>>> >
>>>>
>>>> > I wanted to ask you what do you think about removing some of the
>>>> deprecated APIs around the DataStream API.
>>>>
>>>> >
>>>>
>>>> > The APIs I have in mind are:
>>>>
>>>> >
>>>>
>>>> > RuntimeContext#getAllAccumulators (deprecated in 0.10)
>>>>
>>>> > DataStream#fold and all related classes and methods such as
>>>> FoldFunction, FoldingState, FoldingStateDescriptor ... (deprecated in
>>>> 1.3/1.4)
>>>>
>>>> > StreamExecutionEnvironment#setStateBackend(AbstractStateBackend)
>>>> (deprecated in 1.5)
>>>>
>>>> > DataStream#split (deprecated in 1.8)
>>>>
>>>> > Methods in (Connected)DataStream that specify keys as either indices
>>>> or field names such as DataStream#keyBy, DataStream#partitionCustom,
>>>> ConnectedStream#keyBy, (deprecated in 1.11)
>>>>
>>>> >
>>>>
>>>> > I think the first three should be straightforward. They are long
>>>> deprecated. The getAccumulators method is not used very often in my
>>>> opinion. The same applies to the DataStream#fold which additionally is not
>>>> very performant. Lastly the setStateBackend has an alternative with a class
>>>> from the AbstractStateBackend hierarchy, therefore it will be still code
>>>> compatible. Moreover if we remove the
>>>> #setStateBackend(AbstractStateBackend) we will get rid off warnings users
>>>> have right now when setting a statebackend as the correct method cannot be
>>>> used without an explicit casting.
>>>>
>>>> >
>>>>
>>>> > As for the DataStream#split I know there were some objections against
>>>> removing the #split method in the past. I still believe the output tags can
>>>> replace the split method already.
>>>>
>>>> >
>>>>
>>>> > The only problem in the last set of methods I propose to remove is
>>>> that they were deprecated only in the last release and those method were
>>>> only partially deprecated. Moreover some of the methods were not deprecated
>>>> in ConnectedStreams. Nevertheless I'd still be inclined to remove the
>>>> methods in this release.
>>>>
>>>> >
>>>>
>>>> > Let me know what do you think about it.
>>>>
>>>> >
>>>>
>>>> > Best,
>>>>
>>>> >
>>>>
>>>> > Dawid
>>>
>>>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
s://www.flink-forward.org/global-2020/conference-program
[24]
https://www.eventbrite.com/e/flink-forward-global-virtual-2020-tickets-113775477516#tickets
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
g-list-archive.1008284.n3.nabble.com/ANNOUNCE-New-PMC-member-Dian-Fu-tp44170p44240.html
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
782&index=2
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
k-forward.org/global-2020
[13]
https://www.ververica.com/blog/a-deep-dive-on-change-data-capture-with-flink-sql-during-flink-forward
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
https://twitter.com/FlinkForward/status/1306219099475902464
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
t;>> favor of the relatively recently introduced StreamingFileSink.
> >>>
> >>> For the sake of a clean and more manageable codebase, I propose to
> >>> remove this module for release-1.12, but of course we should see first
> >>> if there are any u
Hi Robert,
+1 to the plan you outlined. If we were to drop support in Flink 1.13+, we
would still support it in Flink 1.12- with bug fixes for some time so that
users have time to move on.
It would certainly be very interesting to hear from current Flink on Mesos
users, on how they see the evolut
MC-member-Zhu-Zhu-tp45418p45474.html
[21]
https://flink.apache.org/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html
[22]
https://flink.apache.org/news/2020/10/13/stateful-serverless-internals.html
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
ent-time-skewness-can-reduce-checkpoint-failures-and-task-manager-crashes
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
am into append
> records with change flag encoded.
> 2. Yes. It will replace records with the same key, i.e. upsert statement.
> 3. I think it will be available in one or two months. There will be a
> first release candidate soon.
> You can watch on the dev ML. I'm no
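The upsert behaviour described in point 2 above boils down to materializing a changelog where a later record replaces the earlier one with the same key. A minimal plain-Python sketch, using Flink-SQL-style change flags (+I/+U/-D) purely for illustration, not the actual Table API:

```python
# Sketch of upsert semantics (not Flink SQL itself): a changelog stream of
# (flag, key, value) records is materialized into a table where a record
# with an existing key replaces the previous row and a delete removes it.

def materialize(changelog):
    table = {}
    for flag, key, value in changelog:
        if flag in ("+I", "+U"):           # insert or update-after
            table[key] = value
        elif flag == "-D":                 # delete
            table.pop(key, None)
    return table

changelog = [("+I", "k1", 1), ("+I", "k2", 2), ("+U", "k1", 10), ("-D", "k2", None)]
assert materialize(changelog) == {"k1": 10}
```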
ime in 1.12, this will be
>>> available soon.
>>>
>>> Best,
>>> Leonard
>>>
>>>
>>
>> --
>> *Laurent Exsteens*
>> Data Engineer
>> (M) +32 (0) 486 20 48 36
>>
>> *EURA NOVA*
>> Rue Emile Francqui, 4
>> 1435 Mont-Saint-Guibert
>> (T) +32 10 75 02 00
>>
>>
>> *euranova.eu <http://euranova.eu/>*
>> *research.euranova.eu* <http://research.euranova.eu/>
>>
>> ♻ Be green, keep it on the screen
>>
>>
>>
>
> --
> *Laurent Exsteens*
> Data Engineer
> (M) +32 (0) 486 20 48 36
>
> *EURA NOVA*
>
> Rue Emile Francqui, 4
>
> 1435 Mont-Saint-Guibert
>
> (T) +32 10 75 02 00
>
> *euranova.eu <http://euranova.eu/>*
>
> *research.euranova.eu* <http://research.euranova.eu/>
>
> ♻ Be green, keep it on the screen
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
docker use cases to refer to this
>> new approach (mostly Kubernetes now).
>>
>> The first contributed version of Flink docker integration also contained
>> example and docs for the integration with Bluemix in IBM cloud. We also
>> suggest to maintain it outside of Flink r
423809/
[17] https://www.meetup.com/futureofdata-princeton/events/268830725/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - Th
sonally believe all upcoming meetups in the
regions, I usually cover, will be cancelled. So, no update on this today.
[15]
https://flink.apache.org/ecosystem/2020/02/22/apache-beam-how-beam-runs-on-top-of-flink.html
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Head of Product
+49 160 9
ics
[10]
https://www.ververica.com/blog/how-openssl-in-ververica-platform-improves-your-flink-job-performance
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forwar
the code snippet to store data into MongoDB.
>
> Thanks
> Siva
>
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stre
/22/Migrating+Flink%27s+CI+Infrastructure+from+Travis+CI+to+Azure+Pipelines
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/>
.org/news/2020/04/01/community-update.html
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conferen
tion-of-training-materials-into-Apache-Flink-tp40299.html
[24]
https://medium.com/@abdelkrim.hadjidj/event-driven-supply-chain-for-crisis-with-flinksql-be80cb3ad4f9
[25]
https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html
[26] https://flink.apache.org/2020/04/09/pyflink-udf-support-flink.html
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf
can think of Low Level joins but not sure how do we know
> if it is stale data or not based on timestamp (watermark) as it can happen
> that a particular enriched record is not updated for 6 hrs.
>
> Regards,
> Vinay Patil
>
--
Konstantin Knauf
ploy on
>> EKS?
>> >
>> > From my understanding, with flink 1.10 running it on EKS will
>> > automatically scale up and down with kubernetes integration based on
>> the
>> > load. Is this correct? Do I have to do enable some configs to support
>
.apache.org/news/2020/04/24/release-1.9.3.html
>>>
>>> The full release notes are available in Jira:
>>> https://issues.apache.org/jira/projects/FLINK/versions/12346867
>>>
>>> We would like to thank all contributors of the Apache Flink community
pache.org/news/2020/04/21/memory-management-improvements-flink-1.10.html
[16] https://flink-packages.org/packages/flink-memory-calculator
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
window join, nor sliding window join and
>> interval join.
>>
>> Best Regards
>> Lec Ssmi
>>
>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
rward-virtual-2020-recap
[9]
https://www.youtube.com/watch?v=NF0hXZfUyqE&list=PLDX4T_cnKjD0ngnBSU-bYGfgVv17MiwA7
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
p.com/futureofdata-princeton/events/269933905/
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
020.
[9]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Add-a-material-web-page-under-https-flink-apache-org-tp41298.html
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
ink-to-run-real-time-streaming-pipelines
[9] https://www.infoq.com/presentations/ml-streaming-lyft/
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
[8]
[7] https://twitter.com/FlinkForward/status/1265281578676166658
[8] https://www.flink-forward.org/global-2020
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
1008284.n3.nabble.com/ANNOUNCE-New-Apache-Flink-Committer-Xintong-Song-tp42194p42207.html
[9] https://flink.apache.org/news/2020/06/11/community-update.html
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
lobal-2020/call-for-presentations
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
link-on-zeppelin-part2.html
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
troduction-to-flink-video-series
[12] https://www.youtube.com/watch?v=ZU1r7uEAO7o
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
treaming
>> > > applications.
>> > > > >>
>> > > > >> The release is available for download at:
>> > > > >> https://flink.apache.org/downloads.html
>> > > > >>
>> > > > >> Please check out the release blog post for an overview of the
>> > > improvements for this bugfix release:
>> > > > >> https://flink.apache.org/news/2020/07/21/release-1.11.1.html
>> > > > >>
>> > > > >> The full release notes are available in Jira:
>> > > > >>
>> > >
>> >
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12348323
>> > > > >>
>> > > > >> We would like to thank all contributors of the Apache Flink
>> > community
>> > > who made this release possible!
>> > > > >>
>> > > > >> Regards,
>> > > > >> Dian
>> > > > >
>> > > >
>> > >
>> >
>>
>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
rg/news/2020/07/14/application-mode.html
[12]
https://blogs.oracle.com/javamagazine/streaming-analytics-with-java-and-apache-flink
[13] https://www.youtube.com/watch?v=HWTb5kn4LvE
[14] https://www.flink-forward.org/global-2020/training-program
Cheers,
Konstantin
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
e to use it in K8 settings, performance considerations) or
> just lack of interest/support (in that case we may offer some help)?
>
>
> thanks,
>
> maciek
>
>
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
; >> management component. We are pretty heavy users of Mesos for scheduling
> >> workloads on our edge datacenters and we do want to continue to be able
> to
> >> run some of our Flink topologies (to compute machine learning short term
> >> features) on those D
at the moment, I can prepare a branch for you to
> experiment with, in the following days.
>
> Regarding to open tracing integration, I think the community can benefit a
> lot out of this,
> and definitely contributions are welcome!
>
> @Konstantin Knauf would you like to unde
d", "Deployment
/ Kubernetes", "Deployment / Mesos", "Deployment / YARN", flink-docker,
"Release System", "Runtime / Coordination", "Runtime / Metrics", "Runtime /
Queryable State", "Runtime / REST", Travis) AND resolution = Unresolved AND
labels in (stale-assigned) AND labels in (pull-request-available)
Cheers,
Konstantin
[1] https://github.com/apache/flink-jira-bot/blob/master/config.yaml
--
Konstantin Knauf
https://twitter.com/snntrable
https://github.com/knaufk
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>>
>
--
Konstantin Knauf | Head of Product
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream P
*Nico Kruber *published the second part of his series on Flink's network
stack. This time about metrics, monitoring and backpressure. (This slipped
through last week.) [10]
[10] https://flink.apache.org/2019/07/23/flink-network-stack-2.html
Cheers,
Konstantin (@snntrable)
--
Konstan
.nabble.com/ANNOUNCE-Hequn-becomes-a-Flink-committer-tp31378.html
[13] https://www.meetup.com/seattle-flink/events/263782233
[14]
https://www.eventbrite.com/e/apache-pulsar-meetup-beijing-tickets-67849484635
[15] https://www.meetup.com/acm-sf/events/263768407/
Cheers,
Konstantin (@snntrable)
-
p/events/262680261/
[33] https://www.meetup.com/Apache-Flink-London-Meetup/events/264123672/
Cheers,
Konstantin
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forwar
k-on-k8s-operator
Cheers,
Konstantin
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream Processing | Event Driven | Real Time
nk-on-yarn-with-kerberos-authentication-adeb62ef47d2
Cheers,
Konstantin
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream P
ow * Dean Shaw* and *Max McKittrick* will talk about click
stream analysis at scale at Capital One at the next dataCouncil.ai NYC Data
Engineering meetup. [13]
[12] https://europe-2019.flink-forward.org/
[13]
https://www.meetup.com/DataCouncil-AI-NYC-Data-Engineering-Science/events/264748638/
Cheers,
Kon
98dc2
[9] https://www.dynamicyield.com/blog/turning-messy-data-into-a-gold-mine/
[10] https://www.meetup.com/Bangalore-Apache-Kafka-Group/events/265285812/
[11] https://europe-2019.flink-forward.org
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
luding at least three Flink talks by *Timo Walter*
(Ververica), *Shashank Agarwal* (Razorpay) and *Rasyid Hakim* (GoJek). [6]
[6] https://www.meetup.com/Bangalore-Apache-Kafka-Group/events/265285812/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
p/events/265285812/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream Proce
a/events/265957761/
[13] https://www.meetup.com/Bangalore-Apache-Kafka-Group/events/265285812/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forw
h talks by *Heiko Udluft & Giuseppe Sirigu*, Airbus, and *Konstantin
Knauf* (on Stateful Functions). [15]
[10]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Becket-Qin-joins-the-Flink-PMC-tp34400p34452.html
[11] https://research.euranova.eu/flink-forward-the-key-tak
of November with talks by *Gyula Fora (Cloudera)* and *Lakshmi Rao
(Lyft)*.[10]
* We will have our next Apache Flink Meetup in Munich on November 27th
with talks by *Heiko Udluft & Giuseppe Sirigu*, Airbus, and *Konstantin
Knauf* (on Stateful Functions). [11]
[6]
http://apache-flink-maili
ber. [10]
* The next edition of the Bay Area Apache Flink meetup will happen on
the 20th of November with talks by *Gyula Fora (Cloudera)* and *Lakshmi Rao
(Lyft)*.[11]
* We will have our next Apache Flink Meetup in Munich on November 27th
with talks by *Heiko Udluft & Giuseppe Siri
irigu*, Airbus, and *Konstantin
Knauf* (on Stateful Functions). [10]
* There will be an introduction to Apache Flink, use cases and best
practices at the next Uber Engineering meetup in Toronto. If you live in
Toronto, it's an excellent opportunity to get started with Flink or to meet
local Fl
] https://github.com/alibaba/Alink/blob/master/README.en-US.md
[7]
https://www.meetup.com/Chicago-Apache-Flink-Meetup-CHAF/events/266609828/
[8] https://www.meetup.com/Seoul-Apache-Flink-Meetup/events/266824815/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160
mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-Weekly-Community-Update-2019-48-td35423.html
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <
w.meetup.com/Apache-Flink-Meetup-Minsk/events/267134296/
Cheers,
Konstantin (@snntrable)
--
Konstantin Knauf | Solutions Architect
+49 160 91394525
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink