Hi,
I am evaluating delta-flink 3.2.0 and trying to write a
DataStream[My_Custom_Class] with
Flink 1.16.3. It would be of great help if someone could share
some examples in either the Java or Scala APIs.
Thanks and Regards,
Sneh
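For what it's worth, the delta-flink DeltaSink consumes RowData rather than arbitrary POJOs, so one common approach is to map the custom class into RowData before sinking. Below is a minimal Java sketch under that assumption; the class name MyCustomClass, its schema (id INT, name STRING), and the table path are invented for illustration and would need to match your actual type:

```java
import java.util.Arrays;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

import io.delta.flink.sink.DeltaSink;

public class DeltaSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing is required: the Delta sink commits files on checkpoint.
        env.enableCheckpointing(30_000);

        // Assumed schema for the custom class: (id INT, name STRING).
        RowType rowType = new RowType(Arrays.asList(
                new RowType.RowField("id", new IntType()),
                new RowType.RowField("name", new VarCharType(VarCharType.MAX_LENGTH))));

        DataStream<MyCustomClass> source = env.fromElements(
                new MyCustomClass(1, "a"), new MyCustomClass(2, "b"));

        // The sink takes RowData, so convert the POJO into GenericRowData first.
        DataStream<RowData> rows = source
                .map(v -> {
                    GenericRowData row = new GenericRowData(2);
                    row.setField(0, v.id);
                    row.setField(1, StringData.fromString(v.name));
                    return (RowData) row;
                })
                .returns(TypeInformation.of(RowData.class));

        rows.sinkTo(DeltaSink.forRowData(
                new Path("s3a://my-bucket/delta/my_table"), // hypothetical table path
                new Configuration(),
                rowType).build());

        env.execute("write-to-delta");
    }

    public static class MyCustomClass {
        public int id;
        public String name;
        public MyCustomClass() {}
        public MyCustomClass(int id, String name) { this.id = id; this.name = name; }
    }
}
```

This needs the delta-flink, flink-table-runtime, and hadoop-client dependencies on the classpath; treat it as a starting point rather than a complete program.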
Jeff Zhang created FLINK-16969:
--
Summary: Unable to use case class in flink scala shell
Key: FLINK-16969
URL: https://issues.apache.org/jira/browse/FLINK-16969
Project: Flink
Issue Type: Bug
shijinkui created FLINK-5839:
Summary: Flink Security in Huawei's use case
Key: FLINK-5839
URL: https://issues.apache.org/jira/browse/FLINK-5839
Project: Flink
Issue Type: Improvement
Hi Kevin,
I don't know what your entire program is doing, but wouldn't a
FlatMapFunction containing state with your biggest value be sufficient?
Your stream goes through your FlatMapFunction and compares each element
with the last saved biggest value. You can then emit something if the value has increased.
I
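The suggestion above can be sketched outside Flink as a plain stateful function; in an actual Flink job the `biggest` field would live in a `ValueState<Long>` inside a `RichFlatMapFunction` so it is checkpointed. The class name and method are mine, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.OptionalLong;

/** Keeps the biggest value seen so far and emits only when it increases. */
public class MaxTracker {
    private Long biggest; // in Flink this would be a ValueState<Long>

    /** Returns the new maximum when the input exceeds it, otherwise empty. */
    public OptionalLong offer(long value) {
        if (biggest == null || value > biggest) {
            biggest = value;
            return OptionalLong.of(value);
        }
        return OptionalLong.empty();
    }

    public static void main(String[] args) {
        MaxTracker tracker = new MaxTracker();
        List<Long> emitted = new ArrayList<>();
        for (long v : new long[] {3, 1, 4, 1, 5, 9, 2, 6}) {
            tracker.offer(v).ifPresent(emitted::add);
        }
        System.out.println(emitted); // new maxima only: [3, 4, 5, 9]
    }
}
```

This avoids the iteration operator entirely, so checkpointing works normally.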
Hi all,
I am trying to keep track of the biggest value in a stream. I do this by
using the iterative step mechanism of Apache Flink. However, I get an
exception that checkpointing is not supported for iterative jobs. Why
can't this be enabled? My iterative stream is also quite small: only one
> I work as an engineer at Symantec. My team works on a Multi-tenant Event
> Processing System. Just as high-level background: our customers write data to
> Kafka brokers through agents like Logstash, and we process the events and save
> the log data in Elasticsearch and S3.
>
> Use Case: We have a use case wh
Hope this helps.
Let us know what you think,
Kostas
> On Jul 26, 2016, at 11:51 AM, Maximilian Michels wrote:
>
> Hi Suma Cherukuri,
>
> Apache Flink can certainly serve your use case very well. Here's why:
>
> 1) Apache Flink has connectors for Kafka and El
Hi Suma Cherukuri,
Apache Flink can certainly serve your use case very well. Here's why:
1) Apache Flink has connectors for Kafka and Elasticsearch. It
supports reading from and writing to the S3 file system.
2) Apache Flink includes a RollingSink which splits up data into files
w
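The splitting idea can be illustrated outside Flink with a hypothetical sketch (the class name, file naming, and size-only threshold are mine, not the RollingSink API): records go into the current part file until a byte threshold is crossed, then the file is closed and a new part is started:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/** Hypothetical sketch of rolling output: one part file per size threshold. */
public class RollingWriter {
    private final Path dir;
    private final long maxBytes;
    private int partIndex = 0;
    private long currentBytes = 0;
    private Writer current;

    public RollingWriter(Path dir, long maxBytes) {
        this.dir = dir;
        this.maxBytes = maxBytes;
    }

    public void write(String record) throws IOException {
        byte[] bytes = (record + "\n").getBytes(StandardCharsets.UTF_8);
        if (current == null || currentBytes + bytes.length > maxBytes) {
            roll(); // threshold crossed: close the part and start a new one
        }
        current.write(record);
        current.write("\n");
        currentBytes += bytes.length;
    }

    private void roll() throws IOException {
        if (current != null) {
            current.close(); // a real sink would now ship the finished part to S3
        }
        current = Files.newBufferedWriter(dir.resolve("part-" + partIndex++));
        currentBytes = 0;
    }

    public void close() throws IOException {
        if (current != null) current.close();
    }
}
```

A production sink would additionally roll on a time threshold and only make a part visible once it is finalized, which is the behavior the threshold discussion below is getting at.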
From the use-case description, it seems like you are looking to aggregate files
based on either a threshold size or a threshold time and ship them to S3.
Correct?
Flink might be overkill here, and you could look at frameworks like Apache
NiFi that have pre-built (and configurable) processors to
Hi,
Good Afternoon!
I work as an engineer at Symantec. My team works on a Multi-tenant Event
Processing System. Just as high-level background: our customers write data to
Kafka brokers through agents like Logstash, and we process the events and save
the log data in Elasticsearch and S3.
Use Case