Hi, Marco

It seems what you need is a temporal join on the SQL side: you can define two
Flink tables for your PostgreSQL ones and join your Kafka stream with them
[1][3]. Flink 1.10 also supports this; there are some differences in the DDL
compared to 1.11 [2].

[1]
https://ci.apache.org/pro
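A minimal, hypothetical sketch of such a temporal join in Flink 1.11 SQL (the table names, columns, and connection settings below are invented for illustration, not from this thread; in 1.10 the `WITH` options use the older `connector.type`-style keys instead):

```sql
-- Hypothetical dimension table backed by PostgreSQL (Flink 1.11 JDBC connector).
CREATE TABLE dim_one (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/mydb',
  'table-name' = 'dim_one'
);

-- Kafka-backed stream with a processing-time attribute for the lookup.
CREATE TABLE events (
  id BIGINT,
  payload STRING,
  proc_time AS PROCTIME()
) WITH (
  'connector' = 'kafka',
  'topic' = 'topic-one',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Each event is enriched with the PostgreSQL row as of its processing time.
SELECT e.id, e.payload, d.name
FROM events AS e
JOIN dim_one FOR SYSTEM_TIME AS OF e.proc_time AS d
  ON e.id = d.id;
```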
> someone who has already used the API might shed some more light for us
> here.
>
> Best regards
> Theo
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html
From: "Marco Villalobos"
To: "Theo Diefenthal"
CC: "user"
Sent: Thursday, 6 August 2020 23:47:13
Subject: Re: Two Queries and a Kafka Topic
I am trying to use the State Processor API. Does that require HDFS or a
filesystem?
I wish there was a complete example that ties in both …
> … map steps.
>
> I think option 3 is the easiest to implement, while option 1 might be the
> most elegant way, in my opinion.
>
> Best regards
> Theo

From: "Marco Villalobos"
To: "Leonard Xu"
CC: "user"
Sent: Wednesday, 5 August 2020 04:33:23
Subject: Re: Two Queries and a Kafka Topic

Hi Leonard,

First, thank you.

I am currently trying to restrict my solution to Apache Flink 1.10 because it
is the current version supported by Amazon EMR; I am not ready to change our
operational environment to solve this.

Second, I am using the DataStream API. The Kafka topic is not in a table …
Hi, Marco

> If I need SQL Query One and SQL Query Two to happen just one time,

It looks like you want to reuse this Kafka table in one job. Executing multiple
queries in one SQL job is supported in Flink 1.11: you can use `StatementSet`
[1] to add SQL Query One and SQL Query Two to a single SQL job.
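For illustration, a hedged sketch of how `StatementSet` is used in the Flink 1.11 Table API (the table names and queries are placeholders, not from this thread; the DDL bodies are elided):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TwoQueriesOneJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Register the shared Kafka source and two sinks
        // (connector definitions would go in the WITH clauses).
        tEnv.executeSql("CREATE TABLE kafka_source ( /* ... */ ) WITH ( /* ... */ )");
        tEnv.executeSql("CREATE TABLE sink_one ( /* ... */ ) WITH ( /* ... */ )");
        tEnv.executeSql("CREATE TABLE sink_two ( /* ... */ ) WITH ( /* ... */ )");

        // Both INSERTs are optimized and submitted together as one job,
        // so the Kafka source is read only once.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_one SELECT * FROM kafka_source /* query one */");
        set.addInsertSql("INSERT INTO sink_two SELECT * FROM kafka_source /* query two */");
        set.execute();
    }
}
```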
Let's say that I have:

SQL Query One from data in PostgreSQL (200K records).
SQL Query Two from data in PostgreSQL (1000 records).
and Kafka Topic One.

Let's also say that the main data for this Flink job arrives in Kafka Topic
One.

If I need SQL Query One and SQL Query Two to happen just one time, …