Hi Miki,

I haven’t tried mixing AsyncFunctions with SQL queries.

Normally I’d create a regular DataStream workflow that first reads from Kafka, then uses an AsyncFunction to read from the SQL database.
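
For example, here's roughly what that could look like (a minimal sketch against the Flink 1.4 async I/O API from the docs link further down; the Event, EnrichedEvent, and Metadata POJOs, the JDBC URL, and the query are all placeholders I made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;
import org.apache.flink.streaming.api.functions.async.collector.AsyncCollector;

public class SqlEnrichmentFunction extends RichAsyncFunction<Event, EnrichedEvent> {

    private transient Connection connection;
    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = DriverManager.getConnection("jdbc:sqlserver://dbhost;databaseName=metadata");
        // JDBC is blocking, so queries run on our own thread. A single thread keeps
        // the shared Connection safe; a real version would use a connection pool.
        executor = Executors.newSingleThreadExecutor();
    }

    @Override
    public void asyncInvoke(Event event, AsyncCollector<EnrichedEvent> collector) {
        // Hand the blocking lookup off so the operator thread stays non-blocking.
        CompletableFuture
            .supplyAsync(() -> lookupMetadata(event.getKey()), executor)
            .thenAccept(md -> collector.collect(
                Collections.singleton(new EnrichedEvent(event, md))));
    }

    private Metadata lookupMetadata(String key) {
        // Plain JDBC lookup; error handling omitted for brevity.
        try (PreparedStatement stmt = connection.prepareStatement(
                "SELECT name, value FROM metadata WHERE id = ?")) {
            stmt.setString(1, key);
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                return new Metadata(rs.getString("name"), rs.getString("value"));
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void close() throws Exception {
        executor.shutdown();
        connection.close();
    }
}

Wired in with something like:

// env is the StreamExecutionEnvironment, kafkaConsumer your FlinkKafkaConsumer
DataStream<Event> events = env.addSource(kafkaConsumer);
DataStream<EnrichedEvent> enriched = AsyncDataStream.unorderedWait(
    events, new SqlEnrichmentFunction(), 1000, TimeUnit.MILLISECONDS, 100);

(Note that in later Flink releases AsyncCollector was renamed to ResultFuture, with complete() instead of collect().)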

If there are often duplicate keys in the Kafka-based stream, you could 
keyBy(key) before the AsyncFunction, and then cache the result of the SQL query.
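
With the keyed stream, each parallel instance of the AsyncFunction only sees its own slice of the keys, so even a plain in-memory map inside the function works as a cache. A sketch of that variant, say a CachingSqlEnrichmentFunction (same placeholders as above; as far as I know an AsyncFunction can't access Flink's keyed state, so this is just a transient map, and it's a ConcurrentHashMap because the completion callback runs on the executor's thread):

    private transient ConcurrentHashMap<String, Metadata> cache;

    @Override
    public void open(Configuration parameters) throws Exception {
        cache = new ConcurrentHashMap<>();
        // ...connection and executor set up as in the sketch above
    }

    @Override
    public void asyncInvoke(Event event, AsyncCollector<EnrichedEvent> collector) {
        Metadata cached = cache.get(event.getKey());
        if (cached != null) {
            // Cache hit: complete immediately, no SQL query.
            collector.collect(Collections.singleton(new EnrichedEvent(event, cached)));
            return;
        }
        CompletableFuture
            .supplyAsync(() -> lookupMetadata(event.getKey()), executor)
            .thenAccept(md -> {
                cache.put(event.getKey(), md);
                collector.collect(Collections.singleton(new EnrichedEvent(event, md)));
            });
    }

keyed up front like this (assuming Event is a POJO with a "key" field):

    DataStream<EnrichedEvent> enriched = AsyncDataStream.unorderedWait(
        events.keyBy("key"), new CachingSqlEnrichmentFunction(),
        1000, TimeUnit.MILLISECONDS, 100);

If staleness is a concern (your "fresh" question below), swapping the map for an expiring cache (e.g. Guava's CacheBuilder with expireAfterWrite) bounds how old a cached row can get.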

— Ken

> On Apr 16, 2018, at 11:19 AM, miki haiat <miko5...@gmail.com> wrote:
> 
> Hi, thanks for the reply. I will try to break your reply down into the flow 
> execution order.
> 
> The first data stream will use async I/O and select from the table, 
> the second stream will be Kafka, and then I can join the streams and map the result?
> 
> If that's the case, then I will select the table only once, on load?
> How can I make sure that my stream table is "fresh"?
> 
> I'm thinking to myself: is there a way to use the Flink state backend (RocksDB) 
> to create a read/write-through mechanism?
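> 
> Maybe something like this? (Just a sketch of the idea: keyBy() plus a 
> RichFlatMapFunction whose ValueState acts as the read-through cache. With the 
> RocksDB state backend that state lives on disk and survives checkpoints. The 
> lookupMetadata() call is the same hypothetical JDBC lookup as in the sketch 
> above, and here it blocks the pipeline on a cache miss.)
> 
> import org.apache.flink.api.common.functions.RichFlatMapFunction;
> import org.apache.flink.api.common.state.ValueState;
> import org.apache.flink.api.common.state.ValueStateDescriptor;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.util.Collector;
> 
> public class CachedLookup extends RichFlatMapFunction<Event, EnrichedEvent> {
> 
>     private transient ValueState<Metadata> cached;
> 
>     @Override
>     public void open(Configuration parameters) throws Exception {
>         // Keyed state: one Metadata entry per key, managed by the state backend.
>         cached = getRuntimeContext().getState(
>             new ValueStateDescriptor<>("metadata", Metadata.class));
>     }
> 
>     @Override
>     public void flatMap(Event event, Collector<EnrichedEvent> out) throws Exception {
>         Metadata md = cached.value();
>         if (md == null) {
>             // Read-through: fetch once per key, then serve from state.
>             md = lookupMetadata(event.getKey());
>             cached.update(md);
>         }
>         out.collect(new EnrichedEvent(event, md));
>     }
> }
> 
> used as events.keyBy("key").flatMap(new CachedLookup()), but then I'm still not 
> sure how the state would ever get refreshed.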
> 
> Thanks 
> 
> miki
> 
> 
> 
> On Mon, Apr 16, 2018 at 2:45 AM, Ken Krugler <kkrugler_li...@transpac.com> wrote:
> If the SQL data is all (or mostly all) needed to join against the data from 
> Kafka, then I might try a regular join.
> 
> Otherwise it sounds like you want to use an AsyncFunction to do ad hoc 
> queries (in parallel) against your SQL DB.
> 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/asyncio.html
> 
> — Ken
> 
> 
>> On Apr 15, 2018, at 12:15 PM, miki haiat <miko5...@gmail.com> wrote:
>> 
>> Hi,
>> 
>> I have a case of metadata enrichment, and I'm wondering if my approach is the 
>> correct way:
>> 
>> - input stream from Kafka
>> - metadata (MD) in MS SQL
>> - map to a new POJO
>> 
>> I need to extract a key from the Kafka stream and use it to select some 
>> values from the SQL table.
>> 
>> So I thought to use the Table/SQL API to select the MD table, 
>> then convert the Kafka stream to a table and join the data by the stream key.
>> 
>> At the end I need to map the joined data to a new POJO and send it to 
>> Elasticsearch.
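>> 
>> For that last step, maybe something like this (a rough sketch using Flink's 
>> Elasticsearch 5.x connector on the joined/enriched stream; the cluster, host, 
>> index, and type names and the EnrichedEvent getters are placeholders):
>> 
>> import java.net.InetSocketAddress;
>> import java.util.*;
>> import org.apache.flink.api.common.functions.RuntimeContext;
>> import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
>> import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
>> import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
>> import org.elasticsearch.client.Requests;
>> 
>> List<InetSocketAddress> transports = Collections.singletonList(
>>     new InetSocketAddress("es-host", 9300));
>> 
>> Map<String, String> config = new HashMap<>();
>> config.put("cluster.name", "my-cluster");
>> 
>> enrichedStream.addSink(new ElasticsearchSink<>(config, transports,
>>     new ElasticsearchSinkFunction<EnrichedEvent>() {
>>         @Override
>>         public void process(EnrichedEvent element, RuntimeContext ctx, RequestIndexer indexer) {
>>             // Build the document from the joined POJO and queue an index request.
>>             Map<String, Object> json = new HashMap<>();
>>             json.put("key", element.getKey());
>>             json.put("metadata", element.getMetadata());
>>             indexer.add(Requests.indexRequest()
>>                 .index("enriched-events")
>>                 .type("event")
>>                 .source(json));
>>         }
>>     }));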
>> 
>> Any suggestions or different ways to solve this use case?
>> 
>> thanks,
>> Miki  
>> 
>> 
>> 
> 
> --------------------------
> Ken Krugler
> http://www.scaleunlimited.com
> custom big data solutions & training
> Hadoop, Cascading, Cassandra & Solr
> 
> 

--------------------------------------------
http://about.me/kkrugler
+1 530-210-6378
