How exactly does the idle-state retention policy work?

2019-09-17 Thread srikanth flink
Hi there, I'm using Flink SQL to do the job for me. Based on this , I configured the idle-state retention in milliseconds. *Context*: Flink SQL reads a Kafka stream with no key and puts it into a dynamic table
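For reference, in the Flink 1.9 SQL Client the idle-state retention can be set in the environment file. A minimal sketch (the exact key names are an assumption for this version; check the docs for your release):

```yaml
# SQL Client environment file sketch; values are in milliseconds.
execution:
  type: streaming
  # State for a key that has seen no updates is kept at least `min`
  # and at most `max` before it is dropped.
  min-idle-state-retention: 3600000   # 1 hour
  max-idle-state-retention: 7200000   # 2 hours
```

Note that retention is approximate: state is cleaned up somewhere between the min and max bounds, not at an exact instant.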

Approach to match join streams to create unique streams.

2019-09-23 Thread srikanth flink
Hi there, I've two streams sourced from Kafka. Stream1 is continuous data and stream2 is a periodic update. Stream2 contains only one column. *Use case*: every entry from stream1 should be checked against stream2 for a match. The matched and unmatched records should be separated into new unique streams
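One way to sketch the split in SQL (table and column names here are hypothetical, and whether the anti-join side is supported on unbounded streams depends on the Flink version):

```sql
-- Hypothetical tables: stream1(ip, payload), stream2(ip).

-- Matched records:
SELECT s1.*
FROM stream1 AS s1
JOIN stream2 AS s2 ON s1.ip = s2.ip;

-- Unmatched records via a left join + null filter; this anti-join
-- pattern may not be supported on streams in older Flink releases.
SELECT s1.*
FROM stream1 AS s1
LEFT JOIN stream2 AS s2 ON s1.ip = s2.ip
WHERE s2.ip IS NULL;
```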

Can I cross talk between environments

2019-09-23 Thread srikanth flink
Hi, I'm using Java code to source from Kafka, convert the stream to a table and register the table. I understand that I have started the StreamExecutionEnvironment and its execution. Is there a way I could access the registered table/temporal function from the SQL client? Thanks Srikanth

Re: Approach to match join streams to create unique streams.

2019-09-24 Thread srikanth flink
t is not on the > immediate roadmap AFAIK. > > If you need the inverse, I'd recommend to implement the logic in a > DataStream program with a KeyedCoProcessFunction. > > Best, Fabian > > Am Mo., 23. Sept. 2019 um 13:04 Uhr schrieb srikanth flink < > flink.d...@gmail.com>

How do I create a temporal function using the Flink SQL Client?

2019-09-24 Thread srikanth flink
Hi, I'm running time-based joins of a dynamic table over a temporal function. Is there a way I could create a temporal table using Flink SQL? I'm using v1.9. Thanks Srikanth
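In Flink 1.9 a temporal table can also be declared in the SQL Client environment file rather than in SQL itself. A sketch, with all table/column names hypothetical:

```yaml
# SQL Client environment file sketch (Flink 1.9 era syntax).
tables:
  - name: RatesHistory          # backing append-only history table
    type: source-table
    update-mode: append
    # connector/format definitions omitted for brevity

  - name: Rates                 # temporal table view over the history
    type: temporal-table
    history-table: RatesHistory
    primary-key: r_currency     # key the versioning is tracked by
    time-attribute: r_proctime  # proctime or rowtime attribute
```

The resulting `Rates` function can then be used in a temporal join from the SQL Client without any Java code.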

Explain time-based windows

2019-09-24 Thread srikanth flink
Hi, I'm trying to join a dynamic table and a static (periodically batch-updated) table using: SELECT K.* FROM KafkaSource AS K, BadIP AS B WHERE K.sourceip = B.bad_ip AND B.b_proctime BETWEEN K.k_proctime - INTERVAL '65' MINUTE AND K.k_proctime + INTERVAL '5' MINUTE. Note, KafkaSource is a dynamic table,
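Formatted, the quoted interval (time-windowed) join reads:

```sql
-- Interval join: each KafkaSource row matches BadIP rows whose
-- processing time falls inside a window around the row's own proctime.
SELECT K.*
FROM KafkaSource AS K, BadIP AS B
WHERE K.sourceip = B.bad_ip
  AND B.b_proctime BETWEEN K.k_proctime - INTERVAL '65' MINUTE
                       AND K.k_proctime + INTERVAL '5' MINUTE;
```

The bounded time predicate is what lets Flink eventually clear state for this join instead of keeping both tables forever.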

Flink SQL update-mode set to retract in env file.

2019-09-25 Thread srikanth flink
How could I configure the environment file for Flink SQL with update-mode: retract? I have this for append: properties: - key: zookeeper.connect value: localhost:2181 - key: bootstrap.servers value: localhost:9092 - key: group.id value: reconMultiAttem
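In the SQL Client environment file, `update-mode` is a per-table property. A sketch (table name hypothetical):

```yaml
# Environment file fragment; update-mode is set on the table definition.
tables:
  - name: sinkTable
    type: sink-table
    update-mode: retract   # the connector must support retraction;
                           # the Kafka table sink in 1.9 is append-only
```

Whether `retract` actually works depends on the connector: it requires a sink that implements a retract-capable table sink, so an append-only connector such as Kafka cannot be switched to retract mode by configuration alone.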

Re: Flink SQL update-mode set to retract in env file.

2019-09-26 Thread srikanth flink
nto one kind of > RetractStreamTableSink. > Hope it helps you ~ > > > > Best, > Terry Wang > > > > 在 2019年9月26日,下午2:50,srikanth flink 写道: > > How could I configure environment file for Flink SQL, update-mode: retract? > > I have this for append:

Re: Flink SQL update-mode set to retract in env file.

2019-09-26 Thread srikanth flink
h tableApi, > And the following code `outStreamAgg.addSink(…)` is just a normal stream > write to a FlinkKafka sink function. > Your program code is a mixture of table api and dataStream programming not > just single Table API. > > Best, > Terry Wang > > > > 在 2019年9月26日,下午

Reading Key-Value from Kafka | Eviction policy.

2019-09-26 Thread srikanth flink
Hi, My data source is Kafka; all these days I have been reading the values from the Kafka stream into a table. The table just grows and runs into a heap issue. Came across the eviction policy that works on only keys, right? Have researched how to configure the environment file (Flink SQL) to read both key an

Re: Reading Key-Value from Kafka | Eviction policy.

2019-09-26 Thread srikanth flink
, 2019 at 11:11 AM miki haiat wrote: > I'm sure there are several ways to implement it. Can you elaborate more on > your use case ? > > On Fri, Sep 27, 2019, 08:37 srikanth flink wrote: > >> Hi, >> >> My data source is Kafka, all these days have been reading the

Querying nested JSON stream?

2019-10-17 Thread srikanth flink
Hi there, I'm using the Flink SQL client to run the jobs for me. My stream is JSON with nested objects. Couldn't find much documentation on querying nested JSON, so I had to flatten the JSON and use it as: SELECT `source.ip`, `destination.ip`, `dns.query`, `organization.id`, `dns.answers.data` FROM sour
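A detail worth noting: `` `source.ip` `` refers to a single flat column literally named "source.ip", while nested access quotes each path component separately. A sketch, assuming the table schema declares the columns as ROW types:

```sql
-- Assuming e.g. `source` is declared ROW<`ip` STRING, ...> in the schema,
-- nested fields are addressed component by component:
SELECT `source`.`ip`, `destination`.`ip`, `dns`.`query`
FROM sourceKafka;
```

With that schema in place, the JSON does not need to be flattened before querying.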

Can a Flink query outputs nested json?

2019-10-24 Thread srikanth flink
I'm working with the Flink SQL client. Input data is in JSON format and contains nested JSON. I'm trying to query the nested JSON from the table and expecting the output to be nested JSON instead of a string. I've built the environment file to define a table schema as: > format: > type: json >
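To keep the output nested, the table schema has to preserve the nesting with ROW types rather than flattening to STRING. A sketch of an environment-file fragment (table and field names hypothetical):

```yaml
# Environment file fragment; a ROW-typed column keeps the structure,
# so a JSON sink can emit a nested object instead of a string.
tables:
  - name: sourceKafka
    type: source-table
    format:
      type: json
      derive-schema: true
    schema:
      - name: source
        type: ROW<ip VARCHAR, isInternalIP BOOLEAN>
```

The exact schema syntax varies between Flink releases, so treat the type strings above as an assumption to verify against the 1.9 docs.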

Add custom fields into Json

2019-10-28 Thread srikanth flink
Hi there, I'm querying JSON data and it is working fine. I would like to add custom fields alongside the query result. My query looks like: select ROW(`source`), ROW(`destination`), ROW(`dns`), organization, cnt from (select (source.`ip`,source.`isInternalIP`) as source, (destination.`ip`,destinatio
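Constant custom fields can be added as plain literals in the select list next to the ROW-valued columns. A hedged sketch with hypothetical names:

```sql
-- Adding a literal field alongside the queried/nested columns:
SELECT
  ROW(`source`.`ip`, `source`.`isInternalIP`) AS source,
  'threat-intel' AS category,   -- custom constant field
  organization,
  cnt
FROM someTable;
```

The literal is emitted on every row, so it rides along with the query result without changing the aggregation.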

Using RocksDB as lookup source in Flink

2019-11-04 Thread srikanth flink
Hi there, Can someone help me implement Flink source Kafka to Flink sink RocksDB, so that I could use a UDF to look up RocksDB in SQL queries? Context: I get a list of IP addresses in a stream which I wish to store in RocksDB, so that the other stream can perform a lookup to match the IP address. Thank

What is the slot vs cpu ratio?

2019-11-06 Thread srikanth flink
Hi there, I've a 3-node cluster with 16 cores each. How many slots could I utilize at max and how do I do the calculation? Thanks Srikanth
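A common starting point (a rule of thumb, not a hard limit) is one slot per CPU core, configured per TaskManager:

```yaml
# flink-conf.yaml on each of the 3 nodes; one slot per core gives
# 3 nodes x 16 slots = 48 slots cluster-wide as a starting point.
taskmanager.numberOfTaskSlots: 16
```

Slots only isolate memory, not CPU, so more slots than cores is possible for IO-heavy jobs; profiling decides the right ratio.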

Job Distribution Strategy On Cluster.

2019-11-06 Thread srikanth flink
Hi there, I'm running Flink with a 3-node cluster. While running my jobs (both SQL client and jar submission), the jobs are being assigned to a single machine instead of being distributed across the cluster. How could I achieve job distribution to make use of the computation power? Thanks Srikanth
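One sketch of a fix, assuming the jobs run with the default parallelism of 1: raise the parallelism past the slot count of a single TaskManager so the scheduler must use slots on other nodes.

```yaml
# flink-conf.yaml sketch: with e.g. 16 slots per TaskManager, a default
# parallelism above 16 cannot fit on one node and spills to the others.
parallelism.default: 48
```

Exact placement still depends on the scheduler and Flink version; per-job `-p` on submission achieves the same without a cluster-wide default.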

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
parallelism is 1, it would just use a single slot. > > Best, > Vino > > srikanth flink 于2019年11月6日周三 下午10:03写道: > >> Hi there, >> >> I'm running Flink with 3 node cluster. >> While running my jobs(both SQL client and jar submission), the jobs are >&g

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
me scenes, many operators are chained with each other. if it's >> parallelism is 1, it would just use a single slot. >> >> Best, >> Vino >> >> srikanth flink 于2019年11月6日周三 下午10:03写道: >> >>> Hi there, >>> >>> I'm runnin

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
Great, thanks for the update. On Tue, Nov 12, 2019 at 8:51 AM Zhu Zhu wrote: > There is no plan for release 1.9.2 yet. > Flink 1.10.0 is planned to be released in early January. > > Thanks, > Zhu Zhu > > srikanth flink 于2019年11月11日周一 下午9:53写道: > >> Zhu Zhu, >&g

Table/SQL API to read and parse JSON, Java.

2019-12-01 Thread srikanth flink
Hi there, I'm following the link to read JSON data from Kafka and convert it to a table, programmatically. I had tried and succeeded declaratively using the SQL client. My JSON data is nested like: {a:1,b:2,c:{x:1,y:2}}. Code: >

Row arity of from does not match serializers.

2019-12-05 Thread srikanth flink
My Flink job reads from a Kafka stream and does some processing. Code snippet: > DataStream flatternedDnsStream = filteredNonNullStream.rebalance() > .map(node -> { > JsonNode source = node.path("source"); > JsonNode destination = node.path("destination"); > JsonNode dns = node.path("dns");