Re: Flink Operator in Golang?

2022-11-17 Thread kant kodali
Golang doesn't seem to have anything similar to Flink or Spark. On Thu, Nov 17, 2022 at 8:11 PM Mark Lee wrote: > I got it, thanks Zhanghao! > > > > *From:* user-return-51640-lifuqiong00=126@flink.apache.org *on behalf of* zhanghao.c...@outlook.com *Sent:* Nov 17, 2022 23:36 *To:* Mark Lee ;

Re: what is the difference between map vs process on a datastream?

2020-03-17 Thread kant kodali
…similar to map, but with a Collector that can be used to > emit zero, one, or many events in response to each event, just like a > process function. > > David > > > On Tue, Mar 17, 2020 at 11:50 AM kant kodali wrote: > >> what is the difference between map vs process on a datastream? they look >> very similar. >> >> Thanks! >> >>

Re: Apache Airflow - Question about checkpointing and re-run a job

2020-03-17 Thread kant kodali
Does Airflow have a Flink Operator? I am not seeing it. Can you please point me to it? On Mon, Nov 18, 2019 at 3:10 AM M Singh wrote: > Thanks Congxian for your answer and reference. Mans > > On Sunday, November 17, 2019, 08:59:16 PM EST, Congxian Qiu < > qcx978132...@gmail.com> wrote: > > > Hi > Yes,

what is the difference between map vs process on a datastream?

2020-03-17 Thread kant kodali
what is the difference between map vs process on a datastream? they look very similar. Thanks!
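
For reference, a minimal sketch of the difference, assuming a simple String stream (class and variable names here are illustrative, not from the thread): map() emits exactly one record per input, while process() receives a Collector and a Context, so it can emit zero, one, or many records and can access timestamps (and, on keyed streams, timers and state).

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    public class MapVsProcess {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> input = env.fromElements("a,b,c", "d");

            // map(): exactly one output record per input record, nothing else.
            DataStream<Integer> lengths = input.map(new MapFunction<String, Integer>() {
                @Override
                public Integer map(String value) {
                    return value.length();
                }
            });

            // process(): the Collector allows zero, one, or many outputs per input,
            // and ctx exposes the element timestamp (and timers on keyed streams).
            DataStream<String> tokens = input.process(new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    for (String token : value.split(",")) {
                        out.collect(token); // many outputs from one input
                    }
                }
            });

            lengths.print();
            tokens.print();
            env.execute("map vs process sketch");
        }
    }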

a question on window trigger and delta output

2020-03-15 Thread kant kodali
Hi All, I set up a transformation like this; my events in the stream have sequential timestamps like 1, 2, 3, and I set the time characteristic to event time. myStream .keyBy(0) .timeWindow(Time.of(1000, TimeUnit.MILLISECONDS)) .aggregate(new myAggregateFunction()) .print(
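
Since the thread turns on why the window never fires, here is a minimal event-time sketch under the assumptions of the question (Tuple2 events whose f1 field carries the sequential timestamp; the aggregate is replaced by a trivial count, since myAggregateFunction is not shown): without the assignTimestampsAndWatermarks step, no watermark ever advances and the window never triggers.

    import org.apache.flink.api.common.functions.AggregateFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowTriggerSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.fromElements(Tuple2.of("k", 1L), Tuple2.of("k", 500L), Tuple2.of("k", 1200L))
                // the step the thread says is missing: derive timestamps and watermarks
                .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                    @Override
                    public long extractAscendingTimestamp(Tuple2<String, Long> event) {
                        return event.f1; // the sequential event timestamps
                    }
                })
                .keyBy(0)
                .timeWindow(Time.milliseconds(1000))
                .aggregate(new AggregateFunction<Tuple2<String, Long>, Long, Long>() {
                    public Long createAccumulator() { return 0L; }
                    public Long add(Tuple2<String, Long> value, Long acc) { return acc + 1; }
                    public Long getResult(Long acc) { return acc; }
                    public Long merge(Long a, Long b) { return a + b; }
                })
                .print(); // a window fires once the watermark passes its end time

            env.execute("window trigger sketch");
        }
    }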

Re: Do I need to set assignTimestampsAndWatermarks if I set my time characteristic to IngestionTime?

2020-03-10 Thread kant kodali
…automatic timestamp assignment and automatic watermark generation. >>> >> >> So it's neither possible to assign timestamps nor watermarks, but it seems >> as if the default behavior is exactly as you want it to be. If that doesn't >> work for you, could you please rephrase your last question?

Do I need to set assignTimestampsAndWatermarks if I set my time characteristic to IngestionTime?

2020-03-09 Thread kant kodali
Hi All, Do I need to set assignTimestampsAndWatermarks if I set my time characteristic to IngestionTime? Say I set the time characteristic of my stream execution environment to ingestion time as follows: streamExecutionEnvironment.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime); do I need to set assignTimestampsAndWatermarks?
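
For what it's worth, a minimal sketch of the ingestion-time setup the thread is about (the window/reduce logic is purely illustrative): with TimeCharacteristic.IngestionTime, sources stamp each record with their wall clock on entry and watermarks are generated automatically, so no assignTimestampsAndWatermarks call is needed.

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class IngestionTimeSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // records are stamped with the source's wall clock on entry and
            // watermarks are emitted automatically -- no assigner required
            env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);

            env.fromElements("a", "b", "a")
                .keyBy(value -> value)
                .timeWindow(Time.seconds(5)) // works without an explicit assigner
                .reduce((left, right) -> left + right)
                .print();

            env.execute("ingestion-time sketch");
        }
    }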

How do custom objects/data structures get mapped into RocksDB underneath?

2020-03-09 Thread kant kodali
Hi All, I want to do stateful streaming and I was wondering how custom objects get mapped into RocksDB? Say I have the following class that represents my state: public class MyState { private HashMap map1; // T can be any type private HashMap map2; // S can be any type } I wonder how th
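
A rough sketch of how such a class is typically held in keyed state (names are illustrative, and the archive appears to have stripped the generic parameters from the HashMap fields). With the RocksDB backend, each ValueState entry is stored as bytes: the composite key (key group / serialized key / namespace) maps to the value object serialized with Flink's TypeSerializer for the declared type, falling back to Kryo for types it cannot analyze. Every value() is therefore a full deserialize, and every update() a full serialize-and-put.

    import java.util.HashMap;
    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // stand-in for the MyState class from the question
    class MyState {
        public HashMap<String, String> map1 = new HashMap<>();
        public HashMap<String, Long> map2 = new HashMap<>();
    }

    class MyStatefulFunction extends RichFlatMapFunction<String, String> {
        private transient ValueState<MyState> state;

        @Override
        public void open(Configuration parameters) {
            state = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("my-state", MyState.class));
        }

        @Override
        public void flatMap(String value, Collector<String> out) throws Exception {
            MyState current = state.value();   // full deserialize from RocksDB bytes
            if (current == null) {
                current = new MyState();
            }
            current.map1.put(value, value);
            state.update(current);             // full serialize + RocksDB put
            out.collect(value);
        }
    }

If the maps grow large, MapState is usually the better fit with RocksDB, since each map entry then becomes its own RocksDB key/value and updates do not rewrite the whole object.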

Re: How to print the aggregated state every time it is updated?

2020-03-06 Thread kant kodali
…time. Then > you should call `DataStream.assignTimestampsAndWatermarks` to set the > timestamp and watermark. > A window is triggered when the watermark exceeds the window end time. > > Best, > Congxian > > > kant kodali wrote on Wed, Mar 4, 2020 at 5:11 AM: > >> Hi All, >>

How to print the aggregated state every time it is updated?

2020-03-03 Thread kant kodali
Hi All, I have a custom aggregated state that is represented by a Set, and I have a stream of values coming in from Kafka where I inspect, compute the custom aggregation, and store it in the Set. Now, I am trying to figure out how I can print the updated value every time this state is updated. Imagine I have
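
One way to get per-update output, sketched under the assumption that the aggregate is a HashSet of strings and the key is derived from the value (all class and field names here are illustrative): hold the set in keyed state and emit it downstream whenever it changes, so a plain print() shows every update.

    import java.util.HashSet;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.TypeHint;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // emits the full aggregated set downstream every time it changes
    class SetAggregator extends KeyedProcessFunction<String, String, HashSet<String>> {
        private transient ValueState<HashSet<String>> setState;

        @Override
        public void open(Configuration parameters) {
            setState = getRuntimeContext().getState(new ValueStateDescriptor<>(
                    "agg-set", TypeInformation.of(new TypeHint<HashSet<String>>() {})));
        }

        @Override
        public void processElement(String value, Context ctx, Collector<HashSet<String>> out)
                throws Exception {
            HashSet<String> set = setState.value();
            if (set == null) {
                set = new HashSet<>();
            }
            if (set.add(value)) {      // the state actually changed
                setState.update(set);
                out.collect(set);      // emit the updated aggregate immediately
            }
        }
    }
    // usage sketch: stream.keyBy(v -> v.substring(0, 1)).process(new SetAggregator()).print();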

Re: java.util.concurrent.ExecutionException

2020-03-03 Thread kant kodali
Hi Gary, This has to do with my Kafka. After restarting Kafka it seems to work fine! Thanks! On Tue, Mar 3, 2020 at 8:18 AM kant kodali wrote: > The program finished with the following exception: > > > org.apache.flink.client.program.ProgramInvocationException: The main > m

Re: java.util.concurrent.ExecutionException

2020-03-03 Thread kant kodali
Hi, > > Can you post the complete stacktrace? > > Best, > Gary > > On Tue, Mar 3, 2020 at 1:08 PM kant kodali wrote: > >> Hi All, >> >> I am just trying to read edges, which have the following format in Kafka: >> >> 1,2 >> 1,3 >> 1,

zookeeper.connect is not needed but Flink requires it

2020-03-03 Thread kant kodali
Hi All, zookeeper.connect is not needed for KafkaConsumer or KafkaAdminClient; however, Flink requires it. You can also see in the Flink TaskManager logs that the KafkaConsumer does not recognize this property anyway. bsTableEnv.connect( new Kafka() .property("bootstrap.servers", "local
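
A hedged workaround sketch, continuing the descriptor style from the message (topic name and addresses are placeholders): satisfy the descriptor's validation by passing zookeeper.connect through as an ordinary property; the modern consumer simply ignores it, which is exactly the unrecognized-config warning visible in the TaskManager logs.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.DataTypes;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.descriptors.Csv;
    import org.apache.flink.table.descriptors.Kafka;
    import org.apache.flink.table.descriptors.Schema;

    public class ZkConnectWorkaround {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(env,
                    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

            bsTableEnv.connect(
                    new Kafka()
                        .version("universal")
                        .topic("my-topic")                                 // placeholder topic
                        .property("bootstrap.servers", "localhost:9092")
                        .property("zookeeper.connect", "localhost:2181"))  // ignored by the consumer
                .withFormat(new Csv())
                .withSchema(new Schema().field("f0", DataTypes.STRING()))
                .inAppendMode()
                .createTemporaryTable("kafka_source");
        }
    }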

java.util.concurrent.ExecutionException

2020-03-03 Thread kant kodali
Hi All, I am just trying to read edges, which have the following format in Kafka: 1,2 1,3 1,5 using the Table API and then converting to a DataStream of Edge objects and printing them. However, I am getting java.util.concurrent.ExecutionException, but not sure why? Here is the sample code: import org.

How is state stored in RocksDB?

2020-03-02 Thread kant kodali
Hi All, I am wondering how Flink serializes and deserializes state from RocksDB? What is the format used? For example, say I am doing some stateful streaming and say an object of my class below represents a state. How does Flink serialize and deserialize the object of MyClass below? Is it just

Re: Do I need to use Table API or DataStream API to construct sources?

2020-03-02 Thread kant kodali
ql-connector-kafka_2.11:${flinkVersion}" > > > On Sun, Mar 1, 2020 at 7:31 PM kant kodali wrote: > >> * What went wrong: >> Could not determine the dependencies of task ':shadowJar'. >> > Could not resolve all dependencies for configuration ':flinkShadowJar

Re: Do I need to use Table API or DataStream API to construct sources?

2020-03-01 Thread kant kodali
…discussion of how to implement such a sink for *both* batch and streaming > here: > https://issues.apache.org/jira/browse/FLINK-14807?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&focusedCommentId=17046455#comment-17046455 > > Best, > > Dawid > On 01/03/2020 12:00, k

Re: Do I need to use Table API or DataStream API to construct sources?

2020-03-01 Thread kant kodali
ons = [project.configurations.flinkShadowJar] mergeServiceFiles() manifest { attributes 'Main-Class': mainClassName } } On Sun, Mar 1, 2020 at 1:38 AM Benchao Li wrote: > I don't know how gradle works, but in Maven, packaging dependencies into > one

Re: Do I need to use Table API or DataStream API to construct sources?

2020-03-01 Thread kant kodali
…"select ..."); > List result = TableUtils.collectToList(table); > result. > > Currently, we are planning to implement Table#collect [1]; after > that, Table#head and Table#print may also be introduced soon. > > > The progra

Re: Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
Dawid know what’s going on here? > > Piotrek > > [1] > https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/org/apache/flink/table/api/TableEnvironment.html#registerTableSink-java.lang.String-org.apache.flink.table.sinks.TableSink- > > On 1 Mar 2020, at 07:50, ka

Re: Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
…withFormat(new Csv()) .withSchema(new Schema().field("f0", DataTypes.STRING())) .inAppendMode() .createTemporaryTable("kafka_target"); tableEnvironment.insertInto("kafka_target", resultTable); tableEnvironment.execute("Sam

Re: Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
…very useful, and I've created an > issue [2] to track this. > > [1] > https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sourceSinks.html#define-a-tablesink > [2] https://issues.apache.org/jira/browse/FLINK-16354 > > kant kodali wrote on Sun, Mar 1, 2020 at 2:30 AM:

Is CSV format supported for Kafka in Flink 1.10?

2020-02-29 Thread kant kodali
Hi, Is the CSV format supported for Kafka in Flink 1.10? It says I need to specify connector.type as filesystem, but the documentation says it is supported for Kafka? import org.apache.flink.contrib.streaming.state.RocksDBStateBackend; import org.apache.flink.runtime.state.StateBackend; import org.apache.
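
For the record, a sketch of the DDL route in 1.10, with placeholder topic and addresses, assuming a Blink-planner StreamTableEnvironment: 'format.type' = 'csv' does work with the Kafka connector, but only when the flink-csv artifact (or its sql-jar) is on the classpath; the filesystem-only restriction applies to the legacy OldCsv descriptor.

    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class CsvKafkaDdlSketch {
        // assumes a Blink-planner StreamTableEnvironment
        static void register(StreamTableEnvironment tableEnv) {
            tableEnv.sqlUpdate(
                "CREATE TABLE kafka_source (" +
                "  f0 STRING" +
                ") WITH (" +
                "  'connector.type' = 'kafka'," +
                "  'connector.version' = 'universal'," +
                "  'connector.topic' = 'my-topic'," +                          // placeholder topic
                "  'connector.properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format.type' = 'csv'" +                                    // needs flink-csv on the classpath
                ")");
        }
    }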

Re: Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html > [2] > https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#overview > [3] > https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#kafka-connector > > On 29

Re: Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
Also, why do I need to convert to a DataStream to print the rows of a table? Why not have a print method on the Table itself? On Sat, Feb 29, 2020 at 3:40 AM kant kodali wrote: > Hi All, > > Do I need to use the DataStream API or the Table API to construct sources? I am > just trying to rea

Do I need to use Table API or DataStream API to construct sources?

2020-02-29 Thread kant kodali
Hi All, Do I need to use the DataStream API or the Table API to construct sources? I am just trying to read from Kafka and print it to the console. And yes, I tried it with data streams and it works fine, but I want to do it using the Table-related APIs. I don't see any documentation or a sample on how to create Kaf
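
A minimal end-to-end sketch of the Table API route (the in-memory source stands in for Kafka so the example is self-contained; swap in a Kafka descriptor or DDL for a real topic). This also answers the follow-up about printing: in 1.10 a Table has no print() of its own, so you bridge back to a DataStream, or use the TableUtils.collectToList helper mentioned above.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class TablePrintSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .useBlinkPlanner().inStreamingMode().build();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

            DataStream<String> source = env.fromElements("a", "b"); // stand-in for the Kafka source
            tEnv.createTemporaryView("src", source);

            Table result = tEnv.sqlQuery("SELECT * FROM src");
            // use toRetractStream instead if the query produces updates (e.g. aggregations)
            tEnv.toAppendStream(result, Row.class).print();

            env.execute("table print sketch");
        }
    }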

Re: The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-28 Thread kant kodali
…test.runtimeClasspath += configurations.flinkShadowJar javadoc.classpath += configurations.flinkShadowJar } run.classpath = sourceSets.main.runtimeClasspath jar { manifest { attributes 'Built-By': System.getProperty('user.name'), 'Build-Jdk'

Re: Flink 1.10 exception : Unable to instantiate java compiler

2020-02-27 Thread kant kodali
Same problem! On Thu, Feb 27, 2020 at 11:10 PM LakeShen wrote: > Hi community, > now I am using Flink 1.10 to run the Flink task; the cluster type is YARN. I use the command line to submit my Flink job; the command line is just like this: > > flink run -m yarn-cluster --allowNonRe

Re: The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-27 Thread kant kodali
…As Jark said, > your user jar should not contain "org.codehaus.commons.compiler.ICompilerFactory" dependencies. This will > prevent Calcite from working. > > In 1.10, the Flink client was made to respect the classloading policy; the default > policy is child-first [1]. More det

Re: The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-27 Thread kant kodali
It works within the IDE but not when I submit using the command flink run myApp.jar. On Thu, Feb 27, 2020 at 3:32 PM kant kodali wrote: > Below is the sample code using Flink 1.10 > > public class Test { > > public static void main(String... args) t

Re: The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-27 Thread kant kodali
Query("SELECT * FROM sample1 INNER JOIN sample2 on sample1.f0=sample2.f0"); result.printSchema(); bsTableEnv.toRetractStream(result, Row.class).print(); bsTableEnv.execute("sample job"); } } On Thu, Feb 27, 2020 at 3:22 PM kant kodali wrote: > Fixed the typo.

Re: The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-27 Thread kant kodali
Fixed the typo. Hi All, my sample program works in Flink 1.9, but in 1.10 I get the following error when I am submitting the job; in other words, it fails to submit a job. Any idea? Thanks! On Thu, Feb 27, 2020 at 2:19 PM kant kodali wrote: > Hi All, > > > My sample program works

The main method caused an error: Unable to instantiate java compiler in Flink 1.10

2020-02-27 Thread kant kodali
Hi All, My sample program works in Flink 1.9, but in 1.10 I get the following error when I am submitting the job; in other words, it fails to submit a job. Any idea? Thanks! org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to instantiate java compiler

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html#hivecatalog Can I use the Hive catalog to store view definitions in HDFS? I am assuming the metastore can be anything, or does it have to be MySQL? On Thu, Feb 27, 2020 at 4:46 AM kant kodali wrote

How can I programmatically set RocksDBStateBackend?

2020-02-27 Thread kant kodali
Hi All, How can I programmatically set RocksDBStateBackend? I did the following [screenshot of the code omitted]. env.setStateBackend always shows as deprecated, so what is the right way to do this in Flink 1.10? Thanks!
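
For reference, a sketch of the commonly cited explanation for the warning (the checkpoint path is a placeholder): setStateBackend(AbstractStateBackend) is the deprecated overload, and since RocksDBStateBackend extends AbstractStateBackend, Java resolves to it; casting the argument to the StateBackend interface selects the non-deprecated overload instead.

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.runtime.state.StateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RocksDbBackendSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // the cast picks setStateBackend(StateBackend), the non-deprecated overload;
            // 'true' enables incremental checkpoints
            env.setStateBackend(
                    (StateBackend) new RocksDBStateBackend("file:///tmp/checkpoints", true));
        }
    }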

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-27 Thread kant kodali
godfrey he wrote: > Hi Kant, if you want to store the catalog data in the local filesystem/HDFS, > you can implement a user-defined catalog (just need to implement the Catalog > interface). > > Bests, > Godfrey > > kant kodali wrote on Wed, Feb 26, 2020 at 12:28 PM: > >> Hi Jingsong, >>

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-02-25 Thread kant kodali
n close. >> I think persisted views will be supported in 1.11. >> >> Best, >> Jark >> >> On Jan 20, 2020, at 18:46, kant kodali wrote: >> >> Hi Jingsong, >> >> Thanks a lot, I think I can live with >> TableEnvironment.createTemporaryView in

Re: Can Connected Components run on a streaming dataset using iterate delta?

2020-02-22 Thread kant kodali
Hi, Thanks for that, but it looks like it is already available for streaming at https://github.com/vasia/gelly-streaming, so I wonder why this is not part of Flink? There are no releases either. Thanks! On Tue, Feb 18, 2020 at 9:13 AM Yun Gao wrote: >Hi Kant, > > As far as I know

Can Connected Components run on a streaming dataset using iterate delta?

2020-02-17 Thread kant kodali
Hi All, I am wondering if connected components can run on streaming data? Or, say, an incremental batch? I see that with delta iteration not all vertices need to participate in every iteration, which

Re: is streaming outer join sending unnecessary traffic?

2020-02-01 Thread kant kodali
…your question. > > Cheers, > Till > > On Tue, Jan 28, 2020 at 10:46 PM kant kodali wrote: > >> Sorry, fixed some typos. >> >> I am doing a streaming outer join from four topics in Kafka; let's call >> them sample1, sample2, sample3, sample4. Each of these test top

Cypher support for flink graphs?

2020-01-29 Thread kant kodali
Hi All, Can we expect openCypher support for Flink graphs? Thanks!

Re: is streaming outer join sending unnecessary traffic?

2020-01-28 Thread kant kodali
Sorry, fixed some typos. I am doing a streaming outer join from four topics in Kafka; let's call them sample1, sample2, sample3, sample4. Each of these test topics has just one column, which is of tuple string. My query is this: SELECT * FROM sample1 FULL OUTER JOIN sample2 on sample1.f0=sample2.f0 F

is streaming outer join sending unnecessary traffic?

2020-01-28 Thread kant kodali
Hi All, I am doing a streaming outer join from four topics in Kafka; let's call them sample1, sample2, sample3, sample4. Each of these test topics has just one column, which is of tuple string. My query is this: SELECT * FROM sample1 FULL OUTER JOIN sample2 on sample1.f0=sample2.f0 FULL OUTER JOIN sa

Re: where does flink store the intermediate results of a join and what is the key?

2020-01-28 Thread kant kodali
>>>> QueryableStateStream/QueryableStateClient [1], or around "FOR SYSTEM_TIME >>>> AS OF" [2], or on other APIs/concepts (is there a FLIP?)? >>>> >>>> Cheers >>>> Ben >>>> >>>> [1] >>>> https

Re: where does flink store the intermediate results of a join and what is the key?

2020-01-22 Thread kant kodali
…However, the uid of operators is not set in the Table API & SQL, > so I'm not sure whether it works or not. > > 3) You can have a custom state backend by > implementing the org.apache.flink.runtime.state.StateBackend interface, and using it > via `env.setStateBackend(…)`. > > Best, > Ja

Re: where does flink store the intermediate results of a join and what is the key?

2020-01-21 Thread kant kodali
/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/join/stream/state/JoinRecordStateViews.java#L45 > > > On Jan 21, 2020, at 18:01, kant kodali wrote: > > Hi All, > > If I run a query like this > > StreamTableEnvironment.sqlQuer

where does flink store the intermediate results of a join and what is the key?

2020-01-21 Thread kant kodali
Hi All, If I run a query like this: StreamTableEnvironment.sqlQuery("select * from table1 join table2 on table1.col1 = table2.col1") 1) Where will Flink store the intermediate result? Imagine flink-conf.yaml says state.backend = 'rocksdb'. 2) If the intermediate results are stored in RocksDB, then

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-20 Thread kant kodali
…client. > - Or use the Hive catalog; in 1.10, we support querying catalog views. > > FLIP-71 will be finished in 1.11 soon. > > Best, > Jingsong Lee > > On Sun, Jan 19, 2020 at 4:10 PM kant kodali wrote: > >> I tried the following. >> >> bsTableEnv.sqlUpdate("

Re: Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-19 Thread kant kodali
moment? Thanks! On Sat, Jan 18, 2020 at 6:24 PM kant kodali wrote: > Hi All, > > Does Flink 1.9 support create or replace views syntax in raw SQL? like > spark streaming does? > > Thanks! >

Re: some basic questions

2020-01-18 Thread kant kodali
…seems to work fine, although I am not sure which one is the correct usage. Thanks! On Sat, Jan 18, 2020 at 6:52 PM kant kodali wrote: > Hi Godfrey, > > Thanks a lot for your response. I just tried it with env.execute("simple > job") but I still get the same error message. >

Re: some basic questions

2020-01-18 Thread kant kodali
…In 1.10, the Blink > planner is more stable; we are switching to the Blink planner as the default > step by step [0]. > > [0] > http://mail-archives.apache.org/mod_mbox/flink-dev/202001.mbox/%3CCAELO930%2B3RJ5m4hGQ7fbS-CS%3DcfJe5ENcRmZ%3DT_hey-uL6c27g%40mail.gmail.com%3E > > kant kodali

Does Flink 1.9 support create or replace views syntax in raw sql?

2020-01-18 Thread kant kodali
Hi All, Does Flink 1.9 support create or replace views syntax in raw SQL, like Spark Streaming does? Thanks!

some basic questions

2020-01-18 Thread kant kodali
Hi All, 1) The documentation says full outer join is supported; however, the code below just exits with value 1 and no error message. import org.apache.flink.api.common.serialization.SimpleStringSchema; import org.apache.flink.streaming.api.datastream.DataStream; import org.apache.flink.streaming.api.

Why would indefinitely growing state be an issue for Flink while doing stream-to-stream joins?

2020-01-16 Thread kant kodali
Hi All, The doc https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/joins.html#regular-joins says the following: "However, this operation has an important implication: it requires to keep both sides of the join input in Flink’s state forever. Thus, the resource usage will grow indefinitely as well."
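
The standard mitigation discussed for this is idle state retention, which bounds the otherwise forever-growing join state. A sketch using the 1.10-era TableConfig API (retention values are illustrative): state for a key untouched for the minimum time becomes eligible for cleanup and is gone by the maximum time, at the cost of potentially incorrect results for very late-joining rows.

    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class IdleStateRetentionSketch {
        public static void main(String[] args) {
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(
                    StreamExecutionEnvironment.getExecutionEnvironment(),
                    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

            // keep per-key join state at least 12h and at most 24h after the last access
            tableEnv.getConfig().setIdleStateRetentionTime(Time.hours(12), Time.hours(24));
        }
    }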

Is there a way to clean up the state based on custom/business logic?

2020-01-14 Thread kant kodali
Hi All, I read through the doc below and I am wondering if I can clean up the state based on custom logic rather than min and max retention time? For example, I want to, say, clean up all the state where the key = foo, or, say, the value = bar. So until the keys reach a particular value, just keep accumulat
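
Timed retention aside, business-rule cleanup is straightforward for state you manage yourself; a sketch with the thread's own foo/bar conditions (types and the accumulation logic are illustrative). The caveat is that this only covers user-defined keyed state; the state that SQL operators keep internally is only controllable through the time-based retention knobs.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // drops the accumulated state for a key as soon as a business condition matches
    class BusinessLogicCleanup extends KeyedProcessFunction<String, String, String> {
        private transient ValueState<String> accumulated;

        @Override
        public void open(Configuration parameters) {
            accumulated = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("acc", String.class));
        }

        @Override
        public void processElement(String value, Context ctx, Collector<String> out)
                throws Exception {
            if ("foo".equals(ctx.getCurrentKey()) || "bar".equals(value)) {
                accumulated.clear();   // custom/business-rule cleanup, no TTL involved
                return;
            }
            String acc = accumulated.value();
            acc = (acc == null) ? value : acc + "," + value;
            accumulated.update(acc);   // keep accumulating until the condition hits
            out.collect(acc);
        }
    }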

Re: are blink changes merged into flink 1.9?

2020-01-12 Thread kant kodali
…ble/common.html#main-differences-between-the-two-planners > [2] > https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/common.html#create-a-tableenvironment > > > kant kodali wrote on Sun, Jan 12, 2020 at 7:48 AM: > >> Hi All, >> >> Are the Blink changes merged into fl

are blink changes merged into flink 1.9?

2020-01-11 Thread kant kodali
Hi All, Are the Blink changes merged into Flink 1.9? It looks like there are a lot of features and optimizations in Blink, and if they aren't merged into Flink 1.9, I am not sure which one to use. Is there any plan towards merging it? Thanks!

Re: Are there pipeline API's for ETL?

2020-01-10 Thread kant kodali
your business logic. > > Here are the generic APIs list. [1] > > Best, > Vino > > [1]: > https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/stream/operators/index.html > > kant kodali wrote on Sat, Jan 11, 2020 at 9:06 AM: >> Hi All, >> >> I am wondering if there are pipeline API's for ETL? >> >> Thanks! >> >>

Are there pipeline API's for ETL?

2020-01-10 Thread kant kodali
Hi All, I am wondering if there are pipeline API's for ETL? Thanks!

Re: How to emit changed data only w/ Flink trigger?

2019-11-01 Thread kant kodali
I am new to Flink, so I am not sure if I am giving you the correct answer; you might want to wait for others to respond. But I think you should do .inUpsertMode(). On Fri, Nov 1, 2019 at 2:38 AM Qi Kang wrote: > Hi all, > > > We have a Flink job which aggregates sales volume and GMV data of ea

is Streaming Ledger open source?

2019-11-01 Thread kant kodali
Hi All, Is https://github.com/dataArtisans/da-streamingledger an open-source project? It looks to me like this project is not actively maintained. Is that correct? The last commit is one year ago and it shows there are 0 contributors. Thanks!

Re: How to stream intermediate data that is stored in external storage?

2019-10-31 Thread kant kodali
…the table C. That upsert > action itself serves as a join function; there's no need to join in Flink > at all. > > There are many tools out there that can be used for that ingestion. Flink, of > course, can be used for that purpose, but for me it's overkill. > > Regards,

Re: How to stream intermediate data that is stored in external storage?

2019-10-31 Thread kant kodali
Hi Averell, yes, I want to run ad-hoc SQL queries on the joined data as well as data that may join in the future. For example, let's say you take datasets A and B in streaming mode: a row in A can join with a row in B at some time in the future, but meanwhile, if I query the intermediate state

Are dynamic tables backed by RocksDB?

2019-10-31 Thread kant kodali
Hi All, Are dynamic tables backed by RocksDB or in memory? If they are backed by RocksDB, can I use SQL to query the state? Thanks!

Re: How to stream intermediate data that is stored in external storage?

2019-10-31 Thread kant kodali
Hi Averell, I want to write intermediate results (A join B) incrementally and in real time to some external storage so I can query it using SQL. I am new to Flink, so I am trying to find out 1) whether such a mechanism exists, and 2) if not, what the alternatives are. Thanks On Thu, Oct 31, 2019 at 1:42 AM

Re: How to stream intermediate data that is stored in external storage?

2019-10-30 Thread kant kodali
never used it. > > I think you would have to implement your own custom operator that would > output changes to its internal state as a side output. > > Piotrek > > On 30 Oct 2019, at 16:14, kant kodali wrote: > > Hi Piotr, > > I am talking about the internal sta

Re: How to stream intermediate data that is stored in external storage?

2019-10-30 Thread kant kodali
/libs/state_processor_api.html > [2] https://flink.apache.org/feature/2019/09/13/state-processor-api.html > [3] > https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/queryable_state.html > > On 29 Oct 2019, at 16:42, kant kodali wrote: > > Hi All, > >

How to stream intermediate data that is stored in external storage?

2019-10-29 Thread kant kodali
Hi All, I want to do a full outer join on two streaming data sources and store the state of the full outer join in some external storage like RocksDB or something else. And then I want to use this intermediate state as a streaming source again, do some transformation, and write it to some external store.
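
One commonly suggested shape for this, sketched under the assumption that tables A and B are already registered on a StreamTableEnvironment named tEnv (the join columns and sink body are illustrative): convert the join result to a retract stream and apply each change to an external key/value store, which then doubles as the queryable intermediate state.

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.functions.sink.SinkFunction;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class JoinToExternalStore {
        // assumes tables A and B are already registered on tEnv
        static void wire(StreamTableEnvironment tEnv) {
            Table joined = tEnv.sqlQuery("SELECT * FROM A FULL OUTER JOIN B ON A.id = B.id");

            // each element is (true = add/update, false = retract); a store that
            // supports upserts and deletes by key can apply this changelog directly
            tEnv.toRetractStream(joined, Row.class)
                .addSink(new SinkFunction<Tuple2<Boolean, Row>>() {
                    @Override
                    public void invoke(Tuple2<Boolean, Row> change, Context context) {
                        // hypothetical external write: upsert change.f1 when
                        // change.f0 is true, delete its key when false
                    }
                });
        }
    }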

Re: can we do Flink CEP on event stream or batch or both?

2019-06-19 Thread kant kodali
same processing semantics. > > Best, > Fabian > > > On Tue, Apr 30, 2019 at 06:49, kant kodali wrote: > >> Hi All, >> >> I have the following questions. >> >> 1) Can we do Flink CEP on an event stream or batch? >> 2) If we can do stream

https://github.com/google/zetasql

2019-05-21 Thread kant kodali
https://github.com/google/zetasql

can we do Flink CEP on event stream or batch or both?

2019-04-29 Thread kant kodali
Hi All, I have the following questions. 1) Can we do Flink CEP on an event stream or batch? 2) If we can do streaming, I wonder how long we can keep the stream stateful? I also wonder if anyone has successfully done stateful streaming for days or months (with or without CEP)? Or is stateful stream

Re: status on FLINK-7129

2019-04-27 Thread kant kodali
> >> +1 >> >> On Tue, Apr 23, 2019, 4:57 AM kant kodali wrote: >> >>> Thanks all for the reply. I believe this is one of the most important >>> features that differentiate Flink from other stream processing engines, as >>> others don't even

Re: status on FLINK-7129

2019-04-23 Thread kant kodali
Konstantin Knauf wrote: > > Hi Kant, > > as far as I know, no one is currently working on this. Dawid (cc) maybe > knows more. > > Cheers, > > Konstantin > > On Sat, Apr 20, 2019 at 12:12 PM kant kodali wrote: > >> Hi All, >> >> There seems to be

status on FLINK-7129

2019-04-20 Thread kant kodali
Hi All, There seems to be a lot of interest in https://issues.apache.org/jira/browse/FLINK-7129. Any rough idea on the status of this issue? Thanks!

Re: Does Flink support stream-stream outer joins in the latest version?

2018-03-07 Thread kant kodali
gt;> FROM Orders o, Shipments s >> WHERE o.id = s.orderId AND >> o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime > > > Non-windowed Join: > >> SELECT * >> FROM Orders o, Shipments s >> WHERE o.id = s.orderId > > >

Re: Does Flink support stream-stream outer joins in the latest version?

2018-03-07 Thread kant kodali
work in progress. >> >> Hope that helps. >> >> Best, >> Xingcan >> >> [1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tableApi.html#joins >> [2] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sq

Does Flink support stream-stream outer joins in the latest version?

2018-03-06 Thread kant kodali
Hi All, Does Flink support stream-stream outer joins in the latest version? Thanks!

Re: Does Queryable State only support K/V queries not SQL?

2018-03-03 Thread kant kodali
…supports key point queries, i.e., you can query a > keyed state for the value of a key. > Support for SQL is not on the roadmap. > > Best, Fabian > > 2018-02-25 14:26 GMT+01:00 kant kodali: > >> Hi All, >> >> 1) Does Queryable State support SQL? By which I mean

Re: Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

2018-02-26 Thread kant kodali
Thanks a lot! On Mon, Feb 26, 2018 at 9:19 AM, Nico Kruber wrote: > Judging from the code, you should separate different jars with a colon > ":", i.e. "--addclasspath jar1:jar2" > > > Nico > > On 26/02/18 10:36, kant kodali wrote: > > Hi Gordon,

Re: Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

2018-02-26 Thread kant kodali
…paths to the dependency > jars. > > Cheers, > Gordon > > > On 25 February 2018 at 12:22:28 PM, kant kodali (kanth...@gmail.com) > wrote: > > The exception went away after downloading > flink-connector-kafka-base_2.11-1.4.1.jar > to the lib folder. > > On Sat, Feb 24, 2

Does Queryable State only support K/V queries not SQL?

2018-02-25 Thread kant kodali
Hi All, 1) Does Queryable State support SQL? By which I mean, can I issue a full-fledged SQL query like, say, ("select * from table where foo='hello' group by name")? 2) Does Queryable State support offset and limit? Because if I have a million rows, I don't want to get them all at once. Sorry if these
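
To make the limitation concrete, here is a sketch of the only query shape Queryable State offers, a point lookup of a single key (host, port, state name, key, and types are placeholders): no scans, no predicates, no limit/offset, and no SQL.

    import java.util.concurrent.CompletableFuture;
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.queryablestate.client.QueryableStateClient;

    public class PointQuerySketch {
        public static void main(String[] args) throws Exception {
            // placeholder host/port: the queryable state proxy on a TaskManager
            QueryableStateClient client = new QueryableStateClient("taskmanager-host", 9069);
            JobID jobId = JobID.fromHexString(args[0]); // ID of the running job

            // must match the descriptor the job registered as queryable state
            ValueStateDescriptor<Long> descriptor =
                    new ValueStateDescriptor<>("my-count", Long.class);

            CompletableFuture<ValueState<Long>> future = client.getKvState(
                    jobId, "my-count", "some-key",
                    BasicTypeInfo.STRING_TYPE_INFO, descriptor);

            System.out.println("value = " + future.get().value()); // one key, one value
        }
    }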

Re: Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

2018-02-24 Thread kant kodali
The exception went away after downloading flink-connector-kafka-base_2.11-1.4.1.jar to the lib folder. On Sat, Feb 24, 2018 at 6:36 PM, kant kodali wrote: > Hi, > > I couldn't get Flink and Kafka working together. It looks like all the > examples I tried from the website fail with the fo

Re: How to create TableEnvironment using scala-shell

2018-02-24 Thread kant kodali
Please ignore this. I fixed it by moving opt/flink-table_2.11-1.4.1.jar to lib/flink-table_2.11-1.4.1.jar On Sat, Feb 24, 2018 at 4:06 AM, kant kodali wrote: > Hi All, > > I am new to Flink and I am wondering how to create a TableEnvironment in > scala-shell? I get an import error

Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase

2018-02-24 Thread kant kodali
Hi, I couldn't get Flink and Kafka working together. It looks like all the examples I tried from the website fail with the following exception. Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase *or when I do something like this, like it is

How to create TableEnvironment using scala-shell

2018-02-24 Thread kant kodali
Hi All, I am new to Flink and I am wondering how to create a TableEnvironment in scala-shell? I get the import error below. I am using Flink 1.4.1. 63: error: object table is not a member of package org.apache.flink I tried the following: ./start-scala-shell.sh local import org.apache.fli

Re: Does Flink have a JDBC server where I can submit Calcite Streaming Queries?

2017-09-08 Thread kant kodali
…submission would work with the regular Flink job client, i.e., it > would pick up the regular Flink config. > > Best, Fabian > > 2017-09-08 10:05 GMT+02:00 kant kodali: > >> Hi Fabian, >> >> Thanks for the response. I understand the common approach is to write a >>

Re: Does Flink have a JDBC server where I can submit Calcite Streaming Queries?

2017-09-08 Thread kant kodali
https://issues.apache.org/jira/browse/FLINK-7594 > > > 2017-09-07 21:43 GMT+02:00 kant kodali: > >> Hi All, >> >> Does Flink have a JDBC server where I can submit Calcite streaming >> queries, such that I get a stream of responses back from Flink forever via >> JDBC? What is the standard way to do this? >> >> Thanks, >> Kant >> > >

Does Flink have a JDBC server where I can submit Calcite Streaming Queries?

2017-09-07 Thread kant kodali
Hi All, Does Flink have a JDBC server where I can submit Calcite streaming queries, such that I get a stream of responses back from Flink forever via JDBC? What is the standard way to do this? Thanks, Kant

Re: can flink do streaming from data sources other than Kafka?

2017-09-07 Thread kant kodali
Tzu-Li (Gordon) Tai wrote: > Ah, I see. I’m not aware of any existing work / JIRAs on streaming sources > for Cassandra or HBase, only sinks. > If you are interested in one, could you open JIRAs for them? > > > On 7 September 2017 at 4:11:05 PM, kant kodali (kanth...@gmail.co

Re: can flink do streaming from data sources other than Kafka?

2017-09-07 Thread kant kodali
…ts/flink/flink-docs-release-1.3/dev/connectors/index.html > [2] http://bahir.apache.org/ > [3] https://issues.apache.org/jira/browse/FLINK-4266 > > On 7 September 2017 at 2:58:38 PM, kant kodali (kanth...@gmail.com) wrote: > > Hi All, > > I am wondering if Flink can

can flink do streaming from data sources other than Kafka?

2017-09-06 Thread kant kodali
Hi All, I am wondering if Flink can do streaming from data sources other than Kafka. For example, can Flink stream from a database like Cassandra, HBase, or MongoDB to sinks like, say, Elasticsearch or Kafka? Also, for out-of-core stateful streaming, is RocksDB the only option? Can I use some ot

Comparison between Flink and Kafka Stream Processing

2017-04-11 Thread kant kodali
Hi All, I have a simple question. Here is an article that addresses the differences between Flink and Kafka Streams (in fact, there is a table if you scroll down). While I understand those are the differences

Hi

2017-04-07 Thread kant kodali
Hi All, I read the docs; however, I still have the following question: for stateful stream processing, is HDFS mandatory? In some places I see it is required, and in other places I see that RocksDB can be used. I just want to know if HDFS is mandatory for stateful stream processing. Thanks!

Re: Can we do batch writes on cassandra using flink while leveraging the locality?

2016-10-28 Thread kant kodali
…maintaining state in Flink and the > partitioning changes, your job might produce inaccurate output. If, on the > other hand, you are only using the partitioner just before the output, > dynamic partitioning changes might be ok. > > > From: kant kodali > Date: Thursday, October 27,

Can we do batch writes on cassandra using flink while leveraging the locality?

2016-10-27 Thread kant kodali
Can we do batch writes on Cassandra using Flink while leveraging locality? For example, batch writes in Cassandra will put pressure on the coordinator, but since the connectors are built by leveraging locality, I was wondering if we could do a batch of writes on a node where the batch belongs