Re: Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread Jingsong Li
It is not quite what you said. There are two kinds of queries: append queries and update queries. For an update query there are two ways to handle it: one is retract, the other is upsert. So a sink can choose a mode for handling the update query; choosing either one is OK. You can read more in [1]. [1] http
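Jingsong's distinction between the two ways a sink can handle an update query can be sketched with plain Java collections (a hypothetical illustration, not Flink's actual sink interfaces): in retract mode an update arrives as a retraction of the old row plus an addition of the new row, while in upsert mode the sink knows the key and receives a single upsert message, or a null tombstone for a deletion.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not Flink's actual classes): how a sink can consume
// the same update query either as a retract stream or as an upsert stream.
public class UpdateModes {

    // Retract mode: an update arrives as a retraction of the old row
    // followed by an addition of the new row. No primary key is required;
    // the sink matches the retracted row by value.
    public static Map<String, Long> applyRetract(String[][] changelog) {
        Map<String, Long> table = new HashMap<>();
        for (String[] msg : changelog) {
            String kind = msg[0];          // "+" add, "-" retract
            String key = msg[1];
            long value = Long.parseLong(msg[2]);
            if (kind.equals("+")) {
                table.put(key, value);
            } else {
                table.remove(key, value);  // retract the previously emitted row
            }
        }
        return table;
    }

    // Upsert mode: the sink knows the key, so an update is a single
    // upsert message and a deletion is a tombstone (null value).
    public static Map<String, Long> applyUpsert(String[][] changelog) {
        Map<String, Long> table = new HashMap<>();
        for (String[] msg : changelog) {
            String key = msg[0];
            String value = msg[1];
            if (value == null) {
                table.remove(key);         // delete message
            } else {
                table.put(key, Long.parseLong(value));
            }
        }
        return table;
    }
}
```

Both modes converge on the same final table contents, which is why a sink only needs to support one of them.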

Re: Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread wangl...@geekplus.com.cn
Thanks Jingsong. So JDBCTableSink now supports append and upsert mode, but retract mode is not available yet. Is that right? Thanks, Lei wangl...@geekplus.com.cn Sender: Jingsong Li Send Time: 2020-03-25 11:39 Receiver: wangl...@geekplus.com.cn cc: user Subject: Re: Where can i find MySQL retrac

Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread Jingsong Li
Hi, Maybe you have some misunderstanding about the upsert sink. You can take a look at [1]; it can deal with "delete" records. [1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/dynamic_tables.html#table-to-stream-conversion Best, Jingsong Lee On Wed, Mar 25, 2020 at 11:37

Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread Jingsong Li
Hi, This can be an upsert stream [1], and JDBC has an upsert sink now [2]. [1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/dynamic_tables.html#table-to-stream-conversion [2] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#jdbc-connector Be
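As a rough illustration of what a JDBC upsert sink conceptually emits for the MySQL dialect (the class and method names below are made up for this sketch; the real connector builds its dialect-specific statements internally): one `INSERT ... ON DUPLICATE KEY UPDATE` statement covers both insert and update, and a retraction on the key becomes a `DELETE`.

```java
// Simplified sketch of the statements a MySQL upsert sink could generate.
// Names are illustrative, not Flink's actual API.
public class MySqlUpsertStatements {

    // One statement per key handles both the insert and the update case.
    public static String upsertSql(String table, String[] keyFields, String[] valueFields) {
        StringBuilder cols = new StringBuilder();
        StringBuilder marks = new StringBuilder();
        StringBuilder updates = new StringBuilder();
        String[] all = concat(keyFields, valueFields);
        for (int i = 0; i < all.length; i++) {
            if (i > 0) { cols.append(", "); marks.append(", "); }
            cols.append(all[i]);
            marks.append("?");
        }
        for (int i = 0; i < valueFields.length; i++) {
            if (i > 0) updates.append(", ");
            updates.append(valueFields[i]).append("=VALUES(").append(valueFields[i]).append(")");
        }
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")"
                + " ON DUPLICATE KEY UPDATE " + updates;
    }

    // A "delete" record in the upsert stream maps to a DELETE by key.
    public static String deleteSql(String table, String[] keyFields) {
        StringBuilder where = new StringBuilder();
        for (int i = 0; i < keyFields.length; i++) {
            if (i > 0) where.append(" AND ");
            where.append(keyFields[i]).append("=?");
        }
        return "DELETE FROM " + table + " WHERE " + where;
    }

    private static String[] concat(String[] a, String[] b) {
        String[] out = new String[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}
```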

Re: Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread wangl...@geekplus.com.cn
Seems it is here: https://github.com/apache/flink/tree/master/flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc There's no JDBCRetractTableSink, only append and upsert. I am confused about why the MySQL record can be deleted. Thanks, Lei wangl...@geekplus.com.cn Sen

Re: Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread wangl...@geekplus.com.cn
Thanks Jingsong. When executing this SQL, the MySQL table record can be deleted, so I guess it is a retract stream. I want to know exactly which Java code is generated for it and have a look at it. Thanks, Lei wangl...@geekplus.com.cn Sender: Jingsong Li Send Time: 2020-03-25 11:14 Receiver:

Re: Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread Jingsong Li
Hi, This can be an upsert stream [1] Best, Jingsong Lee On Wed, Mar 25, 2020 at 11:12 AM wangl...@geekplus.com.cn < wangl...@geekplus.com.cn> wrote: > > Create one table with kafka, another table with MySQL using flinksql. > Write a sql to read from kafka and write to MySQL. > > INSERT INTO my

Where can I find MySQL retract stream table sink Java source code?

2020-03-24 Thread wangl...@geekplus.com.cn
Create one table with Kafka and another table with MySQL using Flink SQL. Write a SQL query to read from Kafka and write to MySQL. INSERT INTO mysqlTable SELECT status, COUNT(order_no) AS num FROM (SELECT order_no, LAST_VALUE(status) AS status FROM kafkaTable GROUP BY order_no)
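To see why this query produces updates and deletions rather than plain appends, here is a minimal Java model of the aggregation (purely illustrative, not generated Flink code): when an order's last status changes, the count for its old status must be corrected, which is exactly what forces the sink to update or delete rows in MySQL.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of: SELECT status, COUNT(order_no) FROM
//   (SELECT order_no, LAST_VALUE(status) AS status FROM kafkaTable GROUP BY order_no)
// GROUP BY status
public class StatusCounts {
    // Last observed status per order_no (the inner LAST_VALUE aggregation).
    private final Map<String, String> lastStatus = new HashMap<>();

    // Feed one Kafka record and return the current count per status.
    public Map<String, Long> update(String orderNo, String status) {
        lastStatus.put(orderNo, status);
        Map<String, Long> counts = new HashMap<>();
        for (String s : lastStatus.values()) {
            counts.merge(s, 1L, Long::sum);
        }
        return counts;
    }
}
```

When order o1 moves from NEW to PAID, the NEW row's count drops to zero, so the result table no longer contains a NEW row at all: the sink has to delete it.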

Re: Deploying native Kubernetes via yaml files?

2020-03-24 Thread Yang Wang
Hi Niels, I have created a ticket [1] to track yaml-file submission for the native K8s integration. Feel free to share your thoughts on this approach. [1]. https://issues.apache.org/jira/browse/FLINK-16760 Best, Yang Niels Basjes wrote on Tue, Mar 24, 2020 at 11:10 PM: > Thanks. > I'll have a l

Re: How to calculate one alarm strategy for each device or one alarm strategy for each type of IoT device

2020-03-24 Thread yang xu
Hi Dawid, I use Flink to calculate IoT device alarms. My scenario is that each device has an independent alarm strategy. For example, I check whether the temperature in 10 consecutive events from a device is higher than 10 degrees. I use: sourceStream.keyBy("deviceNo") .flatMap(new
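The per-device state such a keyed flatMap needs can be sketched with plain Java (a hypothetical model, not Flink's state API; class and method names are made up, and the threshold and run length would come from each device's own strategy): track the length of the current run of high readings per device key and fire once the run reaches the configured length.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-key state behind keyBy("deviceNo").flatMap(...):
// count consecutive readings above the threshold per device.
public class ConsecutiveHighTempAlarm {
    private final double threshold;
    private final int requiredRun;
    // In Flink this map would be keyed ValueState<Integer>, scoped per deviceNo.
    private final Map<String, Integer> runLength = new HashMap<>();

    public ConsecutiveHighTempAlarm(double threshold, int requiredRun) {
        this.threshold = threshold;
        this.requiredRun = requiredRun;
    }

    // Returns true when the alarm fires for this device.
    public boolean onReading(String deviceNo, double temperature) {
        int run = temperature > threshold ? runLength.getOrDefault(deviceNo, 0) + 1 : 0;
        runLength.put(deviceNo, run);
        if (run >= requiredRun) {
            runLength.put(deviceNo, 0); // reset after firing
            return true;
        }
        return false;
    }
}
```

The test below uses a run length of 3 instead of 10 just to keep it short; note that readings from another device do not interrupt a device's run, because the state is per key.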

Re: java.lang.AbstractMethodError when implementing KafkaSerializationSchema

2020-03-24 Thread Steve Whelan
Hi Arvid, Interestingly, my job runs successfully in a docker container (image flink:1.9.0-scala_2.11) but is failing with the java.lang.AbstractMethodError on AWS EMR (non-docker). I am compiling with java version OpenJDK 1.8.0_242, which is the same version my EMR cluster is running. Though

Re: Kerberos authentication for SQL CLI

2020-03-24 Thread Dawid Wysakowicz
I think the reason why it's different from the CliFrontend is that the SQL client is much younger and, as far as I know, never reached "production" readiness (as per the docs [1], it's still marked as Beta; plus see the first "Attention" ;) ). I think it certainly makes sense to have a proper securit

Re: Deploying native Kubernetes via yaml files?

2020-03-24 Thread Yang Wang
Hi Niels, Currently, the native-integration Flink cluster cannot be created via a yaml file. The reason we introduced the native integration is for users who are not familiar with K8s and kubectl, so we want to make it easier for our Flink users to deploy a Flink cluster on K8s. However, i

Re: Lack of KeyedBroadcastStateBootstrapFunction

2020-03-24 Thread Dawid Wysakowicz
Hi, I am not very familiar with the State Processor API, but from a brief look at it, I think you are right: the State Processor API does not support mixing different kinds of state in a single operator for now, at least not in a nice way. Probably you could implement the KeyedBroadcastSt

Re: Deploying native Kubernetes via yaml files?

2020-03-24 Thread Ufuk Celebi
PS: See also https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html#flink-job-cluster-on-kubernetes On Tue, Mar 24, 2020 at 2:49 PM Ufuk Celebi wrote: > Hey Niels, > > you can check out the README with example configuration files here: > https://github.com/ap

Re: Deploying native Kubernetes via yaml files?

2020-03-24 Thread Ufuk Celebi
Hey Niels, you can check out the README with example configuration files here: https://github.com/apache/flink/tree/master/flink-container/kubernetes Is that what you were looking for? Best, Ufuk On Tue, Mar 24, 2020 at 2:42 PM Niels Basjes wrote: > Hi, > > As clearly documented here > https

Deploying native Kubernetes via yaml files?

2020-03-24 Thread Niels Basjes
Hi, As clearly documented here https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html the current way of deploying Flink natively on Kubernetes is by running the ./bin/kubernetes-session.sh script that runs some Java code that does "magic" to deploy it on the

Re: Issue with single job yarn flink cluster HA

2020-03-24 Thread Andrey Zagrebin
Hi Dinesh, If the current leader crashes (e.g. due to network failures), then getting these messages does not look like a problem during the leader re-election. They look to me just like warnings that caused failover. Do you observe any problem with your application? Does the failover not work, e.g. n

Re: Kerberos authentication for SQL CLI

2020-03-24 Thread Gyula Fóra
Thanks Dawid, I think you are right that most things should work like this just fine. Maybe some catalogs will need this at some point, but not right now. I was just wondering why this is different from how the CliFrontend works, which also installs the security context on the client side. I

Re: How to calculate one alarm strategy for each device or one alarm strategy for each type of IoT device

2020-03-24 Thread Dawid Wysakowicz
Hi, Could you elaborate a bit more on what you want to achieve? What have you tried so far? Could you share some code with us? What problems are you facing? From the vague description you provided, you should be able to design it with e.g. KeyedProcessFunction [1]. Best, Dawid [1] https://ci.apac

Re: Issues with Watermark generation after join

2020-03-24 Thread Timo Walther
Or better: "But for sources, you need to emit a watermark from all sources in order to have progress in event-time." On 24.03.20 13:09, Timo Walther wrote: Hi, 1) yes with "partition" I meant "parallel instance". If the watermarking is correct in the DataStream API. The Table API and SQL wi

Re: Issues with Watermark generation after join

2020-03-24 Thread Timo Walther
Hi, 1) yes, with "partition" I meant "parallel instance". If the watermarking is correct in the DataStream API, the Table API and SQL will take care that it remains correct. E.g. you can only perform a TUMBLE window if the timestamp column has not lost its time attribute property. A regular JO
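Timo's point about needing a watermark from all sources follows from the usual min-combination rule: downstream event time only advances to the minimum watermark across all inputs, so one source (or parallel instance) that never emits a watermark stalls every event-time window behind it. A minimal sketch of that rule:

```java
// Sketch of how a downstream operator combines input watermarks:
// event time advances only to the minimum across all inputs.
public class CombinedWatermark {
    public static long combine(long[] inputWatermarks) {
        long min = Long.MAX_VALUE;
        for (long w : inputWatermarks) {
            min = Math.min(min, w);
        }
        return min;
    }
}
```

An input that stays at its initial watermark (Long.MIN_VALUE here) pins the combined watermark there, which is exactly the "no progress in event time" symptom.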

Re: Importance of calling mapState.clear() when no entries in mapState

2020-03-24 Thread Dawid Wysakowicz
I think there should be no reason to do that. Best, Dawid On 24/03/2020 09:29, Ilya Karpov wrote: > Hi, > > given: > - flink 1.6.1 > - stateful function with MapState mapState = //init logic; > > Is there any reason I should call mapState.clear() if I know beforehand that > there are no entrie

Re: Kerberos authentication for SQL CLI

2020-03-24 Thread Dawid Wysakowicz
Hi Gyula, As far as I can tell, the SQL CLI does not support Kerberos natively. The SQL CLI submits all queries to a running Flink cluster, so if you kerberize the cluster, the queries will use that configuration. On a different note, out of curiosity: what would you expect the SQL CLI to use th

Re: Issues with Watermark generation after join

2020-03-24 Thread Dominik Wosiński
Hey Timo, Thanks a lot for this answer! I was mostly using the DataStream API, so it's good to know the difference. I have follow-up questions then; I will be glad for clarification: 1) So, for the SQL Join operator, is the partition the parallel instance of the operator, or is it the table partitio

Re: Object has non serializable fields

2020-03-24 Thread Kostas Kloudas
Hi Eyal, This is a known issue which is fixed now (see [1]) and will be part of the next releases. Cheers, Kostas [1] https://issues.apache.org/jira/browse/FLINK-16371 On Tue, Mar 24, 2020 at 11:10 AM Eyal Pe'er wrote: > > Hi all, > > I am trying to write a sink function that retrieves string

RE: Need help on timestamp type conversion for Table API on Pravega Connector

2020-03-24 Thread B.Zhou
Hi, Thanks for the information. I replied in the comment of this issue: https://issues.apache.org/jira/browse/FLINK-16693?focusedCommentId=17065486&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17065486 Best Regards, Brian -Original Message- From: Timo

Kerberos authentication for SQL CLI

2020-03-24 Thread Gyula Fóra
Hi! Does the SQL CLI support Kerberos Authentication? I am struggling to find any use of the SecurityContext in the SQL CLI logic but maybe I am looking in the wrong place. Thank you! Gyula

Object has non serializable fields

2020-03-24 Thread Eyal Pe'er
Hi all, I am trying to write a sink function that receives strings and creates compressed files in time buckets. The code is pretty straightforward and based on CompressWriterFactoryTest
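A minimal sketch of the two pieces such a sink needs, under assumed conventions (the `dt=.../hour=...` path layout and the class name are illustrative, not Eyal's actual code): derive an hourly bucket from the record timestamp, and gzip the payload destined for that bucket.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.zip.GZIPOutputStream;

// Sketch: time-bucketed, gzip-compressed output for a string sink.
public class BucketedGzipWriter {

    // Map an epoch-millis timestamp to an hourly bucket path (UTC).
    public static String bucketFor(long epochMillis) {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC);
        return String.format("dt=%04d-%02d-%02d/hour=%02d",
                t.getYear(), t.getMonthValue(), t.getDayOfMonth(), t.getHour());
    }

    // Gzip-compress one payload; a real sink would stream into the bucket file.
    public static byte[] gzip(String payload) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```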

Re: Issues with Watermark generation after join

2020-03-24 Thread Timo Walther
Hi Dominik, the big conceptual difference between the DataStream and Table APIs is that record timestamps are part of the schema in the Table API, whereas they are attached internally to each record in the DataStream API. When you call `y.rowtime` during a stream-to-table conversion, the runtime will extra

Re: Need help on timestamp type conversion for Table API on Pravega Connector

2020-03-24 Thread Timo Walther
This issue is tracked under: https://issues.apache.org/jira/browse/FLINK-16693 Could you provide a small reproducible example in the issue? I think that could help us resolve this issue quickly in the next minor release. Thanks, Timo On 20.03.20 03:28, b.z...@dell.com wrote: Hi,

Re: Dynamic Flink SQL

2020-03-24 Thread Arvid Heise
Hi Krzysztof, from my past experience as a data engineer, I can safely say that users often underestimate the optimization potential and techniques of the systems they use. I implemented a similar thing in the past, where I parsed up to 500 rules reading from up to 10 data sources. The basic idea was to
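The "many rules in one job" idea Arvid describes can be sketched as keeping the parsed rules in a list (in Flink this would typically live in broadcast state, updated from a rules stream) and evaluating all of them in a single pass over each record, instead of running one job per rule. Class names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: evaluate many dynamic rules against each record in one pass.
public class RuleEvaluator<T> {
    public static final class Rule<T> {
        final String name;
        final Predicate<T> condition;
        public Rule(String name, Predicate<T> condition) {
            this.name = name;
            this.condition = condition;
        }
    }

    private final List<Rule<T>> rules = new ArrayList<>();

    // In Flink this would be an update on a broadcast element.
    public void addRule(Rule<T> rule) {
        rules.add(rule);
    }

    // Return the names of all rules that fire for this record.
    public List<String> matches(T record) {
        List<String> fired = new ArrayList<>();
        for (Rule<T> r : rules) {
            if (r.condition.test(record)) {
                fired.add(r.name);
            }
        }
        return fired;
    }
}
```

With 500 rules this is one operator scanning a list per record, rather than 500 separate pipelines competing for cluster resources.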

Importance of calling mapState.clear() when no entries in mapState

2020-03-24 Thread Ilya Karpov
Hi, given: - flink 1.6.1 - stateful function with MapState mapState = //init logic; Is there any reason I should call mapState.clear() if I know beforehand that there are no entries in mapState (e.g. mapState.iterator().hasNext() returns false)? Thanks in advance!