Re: Flink sql nested elements

2020-06-04 Thread Leonard Xu
Hi Ramana, for nested data types, Flink uses dot notation (e.g. a.b.c) to access nested elements. Your SQL syntax looks right; which Flink version are you using? And could you post your Avro schema file and DDL? Best, Leonard Xu > On 5 Jun 2020, at 03:34, Ramana Uppala wrote: > > We have Avro schema that contains n
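As a sketch of the dot syntax Leonard describes (the table and field names below are made-up for illustration, not taken from the thread):

```sql
-- Hypothetical table backed by an Avro record shaped like
-- {"user": {"address": {"city": "..."}}}
SELECT t.`user`.address.city
FROM avro_source_table AS t
WHERE t.`user`.address.city = 'Berlin';
```

Note the backticks around `user`, which is a reserved word in Flink SQL.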

Re: SQL Expression to Flink FilterFunction?

2020-06-04 Thread Jark Wu
This is possible but may need some development. There is a similar util in table tests called `org.apache.flink.table.expressions.utils.ExpressionTestBase` [1], it converts/translates expressions (either Table API Expression or SQL expression) into a MapFunction. I think you can imitate the way of

Re: Multiple Sinks for a Single Soure

2020-06-04 Thread Piotr Nowojski
Hi Prasanna, That’s good to hear and thanks for confirming that it works :) Piotrek > On 3 Jun 2020, at 16:09, Prasanna kumar wrote: > > Piotr and Alexander , > > I have fixed the programmatic error in filter method and it is working now. > > Thanks for the detailed help from both of you. A

Re: Flink Dashboard UI Tasks hard limit

2020-06-04 Thread Xintong Song
Hi Vijay, From the information you provided (the configurations, error message & screenshot), I'm not able to find out what the problem is or how to resolve it. The error message comes from a healthy task manager, which discovered that another task manager is not responding. We would need to look

Re: SQL Expression to Flink FilterFunction?

2020-06-04 Thread Leonard Xu
Hi Theo, currently it's hard to do this in your DataStream application, from my understanding, because converting a SQL expression into a Flink operator happens in the underlying table planner (more precisely in the code generation phase), and it does not expose an interface to users, so you cannot assign oper

Re: Native K8S not creating TMs

2020-06-04 Thread Yang Wang
If you have created the role binding "flink-role-binding-default" successfully, then it should not be an RBAC issue. It seems that the kubernetes-client in the JobManager pod could not contact the K8s apiserver due to an okhttp issue with Java 8u252. Could you add the following config option to disable http2
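The workaround usually suggested for the 8u252/okhttp HTTP/2 problem is to disable HTTP/2 in the client via an environment variable. A sketch of the flink-conf.yaml entries (treat the exact keys as an assumption and check them against your Flink version):

```yaml
# flink-conf.yaml: pass HTTP2_DISABLE=true into the JobManager and
# TaskManager containers so the kubernetes-client falls back to HTTP/1.1
containerized.master.env.HTTP2_DISABLE: "true"
containerized.taskmanager.env.HTTP2_DISABLE: "true"
```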

Re: FlinkKinesisProducer sends data to only 1 shard. Is it because I don't have "AggregationEnabled" set to false?

2020-06-04 Thread Vijay Balakrishnan
Hi, Looks like I am sending a Map to Kinesis and it is being sent to 1 partition only. *How can I make this distribute across multiple partitions/shards on the Kinesis Data stream with this Map* data? *Sending to Kinesis*: DataStream> influxToMapKinesisStream = enrichedMGStream.map(influxDBPoint
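A constant partition key sends every record to the same shard, while a key that varies per record spreads the load. The Flink-specific fix is to give FlinkKinesisProducer a per-record partition key (e.g. via a custom partitioner), but the underlying idea can be shown in plain Java. This is a conceptual sketch, not the Kinesis API; the shard count and key names are assumptions:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PartitionKeyDemo {
    // Map a per-record key to one of `shards` buckets, analogous to how
    // Kinesis hashes a partition key to a shard.
    static int shardFor(String partitionKey, int shards) {
        return Math.floorMod(partitionKey.hashCode(), shards);
    }

    public static void main(String[] args) {
        int shards = 4;
        List<String> recordKeys =
                List.of("sensor-1", "sensor-2", "sensor-3", "sensor-4", "sensor-5");

        // Constant key: every record lands on the same shard.
        Set<Integer> constant = new HashSet<>();
        for (String ignored : recordKeys) {
            constant.add(shardFor("fixed-key", shards));
        }

        // Per-record key: records spread over several shards.
        Set<Integer> varying = new HashSet<>();
        for (String key : recordKeys) {
            varying.add(shardFor(key, shards));
        }

        System.out.println("constant-key shards used: " + constant.size());
        System.out.println("varying-key shards used: " + varying.size());
    }
}
```

The same reasoning applies whether the skew comes from your own key choice or from the producer defaulting to a single key for the whole stream.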

FlinkKinesisProducer sends data to only 1 shard. Is it because I don't have "AggregationEnabled" set to false?

2020-06-04 Thread Vijay Balakrishnan
Hi, My FlinkKinesisProducer sends data to only 1 shard. Is it because I don't have "AggregationEnabled" set to false? flink_connector_kinesis_2.11 : flink version 1.9.1 //Setup Kinesis Producer Properties kinesisProducerConfig = new Properties(); kinesisProducerConfig.setProperty

Flink sql nested elements

2020-06-04 Thread Ramana Uppala
We have an Avro schema that contains a nested structure, and when querying it using Flink SQL we get the error below. Exception in thread "main" java.lang.AssertionError at org.apache.calcite.sql.parser.SqlParserPos.sum_(SqlParserPos.java:236) at org.apache.calcite.sql.parser.SqlPars

Stopping a job

2020-06-04 Thread M Singh
Hi: I am running a job which consumes data from Kinesis and sends data to another Kinesis queue. I am using an older version of Flink (1.6), and when I try to stop the job I get an exception: Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.rest.util.RestClientExc

Creating TableSchema from the Avro Schema

2020-06-04 Thread Ramana Uppala
Hi, In Flink 1.9 we have the option to create the TableSchema from TypeInformation. We have used the below: TypeInformation typeInfo = AvroSchemaConverter.convertToTypeInfo(schema); TableSchema tableSchema = TableSchema.fromTypeInfo(typeInfo); However, TableSchema's fromTypeInfo method is deprecated

SQL Expression to Flink FilterFunction?

2020-06-04 Thread Theo Diefenthal
Hi there, I have a full DataStream pipeline (i.e. no upper-level APIs like Table or SQL are involved). In the middle of the pipeline, I now need to filter on a SQL WHERE expression, e.g. "user = 'pitter' AND age > 10", which can include REGEXP and arbitrarily nested AND/OR constructs. I was wond
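Whatever mechanism ends up translating the SQL expression, the target is behaviourally equivalent to a plain predicate over the event type. A minimal hand-written Java equivalent of the quoted expression, with a made-up Event class standing in for the pipeline's records:

```java
import java.util.List;
import java.util.function.Predicate;

public class SqlWhereAsPredicate {
    // Made-up event type standing in for the pipeline's records.
    record Event(String user, int age) {}

    // Hand-written equivalent of: user = 'pitter' AND age > 10
    static final Predicate<Event> WHERE =
            ((Predicate<Event>) e -> "pitter".equals(e.user()))
                    .and(e -> e.age() > 10);

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event("pitter", 12),
                new Event("pitter", 9),
                new Event("anna", 30));
        // Only the first event passes the WHERE clause.
        events.stream()
                .filter(WHERE)
                .forEach(e -> System.out.println(e.user() + ":" + e.age()));
    }
}
```

A generated FilterFunction would wrap the same boolean logic; the hard part the thread discusses is producing it automatically from an arbitrary SQL string rather than writing it by hand.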

Avro Array type validation error

2020-06-04 Thread Ramana Uppala
Hi, Our Avro schema contains an Array type; we created a TableSchema out of the AvroSchema and created a table in the catalog. In the catalog, this specific field type is shown as ARRAY. We are using AvroRowDeserializationSchema with the connector, and the returnType of the TableSource shows Array mapped to LEGACY

Re: Native K8S not creating TMs

2020-06-04 Thread kb
Thanks! I do not see any pods of the form `flink-taskmanager-1-1`, so I tried the exec suggestion. The logs are attached below. Is there a quick RBAC check I could perform? I followed the command on the docs page linked (kubectl create clusterrolebinding flink-role-binding-default --clusterrole=ed
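One quick way to check the RBAC side is to ask the apiserver directly whether the service account the pods run under may manage pods. The namespace and account name below assume the defaults from the docs command quoted above:

```shell
# Can the default service account in the default namespace create/list pods?
kubectl auth can-i create pods --as=system:serviceaccount:default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default
```

Both commands print `yes` or `no` against the live cluster.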

Re: Window Function use case;

2020-06-04 Thread Chesnay Schepler
If your input data already contains both the SensorID and FactoryID, why would the following not be sufficient? DataStream sensorEvents = ...; sensorEvents .filter(sensorEvent -> sensorEvent.Status.equals("alerte")) .map(sensorEvent -> sensorEvent.FactoryID) .addSink() If the problem is that
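The filter-then-map shape Chesnay sketches can be illustrated on a plain Java stream. The SensorEvent record below is a made-up stand-in for the actual event type, and the status literal follows the thread's spelling:

```java
import java.util.List;
import java.util.stream.Collectors;

public class AlertFilterDemo {
    // Made-up POJO standing in for the sensor event type.
    record SensorEvent(String sensorId, String factoryId, String status) {}

    // Same filter-then-map shape as the DataStream snippet,
    // expressed on a plain Java stream for illustration.
    static List<String> factoriesInAlert(List<SensorEvent> events) {
        return events.stream()
                .filter(e -> e.status().equals("Alerte"))
                .map(SensorEvent::factoryId)
                .distinct()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<SensorEvent> events = List.of(
                new SensorEvent("1", "F1", "Normal"),
                new SensorEvent("2", "F1", "Alerte"),
                new SensorEvent("3", "F2", "Normal"));
        System.out.println(factoriesInAlert(events));
    }
}
```

In the Flink version, the `.map(...)` result would feed an alerting sink instead of being collected.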

Window Function use case;

2020-06-04 Thread Aissa Elaffani
Hello guys, I have a use case where I am receiving data from sensors about their status (Normal or Alerte), e.g. {SensorID:"1", FactoryID:"1", Status:"Normal" ..}. A factory can contain a lot of sensors, so what I want to do is: if the status of one sensor in a factory is Alerte, I want to raise an ale

Re: Suggestions for using both broadcast sync and conditional async-io

2020-06-04 Thread orionemail
Thanks for the response. I had not seen the State Processor API; somehow I missed that. Regarding your second point, this is basically an ID mapping service, so I need the IDs persisted in DynamoDB (or indeed any other external store) so that other applications may also use the 'mapped' ID

Re: Getting Window information from coGroup function

2020-06-04 Thread Jaswin Shah
However, you can evaluate your time window and get the window an event is part of from the context of a ProcessFunction, or any RichFunction you are passing the events to. So, on each event's arrival you will be able to check which window the element is part of, and from the Window obj

Re: Getting Window information from coGroup function

2020-06-04 Thread Jaswin Shah
I think the apply function here would receive only the events, but not necessarily a complete window at the same time. From: Dawid Wysakowicz Sent: Thursday, June 04, 2020 13:39 To: Sudan S; user@flink.apache.org Cc: Aljoscha Krettek Subject: Re: Getting Window informatio

Re: Getting Window information from coGroup function

2020-06-04 Thread Dawid Wysakowicz
I am afraid there is no way to do that. At least I could not think of a way to do it. Maybe @aljoscha cc'ed could help here. On 29/05/2020 13:25, Sudan S wrote: > Hi, > > I have a usecase where i want to join two streams. I am using coGroup > for this > > KeyBuilder leftKey = new > KeyBuilder(job