Re: Incompatible data types while using firehose sink

2022-05-11 Thread yu'an huang
Hi, your code works fine on my computer. What Flink version are you using? > On 12 May 2022, at 3:39 AM, Zain Haider Nemati wrote: > > Hi Folks, > Getting this error when sinking data to a firehosesink, would really > appreciate some help ! > > DataStream inputStream = env.ad

How to test Flink SQL UDF with open method?

2022-05-11 Thread zhouhaifengmath
Hi, I am trying to test a Flink SQL UDF that has an open method which reads some parameters, with Flink 1.14, but I can't find an example of setting those parameters in a test. Can someone give me an example? Thanks for your help~ Thanks && Regards
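One possible approach, sketched here under the assumption of Flink 1.14's Table API (the function name, parameter key, and values are hypothetical): parameters added via `TableConfig.addJobParameter` end up in `pipeline.global-job-parameters`, which is exactly what `FunctionContext.getJobParameter` reads inside the UDF's `open` method, so an end-to-end test can exercise the UDF through a `TableEnvironment`.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.ScalarFunction;

public class UdfOpenParameterTest {

    // Hypothetical UDF: reads a parameter in open() and uses it in eval().
    public static class PrefixFunction extends ScalarFunction {
        private transient String prefix;

        @Override
        public void open(FunctionContext context) throws Exception {
            // Falls back to the default value when the key is not set.
            prefix = context.getJobParameter("udf.prefix", "default");
        }

        public String eval(String s) {
            return prefix + ":" + s;
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());
        // Parameters set here are what FunctionContext.getJobParameter sees.
        tEnv.getConfig().addJobParameter("udf.prefix", "test");
        tEnv.createTemporarySystemFunction("prefix_fn", PrefixFunction.class);
        tEnv.executeSql("SELECT prefix_fn('a')").print();
    }
}
```

This is a sketch, not a definitive recipe; it assumes the flink-table dependencies are on the test classpath.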

Re:Re: How can I set job parameter in flink sql

2022-05-11 Thread wang
Ok, got it. Thanks so much! Regards, Hunk -- 发自我的网易邮箱手机智能版 在 2022-05-11 16:46:14,yuxia 写道: Hi, AFAK, you can't get the parameter setted via Flink SQL client in udf. If you still want to get the parameters in your udf, you can use the following code to set the parameter: env = StreamE

Re: How to get flink to use POJO serializer when enum is present in POJO class

2022-05-11 Thread Tejas B
Hi Arvid, Thanks for replying. But I have all the getters and setters in the example. As you can see, the val2 field is commented and hence its getter and setter are commented out. When restoring from a savepoint, I uncomment these and get errors. If I remove reference to the enum RuleType from the
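For reference, Flink's POJO serializer requires a public no-arg constructor and public fields or getter/setter pairs for every non-transient field; enum fields are handled by a dedicated enum serializer as long as the enclosing class qualifies as a POJO. A minimal sketch with hypothetical names:

```java
public class Rule {

    // Enum fields are supported when the outer class qualifies as a POJO.
    public enum RuleType { ALERT, SUPPRESS }

    private String name;
    private RuleType ruleType;

    // Public no-arg constructor: required for POJO serialization.
    public Rule() {}

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public RuleType getRuleType() { return ruleType; }
    public void setRuleType(RuleType ruleType) { this.ruleType = ruleType; }

    public static void main(String[] args) {
        Rule r = new Rule();
        r.setName("r1");
        r.setRuleType(RuleType.ALERT);
        System.out.println(r.getName() + " " + r.getRuleType()); // r1 ALERT
    }
}
```

Note that schema evolution across savepoints has its own constraints; adding or removing fields between savepoint and restore can still fail even for a valid POJO.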

Configuring heap size for Flink client

2022-05-11 Thread Zhanghao Chen
Hi guys, I'm developing a submission platform for Flink jobs deployed in containers using Flink client for submission. I found that the Flink client by default uses the default JVM heap setting (with an initial heap size of 1/64 host memory, and a max heap size of 1/4 host memory). When the con
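One way to override the client JVM's defaults, assuming a standard Flink distribution, is the `env.java.opts.client` option in `flink-conf.yaml`, which applies JVM options only to the Flink CLI/client process (the `-Xms`/`-Xmx` values below are illustrative, not recommendations):

```yaml
# flink-conf.yaml: JVM options applied only to the Flink client process
env.java.opts.client: -Xms256m -Xmx512m
```

The `FLINK_ENV_JAVA_OPTS` environment variable can also inject JVM options, but it affects all Flink JVMs started by the scripts, not just the client.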

http stream as input data source

2022-05-11 Thread Harald Busch
Hi, is there an HTTP data stream as a data source? I only see socketTextStream and other predefined stream sources. It seems that I have to use fromCollection, fromElements ... and prepare the collection myself. Thanks Regards
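Flink does not ship a predefined HTTP source; a common approach is to wrap a polling loop in a `RichSourceFunction` (or the newer Source API). The polling logic itself can be kept framework-free and testable by injecting the fetch function; a sketch, with the endpoint and record type purely hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Framework-free polling loop. Inside a Flink RichSourceFunction's run()
// you would call ctx.collect(...) instead of a Consumer, loop on an
// isRunning flag instead of a fixed count, and back the Supplier with
// java.net.http.HttpClient.
public class HttpPoller {

    private final Supplier<String> fetcher; // stands in for a real HTTP call

    public HttpPoller(Supplier<String> fetcher) {
        this.fetcher = fetcher;
    }

    public void poll(int iterations, Consumer<String> out) {
        for (int i = 0; i < iterations; i++) {
            String body = fetcher.get();
            if (body != null) {
                out.accept(body);
            }
        }
    }

    public static void main(String[] args) {
        List<String> collected = new ArrayList<>();
        // Fake fetcher so the sketch runs without network access.
        HttpPoller poller = new HttpPoller(() -> "payload");
        poller.poll(3, collected::add);
        System.out.println(collected.size()); // 3
    }
}
```

Injecting the fetcher keeps the loop unit-testable without a live endpoint; the real source would also need to handle errors, back-off, and checkpointing semantics.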

AWS EMR YARN API shutdown of Flink task container doesn't work

2022-05-11 Thread WuKong
Hi all: I use AWS EMR on EC2 and run a Flink application on EMR. Sometimes I want to resize the task number to save on budget. When I request a resize, the node can't be dismissed because some Flink tasks are running on it, so I use the YARN API to kill the container running those tasks,

Re: [Discuss] Creating an Apache Flink slack workspace

2022-05-11 Thread Yun Tang
Hi all, I think a forum might be a good choice for searching and maintenance. However, unlike a Slack workspace, it seems no existing popular product could be leveraged easily. Thus, I am +1 to create an Apache Flink Slack channel. If the ASF Slack cannot be joined easily by most users, I prefer to s

Re: [Discuss] Creating an Apache Flink slack workspace

2022-05-11 Thread Jingsong Li
Hi all, Regarding using ASF slack. I share the problems I saw in the Apache Druid community. [1] > As you may have heard, it’s become increasingly difficult for new users without an @apache.org email address to join the ASF #druid Slack channel. ASF Infra disabled the option to publicly provide a

Re: Migrating Flink apps across cloud with state

2022-05-11 Thread Hemanga Borah
Thank you for the suggestions, guys! @Andrew Otto This is the way we will most likely go. However, this will require us to meddle with the Flink consumer codebase. And looks like there is no other way around it. We will add some custom code to perform offset resetting for specific savepoints. @Ko

Re: Flink on Native K8s jobs turn in to `SUSPENDED` status unexpectedly.

2022-05-11 Thread Yang Wang
The SUSPENDED state is usually caused by lost leadership. Maybe you could find more information about the leader in the JobManager and TaskManager logs. Best, Yang. Xiaolong Wang wrote on Wed, 11 May 2022 at 19:18: > Hello, > > Recently our Flink jobs on Native K8s encountered failing in the > `SUSPENDED` status

Source without persistent state

2022-05-11 Thread Alexey Trenikhun
Hello, I'm working on a custom Source, something like a heartbeat generator, using the new Source API. HeartSource is constructed with a list of Kafka topics; the SplitEnumerator queries the number of partitions for each topic and either creates a split per topic-partition or a single split for all topic-partition

Re: [Discuss] Creating an Apache Flink slack workspace

2022-05-11 Thread Xintong Song
> > To make some progress, maybe we decide on chat vs forum vs none and then > go into a deeper discussion on the implementation or is there anything > about Slack that would be complete blocker for the implementation? > Sure, then I'd be +1 for chat. From my side, the initiative is more about mak

Re: unsubscribe

2022-05-11 Thread yuxia
To unsubscribe, you can send an email to user-unsubscr...@flink.apache.org with any subject. Best regards, Yuxia. From: "Henry Cai" To: "User" Sent: Thursday, May 12, 2022, 1:14:43 AM Subject: unsubscribe unsubscribe

Checkpointing - Job Manager S3 Head Requests are 404s

2022-05-11 Thread Aeden Jameson
We're using S3 to store checkpoints. They are taken every minute. I'm seeing a large number of 404 responses from S3 being generated by the job manager. The order of the entries in the debugging log would imply that it's a result of a HEAD request to a key. For example all the incidents look like t

Incompatible data types while using firehose sink

2022-05-11 Thread Zain Haider Nemati
Hi Folks, Getting this error when sinking data to a firehose sink, would really appreciate some help! DataStream inputStream = env.addSource(new FlinkKafkaConsumer<>("xxx", new SimpleStringSchema(), properties)); Properties sinkProperties = new Properties(); sinkProperties.put(AWSCon

How to define TypeInformation for Flink recursive resolved POJO

2022-05-11 Thread Fuyao Li
Hi Community, I have a POJO with a nested, recursively resolved structure. How should I define the @TypeInfo annotation correctly to avoid a stack overflow exception when starting the application? Basically: class Metadata has a Map of fields; class FieldDefinition has a Metadata parentMetadata. The Metadata c

unsubscribe

2022-05-11 Thread Henry Cai
unsubscribe

Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Roman Grebennikov
As yet another attempt to make Flink work with Scala 2.13/3.x, we went further and cross-built a forked version of Flink's Scala API: https://github.com/findify/flink-scala-api Check the GitHub repo for details, but if you can afford re-bootstr

[QUESTION] In Flink K8s application mode with HA, cannot use History Server as the history backend

2022-05-11 Thread 谭家良
In Flink K8s application mode with high availability, the job ID is always 00, but the History Server uses the job ID as its key. How can I use application mode with HA and still store historical job status in the History Server? Best, tanjialiang.

Re: Resizing kube container sizes dynamically for custom jobs

2022-05-11 Thread Márton Balassi
Hi Morgan, Jobs running in a session cluster share the taskmanagers, so you are not able to configure them on a per job basis. I welcome you to check out the Flink Kubernetes Operator's session job example [1] that highlights this behavior: You specify container resources when you submit the sessi

Resizing kube container sizes dynamically for custom jobs

2022-05-11 Thread Geldenhuys, Morgan Karl
Greetings all, I have a question concerning resource allocation for Apache Flink. I have a Flink native session cluster running and I'm interested in rolling out multiple jobs. However, I would like to size the container resources (CPU and memory) differently for each job, is this possible? i.e

Flink on Native K8s jobs turn in to `SUSPENDED` status unexpectedly.

2022-05-11 Thread Xiaolong Wang
Hello, Recently our Flink jobs on Native K8s encountered failing in the `SUSPENDED` status and got restarted for no reason. Flink version: 1.13.2 Logs: ``` 2022-05-11 05:01:41 2022-05-10 21:01:41,771 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering checkpoint 17921

Re: [Discuss] Creating an Apache Flink slack workspace

2022-05-11 Thread Konstantin Knauf
I don't think we can maintain two additional channels. Some people already have concerns about covering one additional channel. I think a forum provides a better user experience than a mailing list: information is structured better, you can edit messages, and sign-up and search are easier. To make so

Re: How can I set job parameter in flink sql

2022-05-11 Thread yuxia
Hi, AFAIK, you can't get a parameter set via the Flink SQL client in a UDF. If you still want to get the parameters in your UDF, you can use the following code to set them: env = StreamExecutionEnvironment.getExecutionEnvironment parameter = new HashMap(); parameter .put(" black_list_
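The truncated snippet above appears to register global job parameters on the execution environment. A fuller sketch of that pattern, assuming the DataStream API (the parameter key, values, and job shape are hypothetical): parameters set as global job parameters can be read back inside a rich function's open() via the runtime context.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GlobalParamsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Register the parameters globally for the whole job.
        Map<String, String> parameter = new HashMap<>();
        parameter.put("black_list", "a,b,c"); // hypothetical key/value
        env.getConfig().setGlobalJobParameters(ParameterTool.fromMap(parameter));

        env.fromElements("x", "y")
           .map(new RichMapFunction<String, String>() {
               private transient String blackList;

               @Override
               public void open(Configuration cfg) {
                   // Read the global parameters back inside the operator.
                   ExecutionConfig.GlobalJobParameters params =
                           getRuntimeContext().getExecutionConfig()
                                              .getGlobalJobParameters();
                   blackList = params.toMap().get("black_list");
               }

               @Override
               public String map(String value) {
                   return value + "/" + blackList;
               }
           })
           .print();

        env.execute("global-params-example");
    }
}
```

In the Table/SQL world the equivalent read path inside a UDF is `FunctionContext.getJobParameter`; as noted above, parameters set through the SQL client itself may not be visible that way.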

Re: OOM errors cause by the new KafkaSink API

2022-05-11 Thread Arvid Heise
Hi Hua Wei, Thanks for the investigation. Could you provide a heap dump before the crash? The OOM stacktrace that you are showing is rather random (at RPC message exchange). We need to see where the heap is growing. Alternatively, you can take heap dumps at different points in time and compare yo

Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Salva Alcántara
Sorry, I forgot the link of the repo: https://github.com/salvalcantara/flink-scala Regards, Salva On Wed, May 11, 2022 at 9:32 AM Salva Alcántara wrote: > Thanks Martijn, my conclusion so far is that Java is a safe bet. > > In the meantime, a friend and I have spent some time trying to make >

Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Ran Tao
Hi, guys. I posted a JDK 11 & JDK 17 issue [FLINK-27549] recently, which involves the Scala upgrade [2] under discussion here. It shows that the current Flink project is not a complete or pure JDK 11 build (the same problem exists with higher versions), because

Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Salva Alcántara
Thanks Martijn, my conclusion so far is that Java is a safe bet. In the meantime, a friend and I have spent some time trying to make `flink-scala` work with Flink 1.15 and Scala 2.13. We partly followed the discussions in [1] (FLINK-13414) to fix all the compilation errors. Note that this is just

Query on using Apache Flink with Confluent Kafka

2022-05-11 Thread elakiya udhayanan
Hi Team, I have a requirement to read Kafka events through Apache Flink and process them. The Kafka topic that produces the events to Apache Flink is Confluent Kafka, hosted as a Kubernetes pod in a Docker container. The actual problem is I am unable to consu
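For context, secured Confluent clusters typically require SASL_SSL client settings in the consumer properties in addition to the bootstrap servers; a sketch of the usual keys (the endpoint and credentials below are placeholders, not real values):

```properties
bootstrap.servers=<broker-host>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
```

These are standard Apache Kafka client properties and can be passed straight into the `Properties` object given to Flink's Kafka consumer; if the topic carries Avro with Schema Registry, the deserialization schema additionally needs the registry URL.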

Re: [ANNOUNCE] Apache Flink Table Store 0.1.0 released

2022-05-11 Thread Becket Qin
Really excited to see the very first release of the flink-table-store! Kudos to everyone who helped with this effort! Cheers, Jiangjie (Becket) Qin On Wed, May 11, 2022 at 1:55 PM Jingsong Lee wrote: > The Apache Flink community is very happy to announce the release of Apache > Flink Table S

Re: Practical guidance with Scala and Flink >= 1.15

2022-05-11 Thread Martijn Visser
Hi Matthias, Given the current state of Scala support in the Flink community (there is a major lack in Scala maintainers), it is my personal opinion that we should consider deprecating the current Scala APIs and replace those with new Scala APIs, which are 'just' wrappers for the Java API. This de

Re: Getting JSON kafka source parsing error with DDL

2022-05-11 Thread Shubham Bansal
Never mind, figured it out. Wrong connector arguments. On Tue, May 10, 2022 at 11:19 PM Shubham Bansal < shubham.bansal2...@gmail.com> wrote: > Hi Everyone, > > I am trying to fix the flink-playground for version 1.14.4 and was working > on fixing pyflink-walkthrough and I am getting the following error >