We've noticed the following difference in SQL handling when upgrading from
Flink 1.14.5 to 1.15.2, around characters that are escaped in a SQL statement:
This statement:
tableEnvironment.executeSql("select * from testTable WHERE lower(field1) LIKE 'b\"cd\"e%'");
produces a runtime error in Flink 1.15.2.
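For reference, the \" sequences above are resolved by the Java compiler before the SQL ever reaches Flink. A minimal, self-contained sketch (table and field names as in the report, shown here as a plain string) of the literal the parser actually receives:

```java
public class EscapeDemo {
    // The \" escapes are consumed by javac, so the SQL text handed to
    // executeSql() contains plain double quotes and no backslashes.
    static final String SQL =
            "select * from testTable WHERE lower(field1) LIKE 'b\"cd\"e%'";

    public static void main(String[] args) {
        System.out.println(SQL);
        // Prints: select * from testTable WHERE lower(field1) LIKE 'b"cd"e%'
    }
}
```

The snippet only shows the statement Flink receives; whether 1.15.2's parser rejects the embedded double quotes inside the single-quoted literal is the behavior change the report describes.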
Sent: Wednesday, July 27, 2022 11:16 AM
To: PACE, JAMES
Cc: user@flink.apache.org
Subject: Re: Flink Operator Resources Requests and Limits
Hi James,
Have you considered using pod templates already?
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource
We are currently evaluating the Apache Flink operator (version 1.1.0) to
replace the operator that we currently use. Setting the memory and CPU
resources sets both the request and the limit for the pod. Previously, we were
only setting the request, allowing pods to oversubscribe CPU when needed.
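If pod templates fit your setup, a minimal sketch of setting only a CPU request (no limit) on the main container might look like the following. The container name and field layout follow the operator docs linked above; the resource values are placeholders:

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
spec:
  podTemplate:
    spec:
      containers:
        # the operator's main container is named flink-main-container
        - name: flink-main-container
          resources:
            requests:
              cpu: "1"      # request only, no cpu limit
              memory: 2Gi
```

It is worth verifying against the linked docs how this template merges with the resources set via the operator's own jobManager/taskManager resource fields, since the operator may still apply its values on top.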
We are in the process of upgrading from Flink 1.9.3 to 1.13.3. We have noticed
that statements with either UPPER(field) or LOWER(field) in the WHERE clause,
in combination with an IN, do not always evaluate correctly.
The following test case highlights this problem.
import org.apache.flink.streaming.api.data
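The test case above is cut off; as a plain-Java sketch of the semantics such a statement should have (hypothetical values — this stands in for evaluating WHERE UPPER(field) IN ('AAA', 'BBB'), it is not the Flink code path):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class UpperInDemo {
    // Reference semantics for: WHERE UPPER(field) IN ('AAA', 'BBB')
    public static List<String> filter(List<String> rows) {
        Set<String> wanted = Set.of("AAA", "BBB");
        return rows.stream()
                   .filter(r -> wanted.contains(r.toUpperCase()))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Any case variant of "aaa"/"bbb" should match; "ccc" should not.
        System.out.println(filter(List.of("aaa", "bBb", "ccc")));
    }
}
```

The reported bug is that Flink's generated plan does not always match these reference semantics when UPPER/LOWER is combined with IN.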
I have the following SSL configuration for a 3 node HA flink cluster:
#taskmanager.data.ssl.enabled: false
security.ssl.enabled: true
security.ssl.keystore: /opt/app/certificates/server-keystore.jks
security.ssl.keystore-password:
security.ssl.key-password:
security.ssl.truststore: /opt/app/cert
Subject: Flink - Nifi Connectors - Class not found
Hi,
the problem is that Flink's YARN code is not available in the Hadoop 1.2.1
build.
How do you try to execute the Flink job to trigger this error message?
On Fri, Nov 11, 2016 at 12:23 PM, PACE, JAMES <jp4...@att.com> wrote:
I am running Apache Flink 1.1.3 - Hadoop version 1.2.1 with the NiFi connector.
When I run a program with a single NiFi Source, I receive the following Stack
trace in the logs:
2016-11-11 19:28:25,661 WARN org.apache.flink.client.CliFrontend
- Unable to locate custom CLI class