Hi Nawaz,
>> My concern is, as Flink does not support dynamic windows, is this
>> approach going against Flink Architecture.
Per my understanding, the session window could be seen as a kind of dynamic
window. Besides, Flink also supports user-defined windows, with which users
should also be able to implement this.
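For illustration, here is a rough sketch of a session window whose gap is derived per element via EventTimeSessionWindows.withDynamicGap (a fully custom WindowAssigner would follow the same pattern). The Event class, its fields, and the reduce logic are made up for the example:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.assigners.SessionWindowTimeGapExtractor;

public class DynamicSessionSketch {

    // Assumed event type; field names are illustrative.
    public static class Event {
        public String userId;
        public long timestamp;
        public long gapMillis; // per-element session gap
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Event> events = env.fromElements(new Event()) // placeholder source
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Event>forMonotonousTimestamps()
                    .withTimestampAssigner((e, ts) -> e.timestamp));

        events
            .keyBy(e -> e.userId)
            // The session gap is taken from each element, so windows grow and shrink dynamically.
            .window(EventTimeSessionWindows.withDynamicGap(
                (SessionWindowTimeGapExtractor<Event>) e -> e.gapMillis))
            .reduce((a, b) -> b) // placeholder: keep the latest element per session
            .print();

        env.execute("dynamic session windows sketch");
    }
}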
Thanks, awesome! :-)
On Wed, May 17, 2023 at 2:24 PM Gyula Fóra wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.5.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle through native k8s tooling like kubectl.
Hi Derocco,
Good to hear that it is working. Let me create a Jira ticket and update the
document.
-Surendra
On Wed, May 17, 2023 at 9:29 PM DEROCCO, CHRISTOPHER wrote:
> Surendra,
>
>
>
> Your recommended config change fixed my issue. Azure Managed Service
> Identity works for me now and I can write checkpoints to ADLSGen2 storage.
Surendra,
Your recommended config change fixed my issue. Azure Managed Service Identity
works for me now and I can write checkpoints to ADLSGen2 storage. My client ID
is the managed identity that is attached to the Azure Kubernetes node pools. For
anyone else facing this issue, my configurations
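For reference, and not necessarily the exact settings used above, a minimal sketch of the ABFS managed-identity keys from the Hadoop Azure connector, written through Flink's Configuration API; in practice these usually live in flink-conf.yaml or the operator's flinkConfiguration, and the storage account, container, and client ID values are placeholders:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AbfsMsiCheckpointSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hadoop ABFS keys for OAuth via a managed identity (values are placeholders).
        conf.setString("fs.azure.account.auth.type", "OAuth");
        conf.setString("fs.azure.account.oauth.provider.type",
                "org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider");
        conf.setString("fs.azure.account.oauth2.client.id", "<managed-identity-client-id>");

        // Checkpoints go to ADLS Gen2 over abfss://.
        conf.setString("state.checkpoints.dir",
                "abfss://<container>@<account>.dfs.core.windows.net/checkpoints");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.enableCheckpointing(60_000);

        env.fromElements(1, 2, 3).print(); // placeholder pipeline
        env.execute("abfs msi checkpoint sketch");
    }
}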
Ivan,
How did you use Azure Key Vault with CSI, given that the Flink operator uses a
ConfigMap and not a Kubernetes Secret to create the flink-conf file? I have
also tried using pod identities as well as the new workload identity
(https://learn.microsoft.com/en-us/azure/aks/workload-identity-overvi
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.5.0.
The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
Release highlights:
- Autoscaler improvements
Hi All!
We are encountering an error on a larger stateful job (around 1 TB+ of state)
on restore from a RocksDB checkpoint. The taskmanagers keep crashing with a
segfault coming from the RocksDB native logic, and it seems to be related to the
FlinkCompactionFilter mechanism.
The gist with the full error
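For context, the FlinkCompactionFilter is the native filter RocksDB runs when state TTL cleanup in compaction is enabled. A minimal sketch of the TTL configuration that activates that code path (the descriptor name, retention, and query interval are illustrative, not taken from the job above):

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlCompactionFilterSketch {
    public static void main(String[] args) {
        // TTL cleanup through the RocksDB compaction filter: expired entries are dropped
        // while RocksDB compacts SST files (the FlinkCompactionFilter native code path).
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(7))              // illustrative retention
                .cleanupInRocksdbCompactFilter(1000L)  // re-query current time every 1000 entries
                .build();

        ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("last-seen", Long.class);
        descriptor.enableTimeToLive(ttlConfig);
        // The descriptor would then be obtained from a RichFunction's open() via getRuntimeContext().
    }
}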
Hello,
Looks like there is a bug with Flink 1.16's IF operator. If I use the UPPER or
TRIM functions (there might be more such functions), I get an exception. These
functions used to work fine with Flink 1.13.
select
  if(
    address_id = 'a',
    'default',
    upper(address_id)
  ) as address
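One self-contained way to reproduce this kind of query is through the Table API; this is only a sketch, and the inline VALUES data and view name are made up:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IfUpperReproSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Inline test data standing in for the real table.
        tEnv.executeSql(
                "CREATE TEMPORARY VIEW addresses AS "
                        + "SELECT * FROM (VALUES ('a'), ('b')) AS v(address_id)");

        // IF combined with UPPER is the shape of query reported to fail on 1.16.
        tEnv.executeSql(
                "SELECT IF(address_id = 'a', 'default', UPPER(address_id)) AS address "
                        + "FROM addresses")
                .print();
    }
}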
We write data from Flink to Redis; how can we ensure data accuracy? For example, I want to keep several months of data per user. If I use state, I would first save to state and then set the value in Redis, but if the user data is too large I would need a lot of memory. Is there a better way?
Thanks in advance
Kobe24
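For what it's worth, here is a minimal sketch of the approach described in the question: aggregate per user in Flink keyed state and push only the aggregate to Redis. The UserEvent class, field names, Redis key scheme, and the Jedis client are assumptions, not a recommendation; with the RocksDB state backend the keyed state lives on disk, so the per-user history does not have to fit in memory:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import redis.clients.jedis.Jedis;

public class UserTotalToRedis extends KeyedProcessFunction<String, UserTotalToRedis.UserEvent, Void> {

    // Assumed input type; field names are illustrative.
    public static class UserEvent {
        public String userId;
        public long amount;
    }

    private transient ValueState<Long> total; // running aggregate per user, kept in Flink state
    private transient Jedis jedis;            // Redis connection, one per parallel task

    @Override
    public void open(Configuration parameters) {
        total = getRuntimeContext().getState(
                new ValueStateDescriptor<>("user-total", Long.class));
        jedis = new Jedis("redis-host", 6379); // placeholder endpoint
    }

    @Override
    public void processElement(UserEvent event, Context ctx, Collector<Void> out) throws Exception {
        long newTotal = (total.value() == null ? 0L : total.value()) + event.amount;
        total.update(newTotal);
        // Only the aggregate is written out, so Redis never holds the raw history.
        jedis.set("user:" + ctx.getCurrentKey(), Long.toString(newTotal));
    }

    @Override
    public void close() {
        if (jedis != null) {
            jedis.close();
        }
    }
}

Because each write simply overwrites the per-user key with the latest aggregate, replays after a failure are idempotent, which is one common way to keep the Redis values accurate without an exactly-once Redis sink.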