I'm taking a look at:
- https://issues.apache.org/jira/browse/FLINK-28171
which might be related, although it concerns Istio rather than Linkerd.
More generally, it would be nice to have some guidance on how to
set up/deploy Flink jobs on k8s in the presence of a service mesh.
Regards,
Salva
On 2
All - I am trying to upgrade my pyflink installation from 1.15.2 to 1.16.0.
Could someone tell me if that's possible via the
'setup-pyflink-virtual-env.sh 1.16.0' command? I don't want to overwrite
any of my configuration files, so what's the clean way of upgrading to
1.16.0?
Thanks for your help
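For reference, a minimal sketch of one way to do this, assuming a standard virtualenv workflow (the environment name below is made up): installing 1.16.0 into a fresh virtual environment leaves the existing 1.15.2 installation and any configuration files untouched.

```sh
# Sketch only: create a separate virtual environment for PyFlink 1.16.0 so the
# existing 1.15.2 setup is not modified (environment name is illustrative).
python -m venv pyflink-1.16.0-env
source pyflink-1.16.0-env/bin/activate
pip install apache-flink==1.16.0

# Sanity-check which PyFlink the environment resolves before running jobs.
python -c "import pyflink; print(pyflink.__file__)"
pip show apache-flink | grep -i '^version'
```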
Hi Martijn,
Thanks for the link; it worked. However, I don't think I would have gotten that
from the documentation alone. Basically, I downloaded the Hadoop binaries
from the official website, unpacked them, ran `hadoop classpath` from within the
extracted bin directory, and added the output as the HADOOP_CLASSPATH environment variable.
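A short sketch of that pattern, as described in the Flink Hadoop-dependencies docs linked below (the install path here is illustrative):

```sh
# Point HADOOP_CLASSPATH at the output of `hadoop classpath` so Flink can pick
# up the Hadoop dependencies; the Hadoop install location is just an example.
export HADOOP_HOME=/opt/hadoop-3.3.4
export HADOOP_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)

# Flink's launch scripts read HADOOP_CLASSPATH from the environment.
./bin/start-cluster.sh
```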
Hi Gordon,
I seem to remember you talking about these helper functions, to poll and write
to Kinesis, as part of your StateFun shopping cart demo.
But I didn’t see them anywhere… was I imagining things?
Thanks,
— Ken
--
Ken Krugler
http://www.scaleunlimited.com
Custom b
I had been deploying a job without issues until I injected the Linkerd proxy
in order to access some services behind a service mesh. In particular, I'm now
receiving the following error on the job manager side:
```
java.lang.ClassNotFoundException: a.class.of.Mine
```
On the task manager side, it cannot
Hi Raihan,
This is documented at
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/configuration/advanced/#hadoop-dependencies
Best regards,
Martijn
On Mon, Jan 9, 2023 at 10:40 AM Raihan Sunny wrote:
> Hi,
>
> I'm trying to integrate HDFS in Flink for checkpoint and savepoint
> s
Hi,
I'm trying to integrate HDFS in Flink for checkpoint and savepoint storage.
I have an HDFS cluster running on Docker and have made the following changes
to the Flink configuration:
```
state.backend: filesystem
state.savepoints.dir: hdfs://namenode:9000/user/root/savepoints
state.checkpoints.dir:
```
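Not from the original message, but a quick sketch for verifying that the HDFS paths are reachable before pointing Flink at them; the checkpoint directory name is an assumption (mirroring the savepoint path above), and the commands need to run from a host or container with the Hadoop client available:

```sh
# Assumed layout mirroring the savepoint directory above; adjust as needed.
hdfs dfs -mkdir -p hdfs://namenode:9000/user/root/checkpoints
hdfs dfs -mkdir -p hdfs://namenode:9000/user/root/savepoints

# Verify the directories exist and are listable from the Flink side.
hdfs dfs -ls hdfs://namenode:9000/user/root/
```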
Hi Puneet,
I see that you're running Flink 1.12; that version is no longer supported
by the Flink community. Please check again with the latest Flink version.
Best regards,
Martijn
On Tue, Jan 3, 2023 at 3:44 PM Puneet Duggal
wrote:
> Hi,
>
> During a recent load test of the CEP operator, even thou
Hi Prasanna,
There is no support for compression in operator state. This can be tracked
under https://issues.apache.org/jira/browse/FLINK-30113
Best regards,
Martijn
On Fri, Jan 6, 2023 at 7:53 AM Prasanna kumar
wrote:
> Hello Flink Community,
>
>
>
> We are running jobs in Flink version 1.1