Hello Guowei,
The pods terminate within about a second, so I am unable to pull any logs. Is
there any way I can pull the logs?
Thanks,
Hemant
On Tue, Sep 14, 2021 at 12:22 PM Guowei Ma wrote:
> Hi,
>
> Could you share some logs when the job fails?
>
> Best,
> Guowei
>
>
> On Mon, Sep 13, 2021 at 10:59 PM bat man wrote:
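A note that may help with short-lived pods (the pod names here are placeholders): if the container restarted in place, kubectl logs <pod-name> --previous returns the previous container's output, and kubectl logs -f <pod-name>, started while the pod is coming up, streams whatever is written before it dies. kubectl get events --sort-by=.lastTimestamp often shows why the pod terminated.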
Hi,
Could you share some logs when the job fails?
Best,
Guowei
On Mon, Sep 13, 2021 at 10:59 PM bat man wrote:
> Hi,
>
> I am running a POC to evaluate Flink on Native Kubernetes. I tried
> changing the default log location by using the configuration -
> kubernetes.flink.log.dir
> However, the job in application mode fails after bringing up the task manager.
Hi!
This is because Java limits a single method to 64 KB of bytecode. For Flink <=
1.13, please set table.generated-code.max-length to less than 65536 (~8192
is preferred) to limit the length of each generated method.
If this doesn't help, we've (hopefully) completely fixed this issue in
Flink 1.14 by cr
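As an illustration only (assuming a TableEnvironment named tableEnv; a sketch, not taken from this thread), the option can be set programmatically:

    // Lower the per-method length limit so the planner splits generated
    // code into smaller methods instead of one oversized one.
    tableEnv.getConfig().getConfiguration()
            .setInteger("table.generated-code.max-length", 8192);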
I wrote a long SQL query, but when I explain my plan it fails with this error:
org.apache.flink.util.FlinkRuntimeException:
org.apache.flink.api.common.InvalidProgramException: Table program cannot be
compiled. This is a bug. Please file an issue.
at
org.apache.flink.table.runtime.generated.CompileUtils.com
Hi!
This seems to be caused by some mismatched types between your source definition
and your workflow. If possible, could you describe the schema of your Kafka
source and paste your DataStream / Table / SQL code here?
On Tue, Sep 14, 2021 at 3:49 AM Dhiru wrote:
> I am not sure why, when we try to receive data from Apache Kafka, I get this error
I am not sure why, when we try to receive data from Apache Kafka, I get this error,
but it works fine for me when I try to run via Confluent Kafka:
java.lang.ClassCastException: class java.lang.String cannot be cast to class
scala.Product (java.lang.String is in module java.base of loader 'bootstrap';
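One way this class of error is usually avoided, sketched below with hypothetical names (MyEvent and MyEvent.fromJson are placeholders, not from the original mail): deserialize the Kafka payload into the concrete event type inside a DeserializationSchema instead of emitting String and casting downstream.

    import java.nio.charset.StandardCharsets;
    import org.apache.flink.api.common.serialization.DeserializationSchema;
    import org.apache.flink.api.common.typeinfo.TypeInformation;

    public class MyEventSchema implements DeserializationSchema<MyEvent> {
        @Override
        public MyEvent deserialize(byte[] message) {
            // Parse the raw bytes (e.g. JSON) into the target type here,
            // so no String-to-event cast is ever needed downstream.
            return MyEvent.fromJson(new String(message, StandardCharsets.UTF_8));
        }

        @Override
        public boolean isEndOfStream(MyEvent nextElement) {
            return false;
        }

        @Override
        public TypeInformation<MyEvent> getProducedType() {
            return TypeInformation.of(MyEvent.class);
        }
    }

The schema is then passed to the consumer, e.g. new FlinkKafkaConsumer<>(topic, new MyEventSchema(), properties).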
Hi,
I am running a POC to evaluate Flink on Native Kubernetes. I tried changing
the default log location by using the configuration -
kubernetes.flink.log.dir
However, the job in application mode fails after bringing up the task
manager. This is the command I use -
./bin/flink run-application --t
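For comparison, the application-mode command from the Flink docs generally has the shape ./bin/flink run-application --target kubernetes-application -Dkubernetes.cluster-id=<cluster-id> -Dkubernetes.container.image=<image> local:///path/to/job.jar, and kubernetes.flink.log.dir can be passed the same way as -Dkubernetes.flink.log.dir=<dir> (all values here are placeholders).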
Hi,
Thank you for the quick reply. In my case I am using the DataStream API. Each job
is a real-time processing engine which consumes data from Kafka and performs some
processing on top of it before ingesting into a sink.
The JVM Metaspace size was earlier set to around 256 MB (the default), which I
had to increase
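For reference, the Metaspace limits are the flink-conf.yaml options taskmanager.memory.jvm-metaspace.size and jobmanager.memory.jvm-metaspace.size (e.g. taskmanager.memory.jvm-metaspace.size: 512m); the right value depends on how many jobs, and therefore user-code classloaders, are alive at once.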
Thanks for your replies Alexis and Guowei.
We're using Flink 1.13.1, and the DataStream API.
I'll try the savepoint, and take a look at that IO article, thank you.
Please let me know if anything else comes to mind!
On Mon, Sep 13, 2021 at 3:05 AM Alexis Sarda-Espinosa <
alexis.
Hi team
We have a job that uses value state with RocksDB and TTL set to 1 day. The
TTL update type is OnCreateAndWrite. We set the value state when it doesn't
exist, and we never update it again once the state is non-empty. The key of
the value state is a timestamp. My understanding of
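For context, a TTL configuration matching that description would look roughly like the following sketch (the state name and value type are placeholders):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    // One-day TTL that is (re)started only when the state is created or
    // written, matching the OnCreateAndWrite update type described above.
    StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(1))
            .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
            .build();

    ValueStateDescriptor<Long> descriptor =
            new ValueStateDescriptor<>("my-value-state", Long.class);
    descriptor.enableTimeToLive(ttlConfig);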
Hi!
Which API are you using, the DataStream API or the Table / SQL API? If it
is the Table / SQL API, then Java classes for some operators (for example
aggregations, projections, filters, etc.) will be generated when compiling
user code into executable Java code. These Java classes are new to
the
Hi,
After going through multiple resources, I got the basic idea that JVM Metaspace is
used by the Flink class loader to load class metadata, which is in turn used to
create objects on the heap. Also, this is a one-time activity, since all the
objects of a single class require a single class metadata object in JVM Metaspace
Hi!
I am struggling to find the answer to the question of whether it is
possible to connect Flink to a ZooKeeper cluster secured with a TLS certificate.
All the Best,
Beata Sz.
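In case it helps frame the question: ZooKeeper's own client-side TLS is normally switched on through JVM system properties such as -Dzookeeper.client.secure=true and -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty, together with the zookeeper.ssl.keyStore.location / zookeeper.ssl.trustStore.location properties; in Flink these could presumably be injected via env.java.opts in flink-conf.yaml, though whether that works end to end with Flink's ZooKeeper HA services is exactly the open question here.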
Hello Seth,
Thank you very much for your reply. I've taken a look at MATCH_RECOGNIZE
but I have the following doubt: can I implement a state machine that detects
patterns with multiple end states?
To give you a concrete example:
I'm trying to count the number of *Jobs* that have been *cancelled* a
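One pattern that may cover "multiple end states", sketched against a hypothetical job_events(job_id, status, ts) table where ts is an event-time attribute: fold the terminal statuses into a single pattern variable whose DEFINE clause accepts several values.

    // A sketch only; the table and column names are hypothetical.
    tableEnv.executeSql(
        "SELECT * FROM job_events " +
        "MATCH_RECOGNIZE ( " +
        "  PARTITION BY job_id " +
        "  ORDER BY ts " +
        "  MEASURES LAST(T.status) AS final_status " +
        "  ONE ROW PER MATCH " +
        "  PATTERN (R+ T) " +
        "  DEFINE " +
        "    R AS R.status = 'RUNNING', " +
        // T is the single "end" variable, but it accepts two terminal states.
        "    T AS T.status = 'CANCELLED' OR T.status = 'FAILED' " +
        ") AS M");

Counting cancelled jobs then becomes a matter of filtering or grouping on final_status.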
Hello,
here's part of the full GC log:
OpenJDK 64-Bit Server VM (25.232-b09) for linux-amd64 JRE (1.8.0_232-b09),
built on Oct 18 2019 15:04:46 by "jenkins" with gcc 4.8.2 20140120 (Red Hat
4.8.2-15)
Memory: 4k page, physical 976560k(946672k free), swap 0k(0k free)
CommandLine flags: -XX:Compresse
I'm not very knowledgeable when it comes to Linux memory management, but do
note that Linux (and by extension Kubernetes) takes disk IO into account when
deciding whether a process is using more memory than it's allowed to, see e.g.
https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-u