Hi Community,
Are there any approaches to add a WITH clause to a Flink table definition
dynamically?
Thanks.
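Two common approaches (not from this thread, just a sketch): assemble the DDL string at runtime and pass it to `tableEnv.executeSql(...)`, or use Flink SQL's dynamic table options hint (`SELECT ... FROM t /*+ OPTIONS('k'='v') */`, which in older Flink versions requires `table.dynamic-table-options.enabled`). A minimal sketch of the first approach, with hypothetical table name and options:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class DynamicDdl {
    /** Builds a CREATE TABLE statement whose WITH clause is assembled at runtime,
     *  e.g. from options loaded out of a config file or command-line args. */
    public static String createTableDdl(String name, String schema, Map<String, String> options) {
        String with = options.entrySet().stream()
                .map(e -> String.format("'%s' = '%s'", e.getKey(), e.getValue()))
                .collect(Collectors.joining(",\n  "));
        return String.format("CREATE TABLE %s (%s) WITH (\n  %s\n)", name, schema, with);
    }

    public static void main(String[] args) {
        // Hypothetical connector options; in practice these would come from configuration.
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("connector", "kafka");
        opts.put("topic", "orders");
        opts.put("properties.bootstrap.servers", "broker:9092");
        String ddl = createTableDdl("orders", "id BIGINT, amount DOUBLE", opts);
        System.out.println(ddl);
        // The resulting string can then be passed to tableEnv.executeSql(ddl).
    }
}
```

This keeps the table schema fixed while letting the WITH options vary per deployment.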
- Original Message -
From: Ganesh Walse
To: user@flink.apache.org
Subject: Hikari data source and jdbc connection issue
Date: September 4, 2024, 09:05
Hi All,
I am using the Hikari data source with ojdbc8 to connect to a database in my
Flink application.
I have a series of jobs which another Java application submits to my jobmanager.
Hi Yaroslav,
Thank you for your mail.
I have tried removing the ojdbc8 jar from the application and keeping it in
the Flink lib folder, but that doesn't help.
I debugged the code and found the root cause: whenever I make a connection
to the database, metaspace increases by 10 MB.
On Fri, 23 Au
Hi All,
I am using the Hikari data source with ojdbc8 to connect to a database in my
Flink application.
I have a series of jobs which another Java application submits to my jobmanager.
On each job submission I connect to the database, after which my job
manager and task manager metaspace increases by 10 MB.
Hi All,
Whenever I scale down my job manager pods, my application jar gets deleted.
After I scale the pod back up, the jar is not uploaded again.
Any help would be appreciated.
Thanks and regards,
Ganesh Walse
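One possible explanation (a sketch, not confirmed in this thread): jars uploaded through the jobmanager's REST/web UI are stored under `web.upload.dir`, which by default sits in a temporary directory inside the pod's ephemeral filesystem, so they disappear when the pod is removed. A mitigation is to point that directory at a persistent volume; the path below is an assumption for illustration:

```
# flink-conf.yaml (sketch)
# Point the upload directory at a mounted persistent volume so uploaded
# jars survive pod restarts and scale-down/scale-up cycles.
web.upload.dir: /opt/flink/uploads
```

Alternatively, application mode with the jar baked into the container image avoids depending on uploaded jars entirely.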
Hi,
We've tried to restart two different jobs with a savepoint:
FlinkKafkaProducer -> KafkaSink, with a new UID on it and the --allowNonRestoredState flag to reset the state of the sink operator.
KafkaSink -> KafkaSink, with a new UID on it and the --allowNonRestoredState flag to reset the state of the sink operator
Hi Dominic,
There aren't many places where such libs can live:
either the application packages them, or they live under the Flink distro
root directory (lib or plugins).
BR,
G
Hi Gabor,
There should never have been a dependency on the old connector (or a remaining
state), as I removed everything before deploying a new version. That's where my
confusion is coming from. It crashes when deploying the same pipeline twice
with the same 3.2.0 dependency when reloading fr
Hi Dominic,
The issue has nothing to do with DynamicKafkaSource.
The scenario here is clear:
* At some point in time you used the 3.2 Kafka connector, which writes
out its state with the v2 serializer
* Later you fell back to a version which is not able to read that (pre 3.1.0
because t
Hi Alexandre,
This seems to be complaining about the Python script loading. It seems that
the local file system uses the `file` scheme prefix, not `local` [1].
FYI, inside your Python script you can add more dependencies, like connectors,
using Python dependency management [2], which differs from Jav
Did you get the indication via the line number that matches the implementation
in 3.0.2?
I’ll have to check, I cannot find it anywhere in the classpath and we’re not
using fat jars in app mode. But I see where this is heading. Thanks for
mentioning!
Best,
Dominik Bünzli
Data, Analytics & AI E
… really hard to tell; your original error message clearly indicated that an
older JAR (3.0.x, or earlier) was involved, assuming it was somewhere in the
classpath …
… did you maybe shade this into your jar?
Thias
From: dominik.buen...@swisscom.com
Sent: Tuesday, September 3, 2024 10:23 AM
To: Sc
Hi Matthias,
Thank you for your reply!
There should not be a dependency for 3.0.x in my docker image, I only add 3.2.0
explicitly. When connecting to the running container I also can’t find any
reference to 3.0.x. I reverted the dependency to 3.0.0-1.17 and it works again.
Could it be related
Hi Dominik,
No clue why this happens, but it looks like
when restarting from the savepoint it uses the flink-connector-kafka version
from your docker image (3.0.x?) instead of the newer one you configured.
How did you integrate the newer version?
Thias
From: dominik.buen...@swisscom.co