Hi,
We have a table A in our database, and we are loading that table into Flink
using the Flink SQL JdbcCatalog.
Here is how we are loading the data:
val catalog = new JdbcCatalog("my_catalog", "database_name", username, password, url)
streamTableEnvironment.registerCatalog("my_catalog", catalog)
streamTab
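For comparison, here is a minimal PyFlink sketch of the same registration done
through SQL DDL rather than the constructor (the database name, credentials,
and base-url are placeholders, and it assumes the flink-connector-jdbc jar and
the database driver are on the classpath):

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register a JDBC catalog; every connection value below is a placeholder.
t_env.execute_sql("""
    CREATE CATALOG my_catalog WITH (
        'type' = 'jdbc',
        'default-database' = 'database_name',
        'username' = 'username',
        'password' = 'password',
        'base-url' = 'jdbc:postgresql://localhost:5432'
    )
""")
t_env.use_catalog("my_catalog")

# Table A in the default database is now queryable by name.
t_env.execute_sql("SELECT * FROM A").print()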
Hi Harshit,
According to the stack trace you provided, I guess you define your Python
function in the main file, and that the function imports xgboost at module
level. The reason for the error is that the xgboost library is difficult for
cloudpickle to serialize. There are two ways to solve this:
1. Move `impo
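Assuming that first suggestion is about moving the import inside the function,
here is a minimal sketch of a PyFlink scalar UDF that imports xgboost and loads
the model lazily in open(), so cloudpickle never has to serialize the xgboost
module (the model path and types are placeholders):

from pyflink.table import DataTypes
from pyflink.table.udf import ScalarFunction, udf

class Predict(ScalarFunction):

    def open(self, function_context):
        # Importing and loading here keeps xgboost out of the pickled closure;
        # each worker imports the library at runtime instead.
        import xgboost as xgb
        self._xgb = xgb
        self._booster = xgb.Booster()
        self._booster.load_model("/path/to/model.bin")  # placeholder path

    def eval(self, feature):
        dmat = self._xgb.DMatrix([[feature]])
        return float(self._booster.predict(dmat)[0])

predict = udf(Predict(), result_type=DataTypes.DOUBLE())

The same pattern works for a plain function UDF by putting `import xgboost` at
the top of the function body instead of at module level.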
Any suggestion is highly appreciated.
On Tue, Nov 15, 2022 at 8:50 PM tao xiao wrote:
> Hi team,
>
> I have a Flink job that joins two streams, let's say streams A and B,
> followed by a keyed process function. In the keyed process function the job
> inserts elements from the B stream into a list state if
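For that shape of job, a rough PyFlink sketch of buffering the B stream in
per-key list state inside a KeyedCoProcessFunction (stream element types, key
selectors, and the join output format are all assumptions):

from pyflink.common.typeinfo import Types
from pyflink.datastream.functions import KeyedCoProcessFunction, RuntimeContext
from pyflink.datastream.state import ListStateDescriptor

class JoinAWithBufferedB(KeyedCoProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        # Per-key list state holding every B element seen so far.
        self.b_buffer = runtime_context.get_list_state(
            ListStateDescriptor("b-buffer", Types.STRING()))

    def process_element1(self, value, ctx):
        # A-stream element: emit one joined record per buffered B element.
        for b in self.b_buffer.get():
            yield "{}|{}".format(value, b)

    def process_element2(self, value, ctx):
        # B-stream element: append to list state for later A elements.
        self.b_buffer.add(value)

It would be wired up along the lines of
stream_a.connect(stream_b).key_by(key_a, key_b).process(JoinAWithBufferedB(),
output_type=Types.STRING()), where key_a and key_b are hypothetical key
selectors; note that without a timer or an explicit clear() the list state
grows without bound.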
Dear Team,
I am facing an issue while running a PyFlink program on a Flink cluster: it
stops running while reading the machine learning model.
This is the error:
./bin/flink run --python /home/desktop/ProjectFiles/test_new.py
Job has been submitted with JobID 0a561cb330eeac5aa7b40ac047d3c