Hello,
I am trying to configure my Flink (1.18.1) jobs to store checkpoints on S3, but
I am getting the error below.
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create
checkpoint storage at checkpoint coordinator side.
...
Caused by: org.apache.flink.core.fs.UnsupportedFile
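For reference, a minimal sketch of pointing checkpoint storage at S3 from PyFlink. The bucket name is a hypothetical placeholder, and this assumes an S3 filesystem plugin (flink-s3-fs-hadoop or flink-s3-fs-presto) is installed under the cluster's plugins/ directory; without one, the s3:// scheme is unsupported and fails at exactly this point:

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(60000)  # checkpoint every 60 s

# "my-bucket" is a placeholder; the s3:// scheme only resolves if an
# S3 filesystem plugin is present on the cluster
env.get_checkpoint_config().set_checkpoint_storage_dir(
    "s3://my-bucket/flink-checkpoints"
)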
Hello,
I am trying to create an in-memory table in PyFlink to use as a staging table
after ingesting some data from Kafka, but it doesn't work as expected.
I have used the print connector, which prints the results, but I need a
similar connector that stores staging results. I have tried wi
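A minimal sketch of one common way to do this without any external connector: register the intermediate result as a temporary view, which stays in the session's catalog and can be queried by later statements. The kafka_logs table name is assumed from the other snippets in this thread:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# "kafka_logs" is assumed to be the Kafka source table registered earlier
staging = t_env.sql_query("SELECT username, action FROM kafka_logs")

# register the intermediate result under a name so later queries can reuse it
t_env.create_temporary_view("staging_logs", staging)

t_env.sql_query(
    "SELECT username, COUNT(action) AS a_count FROM staging_logs GROUP BY username"
).execute().print()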
> AFAIK, the error indicates your path was incorrect.
> You should use '/opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' or
> 'file:///opt/flink/checkpoints/1875588e19b1d8709ee62be1cdcc' instead.
>
> Best.
> Jiadong.Lu
>
> On 5/18/24 2:37 AM,
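The same scheme-qualified path can also be set programmatically; a sketch of the configuration-based route, assuming PyFlink 1.16+ and keeping the directory hypothetical:

from pyflink.common import Configuration
from pyflink.datastream import StreamExecutionEnvironment

config = Configuration()
# 'file:///' makes the scheme explicit so Flink resolves the local filesystem
config.set_string("state.checkpoints.dir", "file:///opt/flink/checkpoints")

env = StreamExecutionEnvironment.get_execution_environment(config)
env.enable_checkpointing(60000)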
Hi,
I am trying to test how checkpoints work for restoring state, but I am not sure
how to run a new instance of a Flink job, after I have cancelled it, using the
checkpoints that I store on the filesystem of the JobManager, e.g.
/opt/flink/checkpoints.
I have tried passing the checkpoint as
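For what it's worth, retained checkpoints can be handed to the CLI the same way as savepoints; a sketch, where the job id and checkpoint number are hypothetical placeholders:

# -s / --fromSavepoint also accepts the path of a retained checkpoint;
# <job-id> and chk-42 stand in for the actual directory names
flink run -s /opt/flink/checkpoints/<job-id>/chk-42 -py job.py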
Hi,
I have a PyFlink job that needs to read from a Kafka topic and the
communication with the Kafka broker requires SSL.
I have connected to the Kafka cluster with something like this, using plain
Python:
from confluent_kafka import Consumer, KafkaException, KafkaError
def get_config(bootstrap
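On the PyFlink side, the Kafka SQL connector forwards any 'properties.*' keys to the underlying Kafka client, so the SSL settings can go into the table DDL. A sketch, where the topic, broker address, truststore path, and password are hypothetical placeholders:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# every 'properties.*' key below is passed through to the Kafka consumer
t_env.execute_sql("""
    CREATE TABLE kafka_logs (
        username STRING,
        `action` STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'logs',
        'properties.bootstrap.servers' = 'broker:9093',
        'properties.security.protocol' = 'SSL',
        'properties.ssl.truststore.location' = '/etc/kafka/truststore.jks',
        'properties.ssl.truststore.password' = 'changeit',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")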
I am running a Flink job locally using
python -m job.py
and it runs fine.
The job is:
calcul_count = t_env.execute_sql("""
    SELECT
        username,
        COUNT(action) AS a_count
    FROM kafka_logs
    GROUP BY username
""")

with calcul_count.collect() as results:
    for row in results:
        print(row)
When I try to s
n3-dev && rm -rf
>>> /var/lib/apt/lists/*
>>> RUN ln -s /usr/bin/python3 /usr/bin/python
>>>
>>> # install PyFlink
>>>
>>> COPY apache-flink*.tar.gz /
>>> RUN pip3 install /apache-flink-libraries*.tar.gz && pip3 install
See the doc on Jar Dependencies Management
> <https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/python/dependency_management/>
> for more details.
>
> Best,
> Biao Geng
>
>
> Phil Stavridis <phi...@gmail.com> wrote on Wed, Apr 10, 2024 at 22:04:
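One way to make the connector jar visible from Python, per that doc page, is the 'pipeline.jars' option; a sketch with a hypothetical local path and jar version:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# the path is a placeholder; 'pipeline.jars' takes a semicolon-separated
# list of URLs that are shipped with the job
t_env.get_config().set(
    "pipeline.jars",
    "file:///opt/flink/lib/flink-sql-connector-kafka-3.1.0-1.18.jar",
)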
> Check the doc
> <https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/overview/#deployment-modes>
> for more details. In short, your yaml shows that you are using the session
> mode, which needs the connector jar to generate the job graph on the client side.
>
>
> Table API, you can have a look at this
> document:
> https://nightlies.apache.org/flink/flink-docs-master/api/python/examples/table/word_count.html
>
> Best,
> Biao Geng
>
>
> Phil Stavridis <phi...@gmail.com> wrote on Wed, Apr 10, 2024 at 03:10:
>> Hello,
>>
Hello,
I have set up Flink and Kafka containers using docker-compose, to test how
Flink works for processing Kafka messages.
I primarily want to check how the Table API works, but also how the DataStream
API would process the Kafka messages.
I have included the main part of the docker-compose.ya
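Since both APIs come up: a minimal sketch of bridging between them, assuming the kafka_logs table from the earlier snippets is already registered:

from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)

# query with the Table API, then hand the result to the DataStream API
table = t_env.sql_query("SELECT username, action FROM kafka_logs")
stream = t_env.to_data_stream(table)  # bridge Table API -> DataStream API
stream.print()
env.execute("table-to-stream")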