Hi team!

https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/

Is it possible to use this for PyFlink? Or is there another way to do asynchronous enrichment of an unordered data stream?
Hello Flink team! How do I properly accumulate streaming data into Avro files partitioned by the hour? In my current implementation, data from the data stream is converted to a table and saved to an Avro file, similar to this:

t_env.execute_sql(""" CREATE TABLE mySink (
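For what it's worth, a minimal sketch (not the original poster's code) of an hour-partitioned Avro sink using the filesystem connector; the table name, columns, S3 path and partition-commit options below are assumptions for illustration:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hour-partitioned Avro sink; dt/hr are the partition columns and
# 's3://my-bucket/output' is a placeholder path.
t_env.execute_sql("""
    CREATE TABLE mySink (
        id STRING,
        payload STRING,
        event_time TIMESTAMP(3),
        dt STRING,
        hr STRING
    ) PARTITIONED BY (dt, hr) WITH (
        'connector' = 'filesystem',
        'path' = 's3://my-bucket/output',
        'format' = 'avro',
        'sink.partition-commit.trigger' = 'process-time',
        'sink.partition-commit.policy.kind' = 'success-file'
    )
""")

The dt/hr columns would then be filled from the event timestamp (for example with DATE_FORMAT) in the INSERT INTO statement that writes to mySink.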
I found the problem. In the data stream I had an empty list, not None (null).

On 2021/12/08 13:11:31 Королькевич Михаил wrote:
> Hello, Flink team!
>
> 1) Is it possible to save a python list to table from datastream?
> 2) and then save the accumulated data to avro file?
Hello, Flink team!

1) Is it possible to save a python list to a table from a datastream?
2) and then save the accumulated data to an avro file?

For example, my data stream has the type:

Types.ROW_NAMED(['id', 'items'], [Types.STRING(), Types.LIST(items_row)])
items_row = Types.ROW_NAMED(field_names=['start'
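For reference, a minimal sketch (not the original code) of declaring such a nested list type and converting the stream to a table; the 'end' field, the sample data, and whether Types.LIST maps cleanly onto a table column in your Flink version are assumptions:

from pyflink.common import Row
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)

# 'end' is a hypothetical second field; only 'start' appears in the original mail.
items_row = Types.ROW_NAMED(['start', 'end'], [Types.STRING(), Types.STRING()])
record_type = Types.ROW_NAMED(['id', 'items'],
                              [Types.STRING(), Types.LIST(items_row)])

ds = env.from_collection(
    [Row(id='a', items=[Row(start='1', end='2')]),
     Row(id='b', items=[])],   # the empty-list case discussed in the follow-up
    type_info=record_type)

# Convert to a table; from here it can be inserted into an Avro filesystem sink.
table = t_env.from_data_stream(ds)
table.execute().print()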
be added to the PYTHONPATH of both the local client and the remote python UDF worker.

Hope this helps!

Best,
Shuiqiang

[1] https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/cli/#submitting-pyflink-jobs

On Fri, Dec 3, 2021 at 5:23 PM, Королькевич Михаил <mkorolkev...@yandex.ru> wrote:
> Hi Flink Team, I
Hi Flink Team,

I'm trying to implement an app on PyFlink. I would like to structure the directory as follows:

flink_app/
    data_service/
        s3.py
        filesystem.py
    validator/
        validator.py
    metrics/
        statictic.py
        quality.py
    common/
        constants.py
    main.py <- e
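A minimal sketch, assuming the layout above and the approach from the reply: register the project files so they are shipped with the job and added to the PYTHONPATH of both the local client and the remote Python UDF workers. The path below is an assumption; zipping flink_app/ and passing the archive, or using the CLI's --pyFiles option ([1] above), should be an equivalent route.

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Ship the project with the job; a .py file, a .zip archive, or a directory
# can be registered here, and each entry ends up on the workers' PYTHONPATH.
env.add_python_file("flink_app")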