Hi Qingsheng,
I am creating BigQuery read sessions at the factory class level. I want to
set the number of streams in the read session using the parallelism set on
the execution environment.
The goal is to have each executor process exactly one stream, so that the
streams are consumed in parallel.
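A minimal sketch of the sizing logic, in plain Python (the function name and parameters are hypothetical; the actual BigQuery Storage read-session request takes a stream-count hint that the service may reduce): clamp the stream count to the environment's parallelism so each parallel subtask can own one stream.

```python
def desired_stream_count(env_parallelism, requested=None):
    """Pick a stream count for a read session so that each parallel
    subtask (executor) can own exactly one stream.

    env_parallelism: parallelism configured on the execution environment.
    requested: optional explicit stream count; clamped to the parallelism.
    """
    if env_parallelism < 1:
        raise ValueError("parallelism must be >= 1")
    if requested is None:
        return env_parallelism
    return max(1, min(requested, env_parallelism))

print(desired_stream_count(4))      # 4
print(desired_stream_count(4, 10))  # 4
print(desired_stream_count(4, 2))   # 2
```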
Thanks and Regards,
Hi John,
1) Regarding the Table API, you could declare the column `detail` as STRING
and then parse it as JSON in a Python user-defined function, as follows:
```
import json

from pyflink.table import DataTypes
from pyflink.table.udf import udf

@udf(result_type=DataTypes.STRING())
def get_id(detail):
    detail_json = json.loads(detail)
    if 'build-id' in detail_json:
        return detail_json['build-id']
    return None
```
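Independent of PyFlink, the parsing logic itself can be exercised with plain `json` (the sample payload below is made up for illustration):

```python
import json

# Same extraction logic as the UDF body, as a plain function for testing.
def extract_build_id(detail):
    detail_json = json.loads(detail)
    if 'build-id' in detail_json:
        return detail_json['build-id']
    return None

sample = json.dumps({"build-id": "abc-123", "status": "SUCCEEDED"})
print(extract_build_id(sample))             # abc-123
print(extract_build_id('{"status": "x"}'))  # None
```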
Hi Shameet,
The likely reason is that, by default, it adds quotes around string data
depending on its length. You can disable quoting using the option
csv-disable-quote-character [1]. However, there is still no option to
force quotes around all string data. If that's your
require
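The distinction in question (quote only when needed vs. always quote) can be illustrated with Python's standard `csv` module; this is an analogy to the behavior being discussed, not Flink's CSV format itself:

```python
import csv
import io

row = ["plain", "with,comma", "42"]

# Minimal quoting: only fields that need it (e.g. embedded delimiter) get quotes.
buf_min = io.StringIO()
csv.writer(buf_min, quoting=csv.QUOTE_MINIMAL).writerow(row)

# Forcing quotes around every field.
buf_all = io.StringIO()
csv.writer(buf_all, quoting=csv.QUOTE_ALL).writerow(row)

print(buf_min.getvalue().strip())  # plain,"with,comma",42
print(buf_all.getvalue().strip())  # "plain","with,comma","42"
```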
TL;DR: I want to know how best to process a stream of events using PyFlink,
where the events in the stream have a number of different schemas.
Details:
I want to process a stream of events coming from a Kinesis data stream which
originate from an AWS EventBridge bus. The events in this stream ar
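One common pattern for heterogeneous events, sketched here in plain Python, is to route each record to a per-schema handler using the `detail-type` field that EventBridge envelopes carry (the handler names and the second detail-type below are hypothetical; "CodeBuild Build State Change" is a real EventBridge detail-type):

```python
import json

# Hypothetical per-schema handlers, keyed by the EventBridge 'detail-type'.
def handle_build(detail):
    return ("build", detail.get("build-id"))

def handle_deploy(detail):
    return ("deploy", detail.get("deployment-id"))

HANDLERS = {
    "CodeBuild Build State Change": handle_build,
    "Deployment State Change": handle_deploy,  # made-up detail-type
}

def dispatch(raw_event):
    """Parse an EventBridge envelope and route it by 'detail-type'."""
    event = json.loads(raw_event)
    handler = HANDLERS.get(event.get("detail-type"))
    if handler is None:
        return ("unknown", None)  # candidate for a side output / dead letter
    return handler(event.get("detail", {}))

sample = json.dumps({
    "detail-type": "CodeBuild Build State Change",
    "detail": {"build-id": "abc-123"},
})
print(dispatch(sample))  # ('build', 'abc-123')
```

In a PyFlink job, the same routing could live inside a map function or a process function, with unknown types diverted to a side output.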
Hello,
We have a KeyedBroadcastProcessFunction with broadcast state
MapStateDescriptor, where PbCfgTenantDictionary
is a Protobuf type for which we have a custom TypeInformation/TypeSerializer. In one
environment, we can't restore the job from a savepoint because the state data
seems to be corrupted. I've added to
Hi:
I am working with Stateful Functions 3.2.0 using the Java SDK.
I wanted to find out whether Stateful Functions offers a way to broadcast
an event to all functions, similar to the broadcast process and keyed
broadcast processing available in Apache Flink.
Also, how would we implement processing t