Hi,
Currently, Flink SQL does not support emitting late data to a side output. For more
details, you can refer to FLINK-20527[1].
One potential workaround is to use the CURRENT_WATERMARK[2] function to filter
out late data in advance, and then handle it separately.
[1] https://issues.apache.org
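For illustration, a minimal sketch of that workaround (the table `events`, its event-time column `ts`, and the sink names are hypothetical; CURRENT_WATERMARK returns NULL until the first watermark is emitted, so the on-time branch must keep those rows):

```
-- On-time branch: rows not yet behind the current watermark.
INSERT INTO main_sink
SELECT * FROM events
WHERE CURRENT_WATERMARK(ts) IS NULL OR ts > CURRENT_WATERMARK(ts);

-- Late branch: rows that arrived after the watermark already passed them.
INSERT INTO late_sink
SELECT * FROM events
WHERE ts <= CURRENT_WATERMARK(ts);
```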
How/what tools can we use to monitor directory usage?
On Thu, Aug 29, 2024 at 8:00 AM John Smith wrote:
> Also, linger and batch are producer settings; we are getting this error on
> consumers. In fact, we don't use Kafka as a sink whatsoever in Flink.
>
> On Thu, Aug 29, 2024, 8:46 AM John Smith
Now I see what you're doing. In general you should look for the UID pair in
the vertex, like this (several operators can belong to a vertex):
Optional<OperatorIDPair> pair =
    vertex.get().getOperatorIDs().stream()
        // guard against operators with no user-defined ID instead of
        // calling get() on an empty Optional
        .filter(o -> o.getUserDefinedOperatorID()
            .map(id -> id.toString().equals(uidHash))
            .orElse(false))
        .findAny();
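As a self-contained sketch of that lookup pattern (using a hypothetical `IdPair` record as a stand-in for Flink's `OperatorIDPair`, whose `getUserDefinedOperatorID()` is empty when no uid/uidHash was set):

```java
import java.util.List;
import java.util.Optional;

public class UidLookup {
    // Stand-in for OperatorIDPair: the user-defined ID may be absent.
    record IdPair(Optional<String> userDefinedId) {}

    static Optional<IdPair> findByUidHash(List<IdPair> ids, String uidHash) {
        return ids.stream()
                // map/orElse skips pairs without a user-defined ID rather
                // than throwing on an empty Optional
                .filter(p -> p.userDefinedId()
                        .map(id -> id.equals(uidHash))
                        .orElse(false))
                .findAny();
    }

    public static void main(String[] args) {
        List<IdPair> ids = List.of(
                new IdPair(Optional.empty()),
                new IdPair(Optional.of("cbc357ccb763df2852fee8c4fc7d55f2")));
        // Prints true: the second pair matches the searched hash.
        System.out.println(
                findByUidHash(ids, "cbc357ccb763df2852fee8c4fc7d55f2").isPresent());
    }
}
```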
Hi Salva,
Which version is this?
BR,
G
On Mon, Sep 23, 2024 at 8:46 PM Salva Alcántara wrote:
> I have a pipeline where I'm trying to add a new operator. The problem is
> that originally I forgot to specify the uid for one source. To remedy this,
> I'm using setUidHash, providing the operator