Hi,
I'm using Flink 1.15 and mostly the DataStream API. Would you suggest I use
Row or GenericRowData (which implements RowData) for my data structure?
Thx
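
For context, the difference I mean is roughly this (a minimal sketch, values made up):

import org.apache.flink.types.Row;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.StringData;

// Row is the external, user-facing structure: plain Java objects,
// easy to create and use directly in DataStream programs.
Row row = Row.of("alice", 42);

// GenericRowData implements the internal RowData interface: fields must
// use Flink's internal data formats (e.g. StringData instead of String).
GenericRowData rowData = new GenericRowData(2);
rowData.setField(0, StringData.fromString("alice"));
rowData.setField(1, 42);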
Hi all, is there any reason not to use RowData in the streaming API? I
usually use the Row type; is there any limitation later when converting from
and to Table?
Eric
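
What I mean by converting from and to Table is roughly this (a sketch assuming the 1.13+ conversion API):

import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// Row works directly with the bridge: positional fields become columns f0, f1, ...
DataStream<Row> input = env
    .fromElements(Row.of("alice", 42), Row.of("bob", 7))
    .returns(Types.ROW(Types.STRING, Types.INT));

Table table = tableEnv.fromDataStream(input);
DataStream<Row> output = tableEnv.toDataStream(table.select($("f0")));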
On Feb 26, 2021, at 23:41, Matthias Pohl wrote:
> Hi Eric,
> it looks like you ran into FLINK-19830 [1]. I'm gonna add Leonard to the
> thread. Maybe, he has a workaround for your case.
>
> Best,
> Matthias
>
> [1] https://issues.apache.org/jira/browse/FLINK-19830
>
> On Fri, Feb 26, 2021 at 11:40 AM eric hoffmann wrote:
Hello
Working with Flink 1.12.1, I read in the docs that a processing-time temporal
join is supported for kv-like joins, but when I try it I get:
Exception in thread "main" org.apache.flink.table.api.TableException:
Processing-time temporal join is not supported yet.
at org.apache.flink.table.pla
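
The shape of the join I'm attempting is roughly this (table and column names here are simplified/hypothetical):

// Orders has a processing-time attribute proc_time; Rates is the lookup side.
// On 1.12 this raises the TableException above.
tableEnv.executeSql(
    "SELECT o.id, r.rate "
        + "FROM Orders AS o "
        + "JOIN Rates FOR SYSTEM_TIME AS OF o.proc_time AS r "
        + "ON o.currency = r.currency");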
Hi,
Is there any overhead in going from a stream to a table for SQL query
execution and back to a stream for the sink? Or is it better to do a full
streaming Table/SQL pipeline (with a table source and a table sink)?
Thx
eric
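
The round trip I mean looks roughly like this (view name, query and eventStream are placeholders):

// stream -> table -> SQL -> stream -> sink
tableEnv.createTemporaryView("events", eventStream);
Table result = tableEnv.sqlQuery(
    "SELECT name, COUNT(*) AS cnt FROM events GROUP BY name");

// an aggregating query produces updates, so go back via the changelog bridge
DataStream<Row> back = tableEnv.toChangelogStream(result);
back.print();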
Hi, I use a job cluster (1 manager and 1 worker) in Kubernetes for a streaming
application. I would like the lightest possible solution: is it possible to
use a LocalEnvironment (manager and worker embedded) and still have HA with
ZooKeeper in this mode? I mean, Kubernetes will restart the j
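
The ZooKeeper HA settings I have in mind in flink-conf.yaml are roughly these (quorum, storage path and cluster-id are placeholders):

# flink-conf.yaml -- ZooKeeper HA (values are examples)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-0:2181,zk-1:2181,zk-2:2181
high-availability.storageDir: s3://flink-ha/recovery
high-availability.cluster-id: /my-streaming-job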
Hi,
In a Kubernetes deployment, I'm not able to display metrics in the dashboard.
I tried to expose and pin the metrics.internal.query-service.port variable,
but nothing. Do you have any ideas?
Thx
Eric
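
What I tried amounts to pinning the port in flink-conf.yaml, roughly like this (the range is just an example), and then exposing it on the TaskManager pods so the JobManager, which the dashboard pulls metrics through, can reach it:

# flink-conf.yaml -- fixed range for the internal metric query service
# (the default is 0, i.e. a random port, which is hard to expose in Kubernetes)
metrics.internal.query-service.port: 50100-50200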
Hi
Is it possible to call a batch job from a streaming context?
What I want to do is: for a given input event, fetch Cassandra elements based
on the event data, apply a transformation on them, and apply a ranking once
all elements fetched from Cassandra are processed.
If I did this in batch mode I would have to
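
A rough sketch of the streaming version I'm imagining, using Flink's Async I/O (Event, RankedResult and fetchAndRankAsync are placeholders for my own types and Cassandra client code):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// For each event: query Cassandra asynchronously, transform the rows, rank
// them, and emit one result once all rows for that event are processed.
class CassandraRanking extends RichAsyncFunction<Event, RankedResult> {
    @Override
    public void asyncInvoke(Event event, ResultFuture<RankedResult> future) {
        CompletableFuture<RankedResult> query = fetchAndRankAsync(event);
        query.thenAccept(result -> future.complete(Collections.singleton(result)));
    }
}

DataStream<RankedResult> ranked = AsyncDataStream.unorderedWait(
    events, new CassandraRanking(), 5, TimeUnit.SECONDS, 100);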
Hi.
I need to compute a Euclidean distance between an input vector and a full
dataset stored in Cassandra and keep the n lowest values. The Cassandra
dataset is evolving (mutable). I could do this in a batch job, but I would
have to trigger it each time, and the input is more like a slow stream, but
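
The scoring step itself is simple; roughly this (plain Java, names made up), a bounded max-heap over the distances:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

static double euclidean(double[] a, double[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return Math.sqrt(sum);
}

// Keep the n candidates closest to the query: a max-heap of size n whose
// root is the farthest kept candidate, evicted when a closer one arrives.
static List<double[]> nClosest(double[] query, Iterable<double[]> candidates, int n) {
    PriorityQueue<double[]> heap = new PriorityQueue<>(
        Comparator.comparingDouble((double[] v) -> euclidean(query, v)).reversed());
    for (double[] c : candidates) {
        heap.offer(c);
        if (heap.size() > n) {
            heap.poll(); // drop the current farthest
        }
    }
    return new ArrayList<>(heap);
}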