Yes, Flink jobs are deployed using `./bin/flink run`. It will use the
configuration in conf/flink-conf.yaml to connect to the Flink cluster.
It looks like you don't have the right dependencies loaded onto your
classpath. Have you had a look at the documentation about project
configuration [1]? Thi
Hi,
I have 2 streams: one is event data and the other is rules. I broadcast the
rules stream and then key the data stream on event type. The connected
stream is processed thereafter.
We faced an issue where the rules data in the topic got deleted because of
the Kafka retention policy.
Post this the existing
Hi Yidan,
Thank you for your reply. I was wondering if there is some way that the
process function can know which condition fired the trigger.
E.g.: if I set the trigger to fire when the object associated with a key has
value 2, 8, or 10 (3 conditions for the trigger to fire), then in the process
function, I
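The question above — letting downstream processing know which condition made a trigger fire — can be sketched outside Flink. This is a hypothetical plain-Python illustration, not the Flink Trigger API; the condition names and values are made up to mirror the example in the message:

```python
# Hypothetical sketch (not the Flink Trigger API): evaluate named
# conditions over the window contents and emit the names of the ones
# that fired, so a downstream function can inspect them.
TRIGGER_CONDITIONS = {
    "value_is_2": lambda v: v == 2,
    "value_is_8": lambda v: v == 8,
    "value_is_10": lambda v: v == 10,
}

def fired_conditions(values):
    """Return the names of the conditions satisfied by any value in the window."""
    return sorted(
        name
        for name, predicate in TRIGGER_CONDITIONS.items()
        if any(predicate(v) for v in values)
    )
```

In Flink itself one common workaround is for the trigger (or an earlier operator) to tag the element with the condition that matched, since `Trigger` and the window function do not share state directly.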
Hello,
I'm trying to use a custom trigger for one of my use cases. I have some basic
logic (as shown below) of using keyBy on the input stream with a window of
1 min.
.keyBy()
.window(TumblingEventTimeWindows.of(Time.seconds(60)))
.trigger(new CustomTrigger())
.aggregate(Input.getAggregationFunc
A dedicated Slack would be awesome.
On Mon, Feb 22, 2021, 22:57 Sebastián Magrí wrote:
> Is there any chat from the community?
>
> I saw the freenode channel but it's pretty dead.
>
> A lot of the time a more chat-like venue for discussing stuff
> synchronously or just sharing ideas turns out v
Is there any chat from the community?
I saw the freenode channel but it's pretty dead.
A lot of the time a more chat-like venue for discussing stuff
synchronously or just sharing ideas turns out very useful and stimulates the
community.
--
Sebastián Ramírez Magrí
Hello,
Is there a Julia API or interface for using Flink?
Thanks in advance for any response.
Beni
Can I set the watermark strategy to bounded out-of-orderness when using the
Table API and SQL DDL to assign watermarks?
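For reference, in the SQL DDL a bounded-out-of-orderness watermark is declared with a WATERMARK clause on the event-time column. A minimal sketch follows; the table name, columns, and the 5-second bound are made up for illustration, and the connector options are elided:

```sql
CREATE TABLE events (
    user_id STRING,
    event_time TIMESTAMP(3),
    -- watermark = event_time - 5s, i.e. bounded out-of-orderness of 5 seconds
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    ...
);
```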
My customer wants us to install this package in our Flink Cluster:
https://github.com/twitter/AnomalyDetection
One of our engineers developed a python version:
https://pypi.org/project/streaming-anomaly-detection/
Is there a way to install this in our cluster?
--
Robert Cullen
240-475-4490
Hi,
running your job from within your IDE with no specific configuration
provided (like the Flink job examples provided by Flink [1]) means that
you spin up a local Flink cluster (see MiniCluster [2]). This does not have
the web UI enabled by default. You could enable it by calling
`StreamExecu
I'm trying to calculate a simple rolling average using PyFlink, but somehow
the last rows streaming in seem to be excluded, which I expect to be the
result of data arriving out of order. However, I fail to understand why.
exec_env = StreamExecutionEnvironment.get_execution_environment()
exec_env
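The behaviour described above — late rows seemingly excluded — is consistent with how a bounded-out-of-orderness watermark admits events. Here is a plain-Python sketch (not the PyFlink API; the function name, event tuples, and the out-of-orderness bound are made up) showing how an event older than the current watermark gets dropped from a rolling average:

```python
# Plain-Python sketch (not PyFlink) of watermark-based late-event dropping:
# a bounded-out-of-orderness watermark trails the maximum timestamp seen,
# and events at or behind the watermark are treated as too late.
def rolling_average_with_watermark(events, max_out_of_orderness):
    """events: list of (timestamp, value); returns the rolling averages of admitted values."""
    max_ts = float("-inf")
    admitted = []
    averages = []
    for ts, value in events:
        max_ts = max(max_ts, ts)
        watermark = max_ts - max_out_of_orderness
        if ts <= watermark:
            continue  # event is late relative to the watermark: dropped
        admitted.append(value)
        averages.append(sum(admitted) / len(admitted))
    return averages

# With a bound of 2, the event at ts=1 arriving after ts=5 has been seen
# (watermark = 3) is dropped, so it never contributes to the average.
```

A similar effect in a real job also appears at end of input: windows whose closing watermark is never reached (because no later event arrives to advance it) may not fire, which can make the last rows look excluded.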
Hi Miguel,
I think that there are a couple of ways to achieve this, and it really
depends on your specific use case, and the trade-offs
that you are willing to accept.
For example, one way to approach this:
- Suppose you have an external service somewhere that returns a
representation of the logi
Thanks Chesnay, that answers my question.
In my case NextOp is operating on keyed streams, and now it makes sense to me
that, along with key re-distribution, the state will also be re-distributed, so
effectively the ‘NextOp4’ instance can process all the tuples together for key
‘A’, those that wer
Flink IT tests cover queryable state with the mini cluster.
All tests:
https://github.com/apache/flink/tree/5c772f0a9cb5f8ac8e3f850a0278a75c5fa059d5/flink-queryable-state/flink-queryable-state-runtime/src/test/java/org/apache/flink/queryablestate/itcases
Setup/Configs:
https://github.com/apache/flin
Let's clarify what NextOp is.
Is it operating on a keyed stream, or is it something like a map function
that has some internal state based on keys of the input elements (e.g.,
it has something like a Map that it queries/modifies for
each input element)?
If NextOp operates on a keyed stream, then
Just needed more clarity in terms of a processing scenario.
Say I was having records of key ‘A’ on a parallel instance ‘Op1’ of operator
‘Op’, and the next operator ‘NextOp’ in the sequence of transformations was
getting records of key ‘A’ on its parallel instance ‘NextOp2’ at the time when
the
Hi Till,
Thanks for the feedback.
My use case is a little bit trickier, as I can’t key all the streams by the
same field.
Basically I’m trying to solve Continuous SPARQL queries, which consist of many
joins. I’ve seen that SPARQL queries over RDF data have been discussed before on
the mailin
Thanks a lot Timo!
On Mon, 22 Feb 2021 at 08:19, Timo Walther wrote:
> Yes, this is possible. You can run `tableEnv.sqlQuery("...").explain()`
> or `tableEnv.toRetractStream(table)` which would trigger the complete
> translation of the SQL query without executing it.
>
> Regards,
> Timo
>
> On 2
Yes, this is possible. You can run `tableEnv.sqlQuery("...").explain()`
or `tableEnv.toRetractStream(table)` which would trigger the complete
translation of the SQL query without executing it.
Regards,
Timo
On 20.02.21 18:46, Sebastián Magrí wrote:
I mean the SQL queries being validated when I