Hi Elias,
You can do it with 1.3 and IterativeConditions. The method
ctx.getEventsForPattern("foo") returns only those events that were matched
by the "foo" pattern in that particular branch.
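A rough sketch of how that can look (untested; the Event POJO and its
getType()/getValueB() accessors are illustrative):

import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;

// Match a type=1 event as "foo", then a type=2 event whose value_b equals
// the value_b of the event already matched for "foo" in this branch.
Pattern<Event, ?> pattern = Pattern.<Event>begin("foo")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event e) {
            return e.getType() == 1;
        }
    })
    .followedBy("bar")
    .where(new IterativeCondition<Event>() {
        @Override
        public boolean filter(Event e, Context<Event> ctx) throws Exception {
            if (e.getType() != 2) {
                return false;
            }
            // Only the events matched for "foo" in this branch are returned.
            for (Event foo : ctx.getEventsForPattern("foo")) {
                if (foo.getValueB().equals(e.getValueB())) {
                    return true;
                }
            }
            return false;
        }
    });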
I mean that for a sequence like (type=1, value_b=X); (type=1,
value_b=Y); (type=2, value_b=X) both events of ty
Hi, all
I use /bin/flink run -m yarn-cluster
to submit my Flink job. But after this, a process named CliFrontend
keeps running. After a while, there are many CliFrontend processes running
on my machine that are not needed. Does anyone have a good idea to solve this?
Thanks!
There doesn't appear to be a way to join events across conditions using the
CEP library.
Consider events of the form (type, value_a, value_b) on a stream keyed by
the value_a field.
Under 1.2 you can create a pattern such that, for a given value_a as
specified by the stream key, there is a match if an
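For concreteness, the 1.2-style pattern looks roughly like this (a sketch,
assuming 1.2's where(FilterFunction) and an illustrative Event POJO); each
condition only sees the event it is testing, so the second event's value_b
cannot be compared against the first matched event:

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.cep.pattern.Pattern;

// The stream is keyed by value_a, so all candidate events share the same
// value_a. Each where() clause only sees the current event; there is no
// handle to the event matched by the previous condition, hence no
// cross-condition join.
Pattern<Event, ?> pattern = Pattern.<Event>begin("first")
    .where(new FilterFunction<Event>() {
        @Override
        public boolean filter(Event e) {
            return e.getType() == 1;
        }
    })
    .followedBy("second")
    .where(new FilterFunction<Event>() {
        @Override
        public boolean filter(Event e) {
            // Cannot reference the "first" event's value_b here.
            return e.getType() == 2;
        }
    });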
Hi albert,
As far as I know, if the upstream data will be consumed by multiple
consumers, multiple subpartitions are generated, and each subpartition
corresponds to one input channel consumer. So there is a one-to-one
correspondence among subpartition -> subpartition view -> input channel.
cheers,
Hi, Flink newbie here.
I played with the API (built from GitHub master) and encountered some
issues, but I am not sure whether they are limitations or actually by
design:
1. The data stream reduce method does not take a
RichReduceFunction. The code compiles but throws a runtime exception
when submitted
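For context, a plain (non-rich) ReduceFunction on the keyed stream is the
variant I would expect to work; a minimal sketch, assuming an input stream
of Tuple2<String, Integer> named 'input' (illustrative):

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

// Sum the per-key counts with a plain ReduceFunction (no Rich* variant).
DataStream<Tuple2<String, Integer>> summed = input
    .keyBy(0)
    .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
        @Override
        public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                              Tuple2<String, Integer> b) {
            return new Tuple2<>(a.f0, a.f1 + b.f1);
        }
    });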
Hi Sathi,
I believe the issue is that you pushed the event into the stream and
then started up a consumer app to read it after that. If you push
an event into the Kinesis stream prior to starting up a reader that sets
its initial stream position to LATEST, it will not read that record
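For reference, a minimal sketch of how the initial stream position is
configured on the consumer (stream name, region, and schema are illustrative):

import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties consumerConfig = new Properties();
consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
// LATEST only picks up records pushed after the consumer has started; use
// TRIM_HORIZON or AT_TIMESTAMP (with a configured timestamp) to read records
// that were already in the stream before startup.
consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

FlinkKinesisConsumer<String> consumer =
    new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), consumerConfig);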
Hi ,
I also had a question about how long the data that you broadcast in a
non-changing stream stays available in the operator's JVM … will it be as
long as the operator is alive?
What happens when a slot dies? Does the new slot automatically become aware
of the broadcast data?
Thanks
Sathi
Fro
Hi Gordon,
That was a typo, as I was trying to mask off the stream name. I still had
issues with using LATEST as the initial stream position, so I moved to using
AT_TIMESTAMP to solve it; it works fine now.
Thanks so much for your response.
Sathi
From: "Tzu-Li (Gordon) Tai"
Date: Sunday, April 2
Anyone?
On Fri, Apr 21, 2017 at 10:15 PM, Elias Levy
wrote:
> This is something that has come up before on the list, but in a different
> context. I have a need to rekey a stream but would prefer the stream to
> not be repartitioned. There is no gain to repartitioning, as the new
> partition k
I updated the code a little bit for clarity; the line #56 mentioned in my
previous message is now line #25.
In summary the error I'm getting is this:
---
Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException:
Cannot load user class: com.test.Test
ClassLoader info: URL ClassLoade
Hi Stefan,
Check the code here:
https://gist.github.com/17d82ee7dd921a0d649574a361cc017d , the output is at
the bottom of the page.
Here are the results of the additional tests you mentioned:
1. I was able to instantiate an inner class (Test$Foo) inside the Ignite
closure, no problem with that
2
Hello,
Is there a way for Flink to allow a (pipelined) subpartition to be consumed
by multiple consumers? If not, would it make more sense to implement it as
multiple input channels for a single subpartition or multiple subpartition
views for each input channel?
Any suggestion is appreciated.
Thanks
Hi guys,
I have a Flink streaming job that reads from Kafka, creates some statistics
increments, and stores them in HBase (using normal puts).
I'm using a fold function here with a window of a few seconds.
My tests showed me that restoring state with window functions is not
exactly working how I expe
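The shape of the job is roughly this (a minimal sketch; the Event POJO, its
'key' field, the downstream HBase sink, and the window size are illustrative):

import org.apache.flink.api.common.functions.FoldFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

// Count events per key over a short window; the result is then written to
// HBase with normal puts in a downstream sink (not shown).
DataStream<Long> counts = events
    .keyBy("key")                       // assuming Event is a POJO with a 'key' field
    .timeWindow(Time.seconds(5))
    .fold(0L, new FoldFunction<Event, Long>() {
        @Override
        public Long fold(Long accumulator, Event value) {
            return accumulator + 1;
        }
    });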
Perfect!
Thanks a lot for testing it Luis!
And keep us posted if you find anything else.
As you may have seen, the CEP library is undergoing heavy refactoring for
the upcoming release.
Kostas
> On Apr 25, 2017, at 12:30 PM, Luis Lázaro wrote:
>
> Hi Aljoscha and Kostas, thanks in advance.
>
Thanks for your suggestions.
@Flavio
This is very similar to the code I use and yields basically the same problems.
The examples are based on flink-1.0-SNAPSHOT and avro-1.7.6, which is more
than three years old. Do you have a working setup with newer versions of
Avro and Flink?
@Jörn
I tried t
Hi Aljoscha and Kostas, thanks in advance.
Kostas, I followed your recommendation and it seems to be working fine.
I did:
- upgrade to 1.3-SNAPSHOT from the master branch.
- assign timestamps and emit watermarks using an AscendingTimestampExtractor:
alerts are correct (do not process late events as
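For anyone following along, the extractor part looks roughly like this (a
sketch; the Event type and its timestamp accessor are illustrative):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;

// Assign event-time timestamps and emit watermarks, assuming timestamps are
// ascending within each parallel source instance.
DataStream<Event> withTimestamps = events
    .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Event>() {
        @Override
        public long extractAscendingTimestamp(Event element) {
            return element.getTimestamp(); // epoch milliseconds
        }
    });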
Hi,
I would expect that the local environment picks up the class path from the code
that launched it. So I think the question is what happens behind the scenes
when you call ignite.compute().broadcast(runnable). Which classes are
shipped, and how is the classpath built in the environment that
Hi Martin!
For an example of a source that acknowledges received messages, you could check
the MessageAcknowledgingSourceBase
and the MultipleIdsMessageAcknowledgingSourceBase that ship with Flink. I hope
this will give you some ideas.
Now for the Flink version on top of which to implement yo
Hey Andy!
I agree with Dawid. Jamie has some code available here:
https://github.com/dataArtisans/queryable-state-demo/blob/master/flink-state-server/src/main/scala/com/dataartisans/stateserver/server/FlinkStateServerController.scala
This returns the JSON objects that are used by the simple-json-
Hi Ufuk, thanks for coming back to me on this.
The records are 100 bytes in size, the benchmark being TeraSort, so that
should not be an issue. I have played around with the input size, and here
are my observations:
128 GiB input: 0 Spilling in Flink.
256 GiB input: 88 GiB Spilling in Flink (so 8
Hey David,
Did that work for you? If yes, could you share an example? I have a similar
use case: I need to get notified of an event NOT occurring within a
specified time window.
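One approach I have seen for this is a pattern with within() plus a timeout
handler, so that a timed-out partial match signals the non-occurrence; a
rough sketch, assuming the 1.3 CEP API, an illustrative Event type, and an
already-keyed stream 'keyedEvents':

import java.util.List;
import java.util.Map;

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.PatternTimeoutFunction;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.types.Either;

// "start" must be followed by "end" within 10 minutes; conditions on the two
// steps are omitted for brevity (they would go into where(...) clauses).
Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
    .followedBy("end")
    .within(Time.minutes(10));

PatternStream<Event> patternStream = CEP.pattern(keyedEvents, pattern);

// Timed-out partial matches (left side) represent the expected event that did
// NOT occur in time; complete matches (right side) are the normal case.
DataStream<Either<String, String>> result = patternStream.select(
    new PatternTimeoutFunction<Event, String>() {
        @Override
        public String timeout(Map<String, List<Event>> partialMatch, long timeoutTimestamp) {
            return "timed out waiting after " + partialMatch.get("start").get(0);
        }
    },
    new PatternSelectFunction<Event, String>() {
        @Override
        public String select(Map<String, List<Event>> match) {
            return "completed: " + match.get("end").get(0);
        }
    });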
Thanks much!
Moiz