Any insights on this?
Thanks,
Lawrence
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-do-I-ensure-binary-comparisons-are-being-used-tp10806p10819.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
It seems like you didn't set up the correct Scala SDK.
best,
Kurt
On Mon, Jan 2, 2017 at 10:41 PM, Stephan Epping wrote:
> Hi,
>
> I am getting this error running my tests with 1.1.4 inside the IntelliJ IDE.
>
> java.lang.NoSuchMethodError: org.apache.flink.runtime.
> jobmanager.JobManager$.startJobMana
Dominik,
This should work just as you expect. Maybe the output of print() is just
misleading you. The print() operation will still have a parallelism of two,
but the flatMap() will have a parallelism of 16, and all data elements with
the same key will get routed to the same host.
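Internally, Flink hashes each key into a key group and maps key groups to parallel subtasks; the simple modulo below is only an illustration of the essential property, namely that a key deterministically maps to exactly one subtask:

```python
# Illustrative sketch (not Flink's actual algorithm): each key maps
# deterministically to one parallel subtask, so all elements with the
# same key are processed by the same host. Real Flink hashes keys into
# key groups first; the modulo here just conveys the idea.
def subtask_for_key(key, parallelism):
    return hash(key) % parallelism

parallelism = 16
# The same key always routes to the same subtask:
assert subtask_for_key("sensor-42", parallelism) == \
       subtask_for_key("sensor-42", parallelism)
```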
Hi Henri,
#1 - This is by design. Event time advances with the slowest input
source. If there are input sources that generate no data, this is
indistinguishable from a slow source. Kafka topics where some partitions
receive no data are a problem in this regard -- but there isn't a simple
solutio
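The "slowest input" behavior follows from how event time is combined: an operator's watermark is the minimum over the watermarks of all its inputs, so one input that never emits a watermark pins event time at its initial low value. A toy model (not the Flink API):

```python
# Toy model of event-time progress at an operator with several inputs.
# The operator's watermark is the minimum of its input watermarks, so a
# single idle input (e.g. an empty Kafka partition) holds time back.
LOW = float("-inf")  # initial watermark before any data has arrived

def operator_watermark(input_watermarks):
    return min(input_watermarks)

active = [1000, 1200, 900]      # inputs that receive data
with_idle = active + [LOW]      # plus one partition with no data

assert operator_watermark(active) == 900      # slowest active input wins
assert operator_watermark(with_idle) == LOW   # event time never advances
```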
Hi Gwenhael,
I think what you actually want is to use the Apache Flink metrics
interface. See the following:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html
Sending metrics to StatsD is supported out-of-the-box.
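Independently of Flink's reporter, it can help to know that the StatsD wire format itself is just plain text over UDP, e.g. `name:value|g` for a gauge. A minimal sketch of what a reporter ends up sending (the metric name here is only an example):

```python
import socket

def statsd_gauge(name, value):
    """Format a StatsD gauge line, e.g. 'flink.numRecordsIn:42|g'."""
    return f"{name}:{value}|g"

def send_gauge(name, value, host="localhost", port=8125):
    # StatsD daemons listen for plain-text datagrams over UDP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_gauge(name, value).encode("ascii"), (host, port))
    finally:
        sock.close()
```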
-Jamie
On Mon, Jan 2, 2017 at 1:34 AM, Gwenhael Pas
If there is never a gap between elements larger than the session gap, then
the window never ending is the correct behavior. So, if this is the case
with some data stream, I would not suggest using session windows at all --
or I would use a smaller session gap.
Another alternative would be to u
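To see why the window would never fire, group event timestamps into sessions by the gap rule: a new session only starts when the gap to the previous event exceeds the threshold. A plain sketch (not the Flink implementation, which also handles out-of-order data and window merging):

```python
def sessions(timestamps, gap):
    """Group sorted event timestamps into sessions: a new session starts
    only when the gap to the previous event exceeds `gap`."""
    out = []
    for t in sorted(timestamps):
        if out and t - out[-1][-1] <= gap:
            out[-1].append(t)   # within the gap: extend current session
        else:
            out.append([t])     # gap exceeded: start a new session
    return out

# Events arrive every 5 time units; with a session gap of 10 the gap is
# never exceeded, so there is a single session that never closes.
assert len(sessions(range(0, 100, 5), gap=10)) == 1
```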
Hi Govind,
In Flink 1.2 (feature complete, undergoing testing) you will be able to scale
your jobs/operators up and down at will; however, you'll have to build a
little tooling around it yourself and scale based on your own metrics. You
should be able to integrate this with Docker Swarm or Amazon aut
Hi,
I am getting this error running my tests with 1.1.4 inside the IntelliJ IDE.
java.lang.NoSuchMethodError:
org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(Lorg/apache/flink/configuration/Configuration;Lakka/actor/ActorSystem;Lscala/Option;Lscala/Option;Ljava/lang/Class;Ljava
Hi Aljoscha,
thank you for having a look. Actually there is not too much code based on
timestamps:
stream
  .keyBy("id")
  .map(...)
  .filter(...)
  .map(...)
  .keyBy("areaID")
  .map(new KeyExtractor())
  .keyBy("f1.areaID", "f0.sinterval")
  .window(TumblingEven
Hi,
We are using Flink 1.1.4 version.
There is possibly an issue with EventTimeSessionWindows, where a gap is
specified for considering items part of the same session. The logic here is:
if two adjacent items have a difference in event timestamps of more than the
gap, then the items are considered to b
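Restated as code, two adjacent items belong to the same session if and only if their event-time difference is at most the gap. A simplified sketch of that rule (ignoring out-of-order arrival and Flink's internal window merging):

```python
def split_into_sessions(timestamps, gap):
    # Adjacent items whose event-time difference exceeds `gap` start a
    # new session; otherwise they are assigned to the same session.
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)
        else:
            sessions.append([t])
    return sessions

# With a gap of 30: 100 and 120 share a session, 200 starts a new one.
assert split_into_sessions([100, 120, 200], gap=30) == [[100, 120], [200]]
```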
Hi, and best wishes for the year to come :)
I'd like to be able to programmatically get the (live) values of accumulators
in order to send them using a statsd (or another) client in the JobManager of a
yarn-deployed application. I say live because I'd like to use that in streaming
(24/7) applic
Hi,
I have a few questions related to Flink streaming. I am on 1.2-SNAPSHOT, and
what I would like to accomplish is to have a stream that reads data from
multiple Kafka topics, identifies user sessions, uses an external user
profile to enrich the data, evaluates a script to produce session
aggr