Hello,
I have a question regarding windowing and triggering. I am trying to
connect the dots between the simple windowing API, e.g.
stream.countWindow(1000, 100)
and the underlying representation using the trigger and evictor API:
stream.window(GlobalWindows.create())
.evictor(CountEvictor.of(10
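For context, a minimal sketch of the mapping as I understand it (not
verified against the Flink sources; the counts below are placeholders).
On a keyed stream, countWindow(size, slide) appears to correspond to a
GlobalWindows assignment with a CountEvictor for the window size and a
CountTrigger for the slide:

    import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
    import org.apache.flink.streaming.api.windowing.evictors.CountEvictor;
    import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;

    // sketch: assumes "stream" is a KeyedStream
    stream
        .window(GlobalWindows.create())
        .evictor(CountEvictor.of(1000))   // keep at most the last 1000 elements
        .trigger(CountTrigger.of(100));   // evaluate the window every 100 elements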
I guess my previous question is also asking whether the parallelism is
set for the operator or for the "data stream". Is there implied
repartitioning when the parallelism changes?
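To make the question concrete, a small sketch (the source, host, and port
are placeholders): parallelism is configured per operator, and the question
is whether data is redistributed between two operators that run with
different parallelism, or only when asked for explicitly, e.g. via
rebalance():

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // hypothetical source, just for illustration
    DataStream<String> lines = env.socketTextStream("localhost", 9999);

    DataStream<String> trimmed = lines
        .map(String::trim)
        .setParallelism(5);            // this operator runs with 5 subtasks

    DataStream<String> upper = trimmed
        .rebalance()                   // explicit round-robin repartitioning
        .map(String::toUpperCase)
        .setParallelism(10);           // this operator runs with 10 subtasks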
On Fri, Aug 18, 2017 at 2:08 PM, Jerry Peng wrote:
> Hello all,
>
> I have a question about parallelism and
Hello all,
I have a question about parallelism and partitioning in the
DataStream API. In Flink, a user can set the parallelism of a data
source as well as of operators. So when I set the parallelism of a data
source, e.g.
DataStream<String> text =
env.readTextFile(params.get("input")).setParallelism(5)
does
Hello,
I have a question regarding Flink streaming. I know that if you enable
checkpointing you get an exactly-once processing guarantee. If I do
not enable checkpointing, what are the semantics of the processing? At
least once?
Best,
Jerry
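For reference, a minimal sketch of how checkpointing is switched on (the
interval value is just a placeholder):

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // take a checkpoint every 5 seconds; EXACTLY_ONCE is the default mode
    env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);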
Hello,
I have a Flink streaming job as follows:
DataStream<String> messageStream = env
    .addSource(new FlinkKafkaConsumer082<String>(
        flinkParams.getRequired("topic"),
        new SimpleStringSchema(),
        flinkParams.getProperties())).setParallelism(5);
DataStream> messa
Hello,
I am trying to do some benchmark testing with Flink streaming. When Flink
reads a message in from Kafka, I want to write a timestamp to Redis. How
can I modify the existing Kafka consumer code to do this? What would be the
easiest way to do something like this? Thanks for your help!
Best,
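One way to do this without modifying the Kafka consumer itself would be to
chain a map function right after the source and write to Redis from there.
A rough sketch, untested; it assumes messageStream is the Kafka stream from
the job shown earlier, and the Jedis connection details and key layout are
made up:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import redis.clients.jedis.Jedis;

    DataStream<String> timestamped = messageStream
        .map(new RichMapFunction<String, String>() {
            private transient Jedis jedis;

            @Override
            public void open(Configuration parameters) {
                // one connection per parallel subtask; host/port are placeholders
                jedis = new Jedis("localhost", 6379);
            }

            @Override
            public String map(String msg) {
                // record when this message was read from Kafka; key scheme is made up
                jedis.set("ingest:" + msg.hashCode(),
                          Long.toString(System.currentTimeMillis()));
                return msg;
            }

            @Override
            public void close() {
                if (jedis != null) {
                    jedis.close();
                }
            }
        });

Creating the Jedis client in open() rather than in the driver program avoids
trying to serialize the client with the job graph.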
Hello,
So I am trying to use Jedis (the Redis Java client) with the Flink streaming
API, but I get an exception:
org.apache.flink.client.program.ProgramInvocationException: The main method
caused an error.
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:452)
at
or
Hello,
Does Flink currently support using Redis from operators? If I am using the
streaming API in Flink and I need to look up something in a Redis database
during the processing of the stream, how can I do that?
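For the lookup case, a common pattern (sketch only, untested; connection
details are placeholders, imports as in the earlier Jedis sketch) is a rich
function that opens the Redis connection in open() on each parallel instance
and queries Redis per record:

    public class RedisLookup extends RichMapFunction<String, String> {
        private transient Jedis jedis;

        @Override
        public void open(Configuration parameters) {
            jedis = new Jedis("localhost", 6379);   // placeholder connection
        }

        @Override
        public String map(String key) {
            String value = jedis.get(key);          // per-record lookup
            return key + "," + (value != null ? value : "miss");
        }

        @Override
        public void close() {
            if (jedis != null) {
                jedis.close();
            }
        }
    }

Usage would then be something like stream.map(new RedisLookup());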
Hello,
Does Flink require HDFS to run? I know you can use HDFS for checkpointing and
to process files in a distributed fashion. So can Flink run standalone
without HDFS?
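For what it's worth, a sketch of the standalone case (assuming a Flink
version that ships FsStateBackend; the interval and path are placeholders):
checkpoints can be written to any supported filesystem URI, e.g. a local
directory instead of HDFS:

    import org.apache.flink.runtime.state.filesystem.FsStateBackend;

    env.enableCheckpointing(10000);
    env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"));

A file:// path is only useful on a single node or when the directory is
mounted on all workers.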
Hello,
When I try to run a Storm topology with a Kafka spout on top of Flink, I
get an NPE at:
15:00:32,853 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask
- Error closing stream operators after an exception.
java.lang.NullPointerException
at storm.kafka.KafkaSpout.close(KafkaSp
>
> > If you want to observe the progress, look here:
> > https://issues.apache.org/jira/browse/FLINK-2111
> > and
> > https://issues.apache.org/jira/browse/FLINK-2338
> >
> > This PR resolves both and fixes the problem you observed:
> > https://github.com/
es to
> "kill" the job. However, because the job was never started, there is a
> NotAliveException which is printed to stdout.
>
> -Matthias
>
>
>
> On 09/01/2015 10:26 PM, Jerry Peng wrote:
> > When I run WordCount-StormTopology I get the following excep
use? In flink-0.10-SNAPSHOT it is
> located in submodule "flink-connector-kafka" (which is a submodule of
> "flink-streaming-connector-parent" -- which is a submodule of
> "flink-streaming-parent").
>
>
> -Matthias
>
>
> On 09/01/2015 09:40 PM, Jerry Pe
Hello,
I have some questions regarding how to run one of the flink-storm-examples,
the WordCountTopology. How should I run the job? On GitHub it says I
should just execute
bin/flink run example.jar but when I execute:
bin/flink run WordCount-StormTopology.jar
nothing happens. What am I doing
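One thing that might be worth checking (a guess, not a confirmed fix): if the
jar's manifest does not name a main class, the CLI needs the entry class
passed explicitly with -c; the class name below is a placeholder:

    bin/flink run -c the.main.TopologyClass WordCount-StormTopology.jar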