Hi Jerry,
unfortunately, it seems that the StormFileSpout can only read files from a
local filesystem, not from HDFS. Maybe Matthias has something in the works
for that.
Regards,
Aljoscha
On Tue, 1 Sep 2015 at 23:33 Jerry Peng wrote:
> Ya, that's what I did and everything seems to execute fine but wh
Ya, that's what I did and everything seems to execute fine, but when I try to
run the WordCount-StormTopology with a file on HDFS I get a
java.io.FileNotFoundException:

java.lang.RuntimeException: java.io.FileNotFoundException:
/home/jerrypeng/hadoop/hadoop_dir/data/data.txt (No such file or directory)
You can use "bin/flink cancel JOBID" or JobManager WebUI to cancel the
running job.
The exception you see, occurs in FlinkSubmitter.killTopology(...) which
is not used by "bin/flink cancel" or JobMaanger WebUI.
If you compile the example you yourself, just remove the call to
killTopology().
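For reference, cancelling from the outside looks roughly like this (if I
remember the CLI flags right; JOBID is whatever ID "bin/flink list" reports
for the running job):

    bin/flink list -r        # list running jobs and their JOBIDs
    bin/flink cancel <JOBID>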
-Mat
Oh yes. I forgot about this. I already have a fix for it in a pending
pull request... I hope that this PR is merged soon...

If you want to follow the progress, look here:
https://issues.apache.org/jira/browse/FLINK-2111
and
https://issues.apache.org/jira/browse/FLINK-2338

This PR resolves both.
Hello,
I corrected the number of slots for each task manager but now when I try to
run the WordCount-StormTopology, the job manager daemon on my master node
crashes and I get this exception in the log:
java.lang.Exception: Received a message
CancelJob(6a4b9aa01ec87db20060210e5b36065e) without a l
Yes. That is what I expected.
The JobManager cannot start the job because there are not enough task slots.
It logs a NoResourceAvailableException (it is not shown in stdout; see the
"log" folder). There is no feedback to the Flink CLI that the job could not
be started.
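If you want to confirm this, something like the following should find it (the
log file name follows the default flink-<user>-jobmanager-<host>.log naming;
adjust it to your setup):

    grep NoResourceAvailableException log/flink-*-jobmanager-*.log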
Furthermore, WordCount-StormTopology sleeps
When I run WordCount-StormTopology I get the following exception:
~/flink/bin/flink run WordCount-StormTopology.jar
hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/data.txt
hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/results.txt
org.apache.flink.client.program.ProgramInvocationException: The main
Hi Jerry,
WordCount-StormTopology uses a hard-coded degree of parallelism (dop) of 4. If
you start Flink in local mode (bin/start-local-streaming.sh), you need to
increase the number of task slots to at least 4 in conf/flink-conf.yaml before
starting Flink -> taskmanager.numberOfTaskSlots
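For example (4 is just the minimum this example needs; any value >= 4 works):

    # conf/flink-conf.yaml
    taskmanager.numberOfTaskSlots: 4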
You should actually see the
Concerning the KafkaSource, please use the "FlinkKafkaConsumer". It's the
new and better KafkaSource.
On 01.09.2015 at 21:40, "Jerry Peng" wrote:
> Hello,
>
> I have some questions regarding how to run one of the
> flink-storm-examples, the WordCountTopology. How should I run the job? On
> github
Hello,
I have some questions regarding how to run one of the flink-storm-examples,
the WordCountTopology. How should I run the job? On GitHub it says I should
just execute
bin/flink run example.jar
but when I execute:
bin/flink run WordCount-StormTopology.jar
nothing happens. What am I doing
While working on high availability (HA) for Flink's YARN execution I
stumbled across some limitations with Hadoop 2.2.0. From version 2.2.0 to
2.3.0, Hadoop introduced new functionality which is required for an
efficient HA implementation. Therefore, I was wondering whether there is
actually a need
Hi Rico,
unfortunately the 0.9 branch still seems to have problems with exactly-once
processing and checkpointed operators. We reworked how checkpoints are
handled for the 0.10 release, so it should work well there.
Could you maybe try running on the 0.10-SNAPSHOT release and see if the
problem
Hi!
I still have an issue... I am now using 0.9.1 and the new KafkaConnector,
but I still get duplicates in my Flink program. Here's the relevant part:

final FlinkKafkaConsumer082<String> kafkaSrc =
        new FlinkKafkaConsumer082<String>(
                kafkaTopicIn, new SimpleStringSchema(), properties);
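The consumer is then wired into the job in the usual way (the rest of the
program is not shown here; this is only a generic sketch of the setup, with
checkpointing enabled since exactly-once depends on it):

    // generic sketch, not the exact program
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(5000);                       // checkpoint every 5s
    DataStream<String> lines = env.addSource(kafkaSrc);  // kafkaSrc as defined above
    // ... transformations ...
    env.execute("kafka-in");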
The Apache Flink community is pleased to announce the availability of the
0.9.1 release.
Apache Flink is an open source platform for scalable stream and batch data
processing. Flink's core consists of a streaming dataflow engine that
provides data distribution, communication, and fault tolerance f
Hi Stephan,
Thanks for the advice. It works ...
Cheers, Rico.
> On 31.08.2015 at 20:14, Stephan Ewen wrote:
>
> Hey Rico!
>
> Parts of the "global windows" are still not super stable, and we are heavily
> reworking them for the 0.10 release.
>
> What you can try is reversing the order of