Hi,
I am trying to execute a few Storm topologies using Flink, and I have a
question about the documentation.
Can anyone tell me which of the code below is correct?
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/storm_compatibility.html
https://ci.apache.org/projects/flink/fl
>you're not using the latest 1.0-SNAPSHOT. Did you build from
>source? If so, you need to build again because the snapshot API has
>been updated recently.
>
>Best regards,
>Max
>
>On Thu, Dec 3, 2015 at 6:40 PM, Madhire, Naveen
> wrote:
>> Hi,
>>
>> I
Hi Max,
Yeah, I did route the "count" bolt output to a file and I see the output.
I can see the Storm and Flink output matching.
However, I am not able to use the BoltFileSink class in the 1.0-SNAPSHOT
which I built. I think it's better to wait a day for the Maven sync to
happen so that I can
Hi Max,
I forgot to include flink-storm-examples dependency in the application to
use BoltFileSink.
However, the file created by the BoltFileSink is empty. Is there anything
else I need to do to write to a file using BoltFileSink?
I am using the same code that you mentioned,
bui
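For anyone hitting the same empty-file symptom: a sketch of the sink wiring, based on my recollection of the flink-storm-examples API of that era. The path and the formatter body are made up, and the exact OutputFormatter signature should be double-checked against the version you built. One common cause of an empty file is that the buffered output may only be flushed when the topology shuts down, so the file can look empty while the job is still running.

```java
// Sketch based on the 0.10/1.0-era flink-storm-examples API; the path and the
// OutputFormatter body are illustrative, so verify both against your build.
builder.setBolt("sink",
        new BoltFileSink("/tmp/wordcount-out",
                new OutputFormatter() {
                    @Override
                    public String format(Tuple input) {
                        // e.g. one "word count" pair per line
                        return input.getString(0) + " " + input.getInteger(1);
                    }
                }),
        1)
       .shuffleGrouping("count");
```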
>I am a little puzzled about what might go wrong in your setup. The
>program seems to be correct.
>
>
>-Matthias
>
>
>On 12/04/2015 08:55 PM, Madhire, Naveen wrote:
>> Hi Max,
>>
>> I forgot to include flink-storm-examples dependency in the application
>>
Hey All,
I am getting the error below while executing a simple Kafka-Flink application.
java.lang.NoClassDefFoundError: org/apache/flink/api/java/operators/Keys
Below are the Maven dependencies which I included in my application:

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.1</artifactId>
  <version>0.8.2.0</version>
</dependency>
commo
ients_2.10
- flink-connector-kafka --> flink-connector-kafka-0.8_2.10
That should do it.
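Concretely, assuming Flink 1.0.0 built against Scala 2.10 (both version numbers are assumptions on my part), the renamed connector dependency would look roughly like this in the pom:

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.8_2.10</artifactId>
  <version>1.0.0</version>
</dependency>
```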
FYI: You can also try to use the latest release candidate, described here:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Release-Apache-Flink-1-0-0-RC5-td10628.html
Gree
kka or Scala dependency.
My suspicion is that you still have some incorrect artifact in your
dependencies. The good news is that these problems will disappear once you
can reference a stable release.
Is it possible for you to share the complete dependency section of your project?
Thanks,
Stephan
On Fri
ndency because our
kafka connector will also pull in that kafka dependency.
On Fri, Mar 4, 2016 at 8:07 PM, Madhire, Naveen
<naveen.madh...@capitalone.com> wrote:
Here it is,
1.0.0
apacherelease
Apache release
https://repository.apache.org/c
Hi,
The EventTimeSourceFunction class is not present in version 1.0.0; it was in
0.10.2:
https://github.com/apache/flink/blob/release-0.10.2-rc2/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/EventTimeSourceFunction.java
I am trying to generate a stream from i
Hi,
We are trying to test the recovery mechanism of Flink with Kafka and HDFS sink
during failures.
I’ve killed the job after processing some messages and restarted the same job
again. Some of the messages are being processed more than once, which violates
exactly-once semantics.
I checked the JIRA, and it looks like FLINK-2111 should address the issue I
am facing. I am cancelling the job from the dashboard.
I am using kafka source and HDFS rolling sink.
https://issues.apache.org/jira/browse/FLINK-2111
Is this JIRA part of Flink 1.0.0?
Thanks,
Naveen
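To illustrate the failure mode being discussed, here is a conceptual sketch in plain Python (not Flink code; all names are made up) of why cancelling and restarting a streaming job replays records: the restart resumes from the last checkpointed source offset, so anything consumed after that checkpoint is processed again.

```python
# Conceptual sketch: restart-from-checkpoint replays records consumed after
# the last checkpoint, giving at-least-once (not exactly-once) output.

def run(records, start_offset):
    """Process records from start_offset onwards; return the outputs."""
    return [r.upper() for r in records[start_offset:]]

records = ["a", "b", "c", "d"]

# First run: the job consumed a, b, c, but the last successful checkpoint
# only covered offsets 0..1 (records a and b) before the job was cancelled.
first_run_output = run(records[:3], 0)
checkpointed_offset = 2

# Restart from the checkpoint: "c" is consumed again, so it appears in both
# runs' output unless the sink itself deduplicates.
second_run_output = run(records, checkpointed_offset)

duplicates = set(first_run_output) & set(second_run_output)
print(duplicates)  # {'C'}
```

This is why an exactly-once *sink* (like the rolling HDFS sink discussed below in the thread) has to cooperate with checkpointing rather than just appending output.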
From: "Madhire, Ve
's checkpointing mechanism to
achieve exactly-once output. Otherwise, it might happen that results are
emitted multiple times.
Cheers, Fabian
2016-05-13 22:58 GMT+02:00 Madhire, Naveen
<naveen.madh...@capitalone.com>:
I checked the JIRA and looks like FLINK-2111 should address the issu
.
Did you see events being emitted multiple times (this should not happen with
the RollingFileSink) or being processed multiple times within the Flink
program (which might happen, as explained before)?
Best, Fabian
2016-05-13 23:19 GMT+02:00 Madhire, Naveen
<naveen.madh...@capitalone.com>
/api/java/org/apache/flink/streaming/connectors/fs/RollingSink.html
2016-05-14 4:17 GMT+02:00 Madhire, Naveen
<naveen.madh...@capitalone.com>:
Thanks Fabian. Yes, I am seeing a few records more than once in the output.
I am running the job and canceling it from the dashboard, and running agai
ert
On Tue, May 17, 2016 at 11:27 AM, Stephan Ewen
<se...@apache.org> wrote:
Hi Naveen!
I assume you are using Hadoop 2.7+? Then you should not see the ".valid-length"
file.
The fix you mentioned is part of later Flink releases (like 1.0.3).
Stephan
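For context on why the ".valid-length" file exists at all on pre-2.7 Hadoop, here is a minimal sketch in plain Python (not the actual RollingSink code; file names and record format are made up): the marker records how many bytes of the part file were covered by the last checkpoint, and a reader must ignore everything beyond it.

```python
# Conceptual sketch of the ".valid-length" convention used when the file
# system has no truncate(): a side file records the checkpointed byte count.
import os
import tempfile

d = tempfile.mkdtemp()
part = os.path.join(d, "part-0-0")
marker = part + ".valid-length"

# The sink wrote three records, but only the first two were covered by the
# last checkpoint before the failure.
with open(part, "w") as f:
    f.write("rec1\nrec2\nrec3\n")
with open(marker, "w") as f:
    f.write(str(len("rec1\nrec2\n")))

# A reader that honours the marker truncates to the valid length.
with open(marker) as f:
    valid_length = int(f.read())
with open(part) as f:
    valid_data = f.read()[:valid_length]
print(repr(valid_data))  # only rec1 and rec2 survive
```

On Hadoop 2.7+ the file can be truncated in place during recovery, which is why the marker file should no longer appear there.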
On Mon, May 16, 2016 at
rg/projects/flink/flink-docs-master/apis/streaming/savepoints.html
http://data-artisans.com/how-apache-flink-enables-new-streaming-applications/
Regards,
Robert
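In practice, the savepoints doc linked above boils down to roughly this CLI sequence for stopping a job without losing state (syntax as of Flink 1.0; the job ID, savepoint path, and jar name are placeholders):

```
# Trigger a savepoint for the running job (prints the savepoint path)
bin/flink savepoint <jobID>

# Then cancel the job
bin/flink cancel <jobID>

# Later, resume the job from the savepoint
bin/flink run -s <savepointPath> your-job.jar
```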
On Tue, May 17, 2016 at 4:25 PM, Madhire, Naveen
<naveen.madh...@capitalone.com> wrote:
Hey Robert,
What is the best way to stop t