Hi everyone,
I am using Apache Flink to process a stream of data, and I need to share an
index between all the nodes that process the input data. The index is
updated frequently by the nodes.
I would like to know: is it good practice, from an efficiency standpoint,
to share the DataSet thr
> It works fine for me.
>
> Thanks
> Ashutosh
>
> On Wed, Jun 1, 2016 at 3:37 PM, ahmad Sa P wrote:
>
>> I did test it with Kafka 0.9.0.1, still the problem exists!
>>
>> On Wed, Jun 1, 2016 at 11:50 AM, Aljoscha Krettek wrote:
>>
>>> The Flink Kafka Consumer
I run it in the Eclipse IDE.

On Wed, Jun 1, 2016 at 12:37 PM, Ashutosh Kumar wrote:
> How are you packaging and deploying your jar? I have tested with Flink
> and Kafka 0.9. It works fine for me.
>
> Thanks
> Ashutosh
>
> On Wed, Jun 1, 2016 at 3:37 PM, ahmad Sa P wrote:
for that.
>
> On Wed, 1 Jun 2016 at 10:47 ahmad Sa P wrote:
>
>> Hi Aljoscha,
>> I have tried different versions of Flink (1.0.0 and 1.0.3) and Kafka
>> version 0.10.0.0.
>> Ahmad
>>
>>
>>
>> On Wed, Jun 1, 2016 at 10:39 AM, Aljoscha Krettek wrote:
Which versions of Flink and Kafka are you using?
>
>
>
> On Wed, 1 Jun 2016 at 07:02 arpit srivastava wrote:
>
>> Flink falls back to Kryo serialization, which doesn't support Joda-Time
>> objects out of the box.
>>
>> Use java.util.Date, or register a custom Kryo serializer for Joda types.
>>
>> T
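The workaround suggested above (using `java.util.Date` for the timestamp field instead of a Joda-Time type) can be sketched with a minimal, hypothetical event POJO; the class and field names here are illustrative, not taken from the original code:

```java
import java.io.Serializable;
import java.util.Date;

// Hypothetical event type: the timestamp field is a java.util.Date rather
// than org.joda.time.DateTime, so Flink's default serialization can handle
// it without a custom Kryo serializer. In a real Flink job the class should
// be declared public, with a no-argument constructor, to qualify as a POJO.
class RideEvent implements Serializable {
    long rideId;
    Date eventTime;

    RideEvent() {}  // no-arg constructor, required for Flink's POJO rules

    RideEvent(long rideId, Date eventTime) {
        this.rideId = rideId;
        this.eventTime = eventTime;
    }
}
```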
Hi,
I have a problem running a sample from the hands-on exercises for Apache
Flink. I used the following code to send the output of a stream to an
already-running Apache Kafka broker, and I get the error below. Could
anyone tell me what is going wrong?
Best regards,
Ahmad
public class RideCleansing {
p
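As context for the version discussion in this thread: Flink 1.0.x ships separate Kafka connector artifacts per broker version, and the connector chosen must match the broker. A sketch of the Maven dependency (artifact names follow the Flink 1.0 naming scheme for Scala 2.10 builds; the version number is one of those mentioned in the thread):

```xml
<!-- Connector for a Kafka 0.9.x broker with Flink 1.0.x (Scala 2.10 build). -->
<!-- Use flink-connector-kafka-0.8_2.10 for 0.8.x brokers; Kafka 0.10 brokers -->
<!-- have no dedicated connector in the Flink 1.0.x line. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.10</artifactId>
    <version>1.0.3</version>
</dependency>
```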