> …change commands in [2] to reflect that.
>
> Thanks
> Milinda
>
> [1]
> https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
> [2] https://gist.github.com/jkreps/c7ddb4041ef62a900e6c
>
> On Tue, Mar 15, 2016 at 11:35 AM
> …with more than 100 MB/s from a broker.
>
>
> On Fri, Mar 11, 2016 at 9:33 AM, おぎばやしひろのり wrote:
>>
>> Aljoscha,
>>
>> Thank you for your response.
>>
>> I tried the case with no JSON parsing and no sink (DiscardingSink). The
>> throughput was 8228 msg/sec.
> …which itself can cause quite a slowdown. You could try:
>
> datastream.addSink(new DiscardingSink())
>
> which is a dummy sink that does nothing.
>
> Cheers,
> Aljoscha
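
[Editor's note: as a rough sketch (not from the thread), the source-only test Aljoscha describes could look something like the following, assuming Flink 1.0 with the Kafka 0.9 connector; the topic name, group id, broker list, and package paths are assumptions and may differ for other Flink versions.]

    import java.util.Properties;

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaReadThroughputTest {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder broker list and group id (assumptions, not from the thread).
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
            props.setProperty("group.id", "flink-perf-test");

            // Read raw strings from Kafka; no JSON parsing at all.
            DataStream<String> stream = env.addSource(
                    new FlinkKafkaConsumer09<>("test-topic", new SimpleStringSchema(), props));

            // DiscardingSink drops every record, so this job measures only
            // the Kafka source and network path.
            stream.addSink(new DiscardingSink<String>());

            env.execute("Kafka read throughput test");
        }
    }

Comparing this run against the full pipeline (JSON parsing plus the Elasticsearch sink) on the same topic shows how much of the CPU goes to parsing and sinking rather than to the Kafka source itself.
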
>> On 08 Mar 2016, at 13:31, おぎばやしひろのり wrote:
>>
>> Stephan,
>>
>>
>> …1 CPU is not so bad considering
>> Flink's scalability and fault tolerance.
>> Thank you for your advice.
>> Regards,
>> Hironori Ogibayashi
2016-02-26 21:46 GMT+09:00 おぎばやしひろのり :
> Stephan,
>
> Thank you for your quick response.
> I will try and post the result later.
>
> Regards,
> 2) Use a dummy sink (discarding) rather than Elasticsearch, to see if that
> is limiting
> 3) Check the JSON parsing. Many JSON libraries are very CPU intensive and
> easily dominate the entire pipeline.
>
> Greetings,
> Stephan
>
>
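
[Editor's note: on point 3) above, one common pattern (a sketch, not from the thread) is to do the parsing in a RichMapFunction and create a single Jackson ObjectMapper per parallel task in open(), instead of a new parser per record; the class name and the choice of Jackson here are assumptions for illustration.]

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Parses each Kafka record (a JSON string) into a Jackson tree.
    public class ParseJson extends RichMapFunction<String, JsonNode> {

        // ObjectMapper is not serializable, so it is created in open(), not in the constructor.
        private transient ObjectMapper mapper;

        @Override
        public void open(Configuration parameters) {
            // Created once per parallel task, then reused for every record.
            this.mapper = new ObjectMapper();
        }

        @Override
        public JsonNode map(String value) throws Exception {
            return mapper.readTree(value);
        }
    }

Attaching it as stream.map(new ParseJson()).addSink(new DiscardingSink<JsonNode>()) and comparing against the parse-free run above gives a rough number for how much of the reported 300% CPU is JSON parsing alone.
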
> On Fri, Feb 26, 2016 at 11:
Hello,
I started evaluating Flink and tried a simple performance test.
The result was just about 4000 messages/sec with 300% CPU usage. I
think this is quite low and am wondering if it is a reasonable result.
If someone could check it, it would be great.
Here is the detail:
[servers]
- 3 Kafka brokers