Please let me know if you have any comments or suggestions for me.

Thanks.

Best regards
Hawin
*From:* Márton Balassi [mailto:balassi.mar...@gmail.com]
*Sent:* Sunday, June 28, 2015 9:09 PM
*To:* user@flink.apache.org
*Subject:* Re: Best way to write data to HDFS by Flink

Dear Hawin,

As for your issues with running the Flink Kafka examples: are those resolved with Aljoscha's comment in the other thread? :)

Best,
Marton
On Fri, Jun 26, 2015 at 8:40 AM, Hawin Jiang wrote:
Hi Stephan

Yes, that is a great idea. If it is possible, I will try my best to contribute some code to Flink.
But I have to run some Flink examples first to understand Apache Flink.
I just ran some Kafka with Flink examples, but none of them worked for me. I am so sad right now.
I didn't get any
Hi Hawin!

If you are creating code for such an output into different files/partitions, it would be amazing if you could contribute this code to Flink.
It seems like a very common use case, so this functionality would be useful to other users as well!

Greetings,
Stephan
On Tue, Jun 23, 2015 at 3:
Dear Hawin,

We do not have out-of-the-box support for that; it is something you would need to implement yourself in a custom SinkFunction.

Best,
Marton
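Márton's suggestion above can be sketched in plain Java. The following is a minimal, hypothetical illustration of the buffering-sink pattern, not Flink's actual API: a real implementation would extend Flink's `SinkFunction` and write each flushed batch to HDFS via Hadoop's `FileSystem` API. The local `Sink` interface, the `BufferingSink` class, and the count-based flush threshold here are all assumptions made for the sake of a self-contained example.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a buffering sink: collect records, flush them in batches. */
public class BufferingSinkSketch {

    /** Stand-in for Flink's per-record sink callback (invoke is called once per element). */
    interface Sink<T> {
        void invoke(T value);
    }

    static class BufferingSink implements Sink<String> {
        private final int flushThreshold;
        private final List<String> buffer = new ArrayList<>();
        // Stands in for files written to HDFS; a real sink would write here instead.
        private final List<List<String>> flushed = new ArrayList<>();

        BufferingSink(int flushThreshold) {
            this.flushThreshold = flushThreshold;
        }

        @Override
        public void invoke(String value) {
            buffer.add(value);
            if (buffer.size() >= flushThreshold) {
                flush();
            }
        }

        /** Flush the current buffer as one batch (a real sink would also flush on close). */
        void flush() {
            if (!buffer.isEmpty()) {
                flushed.add(new ArrayList<>(buffer));
                buffer.clear();
            }
        }

        List<List<String>> flushedBatches() {
            return flushed;
        }
    }

    public static void main(String[] args) {
        BufferingSink sink = new BufferingSink(3);
        for (int i = 1; i <= 7; i++) {
            sink.invoke("record-" + i);
        }
        sink.flush(); // flush the remainder
        System.out.println(sink.flushedBatches().size()); // 3 batches: 3 + 3 + 1 records
    }
}
```

A real Flink sink would replace the in-memory `flushed` list with HDFS writes and could flush on a timer rather than a record count; the batching shape stays the same.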
On Mon, Jun 22, 2015 at 11:51 PM, Hawin Jiang wrote:
Hi Marton

If we received a huge amount of data from Kafka and wrote it to HDFS immediately, we should use the buffer timeout described at your URL.
I am not sure whether you have Flume experience. Flume can be configured with a buffer size and a partition as well.
What is the partition?
For example:
I want to write a 1-minute buffer
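The Flume-style time partitioning Hawin describes boils down to deriving a bucket path from each record's timestamp, so that records arriving within the same minute land in the same HDFS directory. A minimal sketch of that mapping, assuming a hypothetical `bucketPath` helper and a `yyyy-MM-dd/HH-mm` directory layout (neither is a Flink or Flume API):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimeBucket {

    /** Maps a timestamp to a per-minute partition directory under basePath. */
    static String bucketPath(String basePath, long timestampMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd/HH-mm");
        // Pin the zone so bucket names are stable regardless of where the sink runs.
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return basePath + "/" + fmt.format(new Date(timestampMillis));
    }

    public static void main(String[] args) {
        // All records with timestamps inside the same minute share one bucket.
        System.out.println(bucketPath("hdfs:///events", 0L)); // hdfs:///events/1970-01-01/00-00
    }
}
```

A custom sink could call such a helper in `invoke()` to pick the output file for each record, rolling to a new file whenever the bucket changes.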
Thanks Marton
I will use this code to implement my testing.
Best regards
Hawin
On Wed, Jun 10, 2015 at 1:30 AM, Márton Balassi wrote:
Dear Hawin,

You can pass an HDFS path to DataStream's and DataSet's writeAsText and writeAsCsv methods.
I assume that you are running a streaming topology, because your source is Kafka, so it would look like the following:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Hi All

Can someone tell me the best way to write data to HDFS when Flink receives data from Kafka?
Big thanks for your example.

Best regards
Hawin