Thanks Timo
That is a good interview question
Best regards
Hawin
On Thu, Aug 13, 2015 at 1:11 AM, Michael Huelfenhaus <
m.huelfenh...@davengo.com> wrote:
> Hey Timo,
>
> yes that is what I needed to know.
>
> Thanks
> - Michael
>
> On 12.08.2015 at 12:44, Timo wrote:
Great job, Guys
Let me read it carefully.
On Wed, Aug 5, 2015 at 7:25 AM, Stephan Ewen wrote:
> I forgot the link ;-)
>
>
> http://data-artisans.com/high-throughput-low-latency-and-exactly-once-stream-processing-with-apache-flink/
>
> On Wed, Aug 5, 2015 at 4:11 PM, Stephan Ewen wrote:
>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.1</version>
</dependency>
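In addition to the kafka-clients artifact above, a Flink job using the Kafka connector typically also needs Flink's own connector dependency. A hedged sketch of that stanza (coordinates assumed from the Flink 0.9.x artifact naming, so verify against your Flink version's documentation):

```xml
<!-- Assumed coordinates for the Flink 0.9.x Kafka connector; verify before use. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>0.9.0</version>
</dependency>
```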
Best regards
Hawin
-Original Message-
From: Wendong [mailto:wendong@gmail.com]
Sent: Tuesday, July
Maybe you can post your pom.xml file so we can identify your issue.
Best regards
Hawin
On Tue, Jul 21, 2015 at 2:57 PM, Wendong wrote:
> also tried using zkclient-0.3.jar in lib/, updated build.sbt and rebuilt.
> It doesn't help; I still got the same NoClassDefFoundError:
all workloads.
I hope I can find a good one for our enterprise application. I will let
you know if I can move forward with this.
Good Night.
Best regards
Hawin
On Wed, Jul 15, 2015 at 9:30 AM, George Porter wrote:
> Hi Hawin,
>
> We used varying numbers of the i2.8xlarge servers, dependi
/BigDataBench/industry-standard-benchmarks/)
Maybe we can run TeraSort to see whether the performance is better than your
record.
Please let me know if you have any comments.
Thanks for the support.
Best regards
Hawin
On Tue, Jul 14, 2015 at 9:42 AM, Mike Conley wrote:
> George is corr
Hi Slim
I will follow this and keep you posted.
Thanks.
Best regards
Hawin
On Mon, Jul 13, 2015 at 7:04 PM, Slim Baltagi wrote:
> Hi
>
> BigDataBench is an open source Big Data Benchmarking suite from both
> industry and academia. As a subset of BigDataBench, BigDataB
your records, but we don't have that many servers for
testing.
Please let me know whether you can help us.
Thank you very much.
Best regards
Hawin
72.93GB/sec = (1000TB*1024) / (234min*60)
The performance test report from Databricks.
https://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html
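The 72.93 GB/sec figure above follows directly from the reported numbers (1000 TB sorted in 234 minutes); a quick sanity check of the arithmetic:

```python
# Sanity-check the quoted sort throughput: 1000 TB sorted in 234 minutes.
total_gb = 1000 * 1024       # 1000 TB expressed in GB
elapsed_s = 234 * 60         # 234 minutes in seconds
throughput = total_gb / elapsed_s
print(round(throughput, 2))  # 72.93 GB/sec
```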
Best regards
Hawin
On Fri, Jul 10, 2015 at 1:33 AM, Stephan Ewen wrote:
> Hi Dongwon K
Hi Stephan
Yes, you are correct. It looks like TPCx-HS is an industry standard
for big data, but how do we get a Flink number on it?
I think it is also difficult to get a Spark performance number based on
TPCx-HS.
If you know someone who can provide servers for performance testing, I would
like t
Hi Slim and Fabian
Here is the Spark benchmark. https://amplab.cs.berkeley.edu/benchmark/
Do we have a similar report or comparison like that?
Thanks.
Best regards
Hawin
On Mon, Jul 6, 2015 at 6:32 AM, Slim Baltagi wrote:
> Hi Fabian
>
> > I could not find which versions
/year/month/day/hour folder. I think that folder structure is good
for the Flink Table API in the future.
Please let me know if you have any comments or suggestions for me.
Thanks.
Best regards
Hawin
From: Márton Balassi [mailto:balassi.mar...@gmail.com]
Sent: Sunday, June 28, 2015 9
Hi Aljoscha
You are the best.
Thank you very much.
It is working now.
Best regards
Hawin
On Fri, Jun 26, 2015 at 12:28 AM, Aljoscha Krettek
wrote:
> Hi,
> could you please try replacing JavaDefaultStringSchema() with
> SimpleStringSchema() in your first example. The
I haven't had any trouble running the Kafka examples from kafka.apache.org so
far.
Please advise.
Thanks.
Best regards
Hawin
On Wed, Jun 24, 2015 at 1:02 AM, Stephan Ewen wrote:
> Hi Hawin!
>
> If you are creating code for such an output into different
> files/partitions, it would be ama
$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
On Thu, Jun 25, 2015 at 11:06 PM, Hawin Jiang wrote:
> Dear Marton
>
> I have upgraded my Flink to 0.9.
s.deserialize(SerializationUtils.java:222)
... 8 more
On Tue, Jun 23, 2015 at 6:31 AM, Márton Balassi
wrote:
> Dear Hawin,
>
> Sorry, I have managed to link to a pom that has been changed in the
> meantime. But we have added a section to our doc clarifying your question.
> [1] Since then Stepha
Hi Flavio
Here is the example from Marton:
You can use the env.writeAsText method directly:
StreamExecutionEnvironment env = StreamExecutionEnvironment.
getExecutionEnvironment();
env.addSource(new PersistentKafkaSource(..))
.map(/* do your operations */)
.writeAsText("hdfs://:/path/to/your/file
provided a similar partition API or configuration
for this.
Thanks.
Best regards
Hawin
On Wed, Jun 10, 2015 at 10:31 AM, Hawin Jiang wrote:
> Thanks Marton
> I will use this code to implement my testing.
>
>
>
> Best regards
> Hawin
>
> On Wed, Jun 10, 2015 at 1:30
-dataflow/blob/master/pom.xml#L286-296
>
> On Thu, Jun 11, 2015 at 10:32 AM, Hawin Jiang
> wrote:
>
>> Dear Marton
>>
>> What do you mean by running it locally in Eclipse with 'Run'?
>> Do you want me to run it on the NameNode?
>> But my namenode didn't in
árton Balassi
wrote:
> Dear Hawin,
>
> No problem, I am glad that you are giving our Kafka connector a try. :)
> The dependencies listed look good. Can you run the example locally from
> Eclipse with 'Run'? I suspect that maybe your Flink cluster does not have
> the acc
Best regards
Email: hawin.ji...@gmail.com
From: Márton Balassi [mailto:balassi.mar...@gmail.com]
Sent: Thursday, June 11, 2015 12:58 AM
To: user@flink.apache.org
Subject: Re: Kafka0.8.2.1 + Flink0.9.0 issue
Dear Hawin,
This looks like a dependency issue; the Java compiler does not
"LA_" + messageNo);
producer.send(new KeyedMessage(topic, messageStr));
messageNo++;
}
}
Best regards
Hawin
Hi Robert
Congrats on your presentation. I have downloaded your slides.
Hopefully Flink can move forward quickly.
Best regards
Hawin
On Wed, Jun 10, 2015 at 10:14 PM, Robert Metzger
wrote:
> Hi Hawin,
>
> here are the slides:
> http://www.slideshare.net/robertmetzger1/apache-fl
Hi Michels
I don't think you can watch them online now.
Can someone share their presentations or feedback with us?
Thanks
Best regards
Hawin
On Mon, Jun 8, 2015 at 2:34 AM, Maximilian Michels wrote:
> Thank you for your kind wishes :) Good luck from me as well!
>
> I was just
Thanks Marton
I will use this code to implement my testing.
Best regards
Hawin
On Wed, Jun 10, 2015 at 1:30 AM, Márton Balassi
wrote:
> Dear Hawin,
>
> You can pass a hdfs path to DataStream's and DataSet's writeAsText and
> writeAsCsv methods.
> I assume that yo
Hi All
Can someone tell me what the best way is to write data to HDFS when Flink
receives data from Kafka?
Big thanks for your example.
Best regards
Hawin
Hey Aljoscha
I also sent an email to Bill asking for the latest test results. From
Bill's email, Apache Spark's performance looks better than Flink's.
What are your thoughts?
Best regards
Hawin
On Tue, Jun 9, 2015 at 2:29 AM, Aljoscha Krettek
wrote:
> Hi,
> we don't
t;
> Regards,
> Aljoscha
>
> On Mon, Jun 8, 2015 at 9:09 PM, Hawin Jiang wrote:
>
>> Hi Aljoscha
>>
>> I want to know what the Apache Flink performance would be if I run the same
>> SQL as below.
>> Do you have any apache flink benchmark information?
>
:03 AM, Aljoscha Krettek
wrote:
> Hi,
> actually, what do you want to know about Flink SQL?
>
> Aljoscha
>
> On Sat, Jun 6, 2015 at 2:22 AM, Hawin Jiang wrote:
> > Thanks all
> >
> > Actually, I want to know more info about Flink SQL and Flink performance
>
Flink deep-dive
Time: 1:45pm - 2:25pm 2015/06/10
Speakers: Kostas Tzoumas and Robert Metzger
Topic: Flexible and Real-time Stream Processing with Apache Flink
Time: 3:10pm - 3:50pm 2015/06/11
Speakers: Kostas Tzoumas and Robert Metzger
Best regards
Hawin
Thanks all
Actually, I want to know more info about Flink SQL and Flink performance
Here is the Spark benchmark. Maybe you already saw it before.
https://amplab.cs.berkeley.edu/benchmark/
Thanks.
Best regards
Hawin
On Fri, Jun 5, 2015 at 1:35 AM, Fabian Hueske wrote:
> If you want
Hi Aljoscha
Thanks for your reply.
Do you have any tips for Flink SQL?
I know that Spark supports the ORC format. How about Flink SQL?
BTW, the TPCHQuery10 example is implemented in 231 lines of code.
How can it be made as simple as possible with Flink?
I am going to use Flink in my future
Hi Chiwan
Thanks for your information. I know Flink is not a DBMS. I want to know what
the Flink way is to select, insert, update, and delete data on HDFS.
@Till
Maybe union is a way to insert data, but I think it may cause some
performance issues.
@Stephan
Thanks for your suggestion. I have ch
at 10:44 AM, Robert Metzger wrote:
> Yes, I've got this message.
>
> On Thu, Jun 4, 2015 at 7:42 PM, Hawin Jiang wrote:
>
>> Hi Admin
>>
>> Please let me know whether you have received my email.
>> Thanks.
>>
>>
>>
>> Best reg
Hi Admin
Please let me know whether you have received my email.
Thanks.
Best regards
Hawin Jiang
On Thu, Jun 4, 2015 at 10:26 AM, wrote:
> Hi! This is the ezmlm program. I'm managing the
> user@flink.apache.org mailing list.
>
> Acknowledgment: I have added the addres