Hi Aljoscha,
Could you please point me to the JIRA tickets? If you could provide some
guidance on how to resolve them, I will work on them and open a
pull request.
Thanks,
Shiti
On Thu, Jun 11, 2015 at 11:31 AM, Aljoscha Krettek
wrote:
> Hi,
> yes, I think the problem is that the RowSerializ
Hi Robert
Congrats on your presentation. I have downloaded your slides.
Hopefully Flink can move forward quickly.
Best regards
Hawin
On Wed, Jun 10, 2015 at 10:14 PM, Robert Metzger
wrote:
> Hi Hawin,
>
> here are the slides:
> http://www.slideshare.net/robertmetzger1/apache-flink-deepdive-
Hi,
yes, I think the problem is that the RowSerializer does not support
null values. I think we can add support for this; I will open a JIRA issue.
Another problem I then see is that the aggregations cannot properly deal
with null values. That would need separate support.
Regards,
Aljoscha
On T
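A common way to add null support to a per-field serializer is to prepend a flag byte before each value. The sketch below illustrates that pattern with plain java.io streams; it is not Flink's actual RowSerializer, and the class name is ours.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch (not Flink's RowSerializer): a boolean flag is written
// before each field, so a null value can be represented and restored.
public class NullableStringSerializer {

    public static void serialize(String value, DataOutputStream out) throws IOException {
        if (value == null) {
            out.writeBoolean(false);   // null marker, no payload follows
        } else {
            out.writeBoolean(true);    // non-null marker
            out.writeUTF(value);       // payload
        }
    }

    public static String deserialize(DataInputStream in) throws IOException {
        return in.readBoolean() ? in.readUTF() : null;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        serialize("hello", out);
        serialize(null, out);
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(deserialize(in)); // hello
        System.out.println(deserialize(in)); // null
    }
}
```

The same flag-byte trick generalizes to any field type; the cost is one extra byte per field.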
Hi Hawin,
here are the slides:
http://www.slideshare.net/robertmetzger1/apache-flink-deepdive-hadoop-summit-2015-in-san-jose-ca
Thank you for the wishes. The talk was very well received.
On Wed, Jun 10, 2015 at 10:41 AM, Hawin Jiang wrote:
> Hi Michels
>
> I don't think you can watch them onli
Hi,
In our project, we are using the Flink Table API and are facing the
following issues:
We load data from a CSV file and create a DataSet[Row]. The CSV file can
also have invalid entries in some of the fields, which we replace with null
when building the DataSet[Row].
This DataSet[Row] is later
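The replacement step described above could look like the following minimal sketch. The class and method names are ours, not Table API calls; it only illustrates turning unparsable fields into null before the rows are handed to Flink.

```java
import java.util.Arrays;

// Hypothetical helper (names are ours, not from the Flink Table API): parse a
// CSV line into Integer fields, substituting null for invalid entries.
public class CsvNullParser {

    public static Integer[] parseRow(String line) {
        return Arrays.stream(line.split(","))
                .map(field -> {
                    try {
                        return Integer.valueOf(field.trim());
                    } catch (NumberFormatException e) {
                        return null; // invalid entry becomes null
                    }
                })
                .toArray(Integer[]::new);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseRow("1,abc,3"))); // [1, null, 3]
    }
}
```

Rows built this way are exactly what triggers the serializer and aggregation problems discussed in this thread.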
Hi Michels
I don't think you can watch them online now.
Can someone share their presentations or feedback with us?
Thanks
Best regards
Hawin
On Mon, Jun 8, 2015 at 2:34 AM, Maximilian Michels wrote:
> Thank you for your kind wishes :) Good luck from me as well!
>
> I was just wondering, is i
Thanks Marton
I will use this code to implement my testing.
Best regards
Hawin
On Wed, Jun 10, 2015 at 1:30 AM, Márton Balassi
wrote:
> Dear Hawin,
>
> You can pass an HDFS path to DataStream's and DataSet's writeAsText and
> writeAsCsv methods.
> I assume that you are running a Streaming topo
Hi Max,
I think the reason is that the flink-ml pom declares a dependency on an
artifact with artifactId breeze_${scala.binary.version}. The variable
scala.binary.version is defined in the parent pom and is not substituted when
flink-ml is installed. Therefore Gradle tries to find a dependency with t
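One workaround in that situation is to reference the Scala-suffixed artifact explicitly, so Gradle never sees the unsubstituted property. The snippet below is a sketch; the versions shown are illustrative, not prescribed by this thread.

```groovy
// Sketch of a Gradle workaround (versions are illustrative): spell out the
// Scala binary suffix instead of relying on ${scala.binary.version}, which
// Maven leaves unsubstituted in the installed flink-ml pom.
dependencies {
    compile 'org.apache.flink:flink-ml:0.9.0'
    // pin breeze with its Scala suffix written out explicitly
    compile 'org.scalanlp:breeze_2.10:0.11.2'
}
```

Excluding the transitive breeze dependency from flink-ml and re-adding it with the explicit suffix is the same idea expressed through Gradle's exclusion mechanism.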
Please do ping this list if you encounter any problems with Flink during
your project (you have done so already :-), but also if you find that the
Flink API needs additions to map Pig well onto Flink.
On Wed, Jun 10, 2015 at 3:47 PM, Philipp Goetze <
philipp.goe...@tu-ilmenau.de> wrote:
> Done. Can
We have been working on an adaptive load balancing strategy that would
address exactly the issue you point out.
FLINK-1725 is the starting point for the integration.
Cheers,
--
Gianmarco
On 9 June 2015 at 20:31, Fabian Hueske wrote:
> Hi Sebastian,
>
> I agree, shuffling only specific elements
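One common adaptive load-balancing strategy is "power of two choices": each key may be routed to either of two candidate channels, and the sender picks the currently less loaded one. Whether FLINK-1725 uses exactly this scheme is an assumption here; the sketch below only illustrates the general idea, and all names are ours.

```java
// Illustrative sketch of power-of-two-choices routing (not Flink code, and
// not necessarily what FLINK-1725 implements): each key hashes to two
// candidate channels and the less loaded one is chosen.
public class TwoChoicesBalancer {

    private final long[] load; // elements sent per channel so far

    public TwoChoicesBalancer(int channels) {
        this.load = new long[channels];
    }

    public int pickChannel(String key) {
        int c1 = Math.floorMod(key.hashCode(), load.length);
        int c2 = Math.floorMod(key.hashCode() * 31 + 17, load.length);
        int chosen = load[c1] <= load[c2] ? c1 : c2;
        load[chosen]++; // track load so future picks adapt
        return chosen;
    }
}
```

Compared with plain hash partitioning, this spreads heavy keys across two channels and smooths out skew at the cost of a later merge step.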
Done. Can be found here: https://issues.apache.org/jira/browse/FLINK-2200
Best Regards,
Philipp
On 10.06.2015 15:29, Chiwan Park wrote:
But I think uploading the Flink API built with Scala 2.11 to the Maven
repository is a nice idea.
Could you create a JIRA issue?
Regards,
Chiwan Park
On Jun 10, 2015, at
Hi Flinksters,
I would like to test FlinkML. Unfortunately, I get an error when including
it in my project. It might be my mistake, as I'm not experienced with Gradle,
but even with Google I got none the wiser.
My build.gradle looks as follows:
apply plugin: 'java'
apply plugin: 'scala'
//sourceCompatibilit
But I think uploading the Flink API built with Scala 2.11 to the Maven
repository is a nice idea.
Could you create a JIRA issue?
Regards,
Chiwan Park
> On Jun 10, 2015, at 10:23 PM, Chiwan Park wrote:
>
> No. Currently, there are no downloadable Flink binaries built with Scala
> 2.11.
>
> Regards,
> Chi
No. Currently, there are no downloadable Flink binaries built with Scala
2.11.
Regards,
Chiwan Park
> On Jun 10, 2015, at 10:18 PM, Philipp Goetze
> wrote:
>
> Thank you Chiwan!
>
> I did not know the master has a 2.11 profile.
>
> But there is no pre-built Flink with 2.11, which I coul
No, there are no Scala 2.11 Flink binaries that you can download. You have
to build them yourself.
Cheers,
Till
On Wed, Jun 10, 2015 at 3:19 PM Philipp Goetze
wrote:
> Thank you Chiwan!
>
> I did not know the master has a 2.11 profile.
>
> But there is no pre-built Flink with 2.11, which I could
Thank you Chiwan!
I did not know the master has a 2.11 profile.
But there is no pre-built Flink with 2.11 which I could refer to in sbt
or Maven, is there?
Best Regards,
Philipp
On 10.06.2015 15:03, Chiwan Park wrote:
Hi. You can build Flink with Scala 2.11 with scala-2.11 profile in master
Hi. You can build Flink with Scala 2.11 using the scala-2.11 profile in the
master branch.
The command `mvn clean install -DskipTests -P \!scala-2.10,scala-2.11` builds
Flink with Scala 2.11.
Regards,
Chiwan Park
> On Jun 10, 2015, at 9:56 PM, Flavio Pompermaier wrote:
>
> Nice!
>
> On 10 Jun 2015 14:49,
Nice!
On 10 Jun 2015 14:49, "Philipp Goetze" wrote:
> Hi community!
>
> We started a new project called Piglet (https://github.com/ksattler/piglet).
> For that we use, among others, Flink as a backend. The project is based on Scala
> 2.11. Thus we need a 2.11 build of Flink.
>
> Until now we used the 2.
Hi community!
We started a new project called Piglet (https://github.com/ksattler/piglet).
For that we use, among others, Flink as a backend. The project is based on Scala
2.11. Thus we need a 2.11 build of Flink.
Until now we used the 2.11 branch of the stratosphere project and built
Flink ourselves.
Dear Hawin,
You can pass an HDFS path to DataStream's and DataSet's writeAsText and
writeAsCsv methods.
I assume that you are running a Streaming topology, because your source is
Kafka, so it would look like the following:
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
Hi All
Can someone tell me what is the best way to write data to HDFS when Flink
receives data from Kafka?
Big thanks for your example.
Best regards
Hawin
Comparing the performance of systems is not easy, and the results depend on
many things, such as the configuration, the data, and the jobs.
That being said, the numbers that Bill reported for WordCount make
perfect sense, as Stephan pointed out in his response (Flink does not
feature hash-based aggregation).
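For readers unfamiliar with the term: hash-based aggregation keeps a running result per key in a hash table, so the input never needs to be sorted. The sketch below illustrates the technique on WordCount in plain Java; it is an illustration of the concept, not Flink code.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of hash-based aggregation (the technique discussed in
// this thread, not Flink code): counts are updated in place in a hash table,
// avoiding the sort pass that a sort-based combiner would need.
public class HashAggregation {

    public static Map<String, Integer> wordCount(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum); // update the running count in place
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(new String[]{"a", "b", "a"})); // {a=2, b=1}
    }
}
```

A sort-based strategy instead sorts records by key and combines adjacent equal keys; it spills gracefully to disk, which is one reason a system might prefer it, but on low-cardinality workloads like WordCount the hash table is typically faster.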