Hi Jon,
You can connect to the Flink Web UI by clicking the ApplicationMaster link in the YARN
ResourceManager UI.
Regards,
Chiwan Park
> On Aug 15, 2016, at 2:24 PM, Jon Yeargers wrote:
>
> Working with a 3 node cluster. Started via YARN.
>
> If I go to port 8080 I see the Tomcat start s
community has a plan [1] to move data structures for streaming
operators to managed memory.
[1]:
https://docs.google.com/document/d/1ExmtVpeVVT3TIhO1JoBpC5JKXm-778DAD7eqw5GANwE/edit#
Regards,
Chiwan Park
> On Jun 22, 2016, at 8:39 PM, Tae-Geon Um wrote:
>
> Thank you for your ans
. `RocksDBStateBackend` uses memory first and can also
spill state to disk.
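For reference, a minimal sketch (not from the original thread) of enabling the RocksDB backend so that state can spill to disk; the checkpoint URI is a placeholder:
```
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// state lives in RocksDB (memory first, spilling to local disk);
// checkpoints are written to the given URI (placeholder path)
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))
```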
Regards,
Chiwan Park
> On Jun 22, 2016, at 3:27 PM, Tae-Geon Um wrote:
>
> I have another question.
> Is the spilling only executed in batch mode?
> What happens in streaming mode?
>
>> On Jun 22, 20
Hi all,
I think we can use the `readFile` and `readFileStream` methods in
`StreamExecutionEnvironment` to create a streaming source from S3, because the data
is stored as files in S3. But I haven’t tested it.
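An untested sketch of the idea (as noted above); the bucket and path are placeholders, and the Hadoop S3 filesystem must be configured:
```
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// each S3 object under the (placeholder) path is read as a file of text lines
val lines: DataStream[String] = env.readTextFile("s3://my-bucket/my-data/")
```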
Regards,
Chiwan Park
> On Jun 3, 2016, at 2:37 PM, Tzu-Li (Gordon) Tai wrote:
>
>
I’ve merged a patch [1] for this issue. Now we can use Option as a key.
[1]:
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=commit;h=c60326f85faaa38bcc359d555cd2d2818ef2e4e7
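A minimal sketch of what the patch enables (the case class and data are made up for illustration):
```
import org.apache.flink.api.scala._

case class Record(key: Option[String], value: Int)

val env = ExecutionEnvironment.getExecutionEnvironment
val data = env.fromElements(Record(Some("a"), 1), Record(Some("a"), 2), Record(None, 3))
// grouping by an Option field now works
data.groupBy(_.key)
  .reduce((l, r) => Record(l.key, l.value + r.value))
  .print()
```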
Regards,
Chiwan Park
> On Apr 5, 2016, at 2:08 PM, Chiwan Park wrote:
>
> I just found that Timur
Hi Lydia,
The `FlinkMLTools.persist` method is used to save ML models and can also be used to save
Matrix and Vector objects. Note that the method uses TypeSerializerOutputFormat,
which is a binary output format.
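A hedged sketch of the call (the data and output path are placeholders):
```
import org.apache.flink.api.scala._
import org.apache.flink.ml.common.FlinkMLTools
import org.apache.flink.ml.math.DenseVector

val env = ExecutionEnvironment.getExecutionEnvironment
val vectors = env.fromElements(DenseVector(1.0, 2.0), DenseVector(3.0, 4.0))
// writes with TypeSerializerOutputFormat (binary) and returns the re-read DataSet
val persisted = FlinkMLTools.persist(vectors, "hdfs:///models/vectors")
```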
Regards,
Chiwan Park
> On May 30, 2016, at 11:31 AM, Lydia Ickler wrote:
>
> H
`.
Regards,
Chiwan Park
> On Apr 28, 2016, at 9:51 AM, nsengupta wrote:
>
> Hello Chiwan,
>
> Yes, that's an oversight on my part. In my hurry, I didn't even try to
> explore the source of that /Exception/. Thanks, again.
>
> However, I still don't know why I am
Hi,
You don’t need to call the execute() method after calling the print() method; print()
triggers the execution. The exception is raised because you call
execute() after print().
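A tiny sketch of the difference:
```
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
env.fromElements(1, 2, 3).map(_ * 2).print() // print() already runs the job
// env.execute() // would fail here: no new sinks were defined after print()
```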
Regards,
Chiwan Park
> On Apr 27, 2016, at 6:35 PM, nsengupta wrote:
>
> Till,
>
> Tha
headers by calling the zipWithIndex method and filtering based on the index.
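A hedged sketch of this approach, assuming the file is first read as plain text lines:
```
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.utils._

val env = ExecutionEnvironment.getExecutionEnvironment
val lines = env.readTextFile("path/to/input.csv")
val withoutHeader = lines.zipWithIndex        // DataSet[(Long, String)]
  .filter { case (index, _) => index > 0 }    // drop the header at index 0
  .map { case (_, line) => line }
```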
Regards,
Chiwan Park
> On Apr 27, 2016, at 10:32 AM, nsengupta wrote:
>
> What is the recommended way of discarding the Column Header(s) from a CSV
> file, if I am using
>
> /enviro
Hi Mich,
You can add external dependencies to the Scala shell using the `--addclasspath` option.
There is a more detailed description in the documentation [1].
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/scala_shell.html#adding-external-dependencies
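For example (the jar path is a placeholder):
```
bin/start-scala-shell.sh local --addclasspath path/to/your-library.jar
```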
Regards,
Chiwan Park
> On Apr
Hi Timur,
Great! A bootstrap action for Flink is good for AWS users. I think the bootstrap
action scripts would be placed in the `flink-contrib` directory.
If you want, one of the people in the Flink PMC can assign FLINK-1337 to you.
Regards,
Chiwan Park
> On Apr 6, 2016, at 3:36 AM, Timur Fayru
I just found that Timur created a JIRA issue for this (FLINK-3698).
Regards,
Chiwan Park
> On Mar 31, 2016, at 7:27 PM, Till Rohrmann wrote:
>
> Actually I think that it’s not correct that the OptionType cannot be used as
> a key type. In fact it is similar to a composite type a
]:
https://ci.apache.org/projects/flink/flink-docs-master/setup/building.html#scala-versions
[2]:
https://cwiki.apache.org/confluence/display/FLINK/Maven+artifact+names+suffixed+with+Scala+version
Regards,
Chiwan Park
> On Apr 5, 2016, at 9:40 AM, Andrew Gaydenko wrote:
>
> Hi!
type of KeySelector is `Int`. `TypeInformation` is not a generic type.
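For illustration, a KeySelector whose return type is a concrete `Int` (the data is made up):
```
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val data = env.fromElements(("a", 1), ("b", 2), ("a", 3))
// the selector returns Int, so Flink derives a concrete, non-generic TypeInformation
val grouped = data.groupBy(_._2)
```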
Regards,
Chiwan Park
> On Mar 31, 2016, at 1:09 AM, Timur Fayruzov wrote:
>
> Thank you for your answers, Chiwan! That would mean that a generic type can't
> be used as a key in general? This is a non-obvi
original value). So there is some performance decrease when you
use a KeySelector.
Regards,
Chiwan Park
> On Mar 31, 2016, at 12:58 AM, Timur Fayruzov wrote:
>
> Thank you Chiwan! Yes, I understand that there are workarounds that don't use
> function argument (and th
ight: MyKey) => (left, right)
}.print()
```
Note that the approach in the example (using hashCode()) cannot be applied to sort
tasks.
Regards,
Chiwan Park
> On Mar 30, 2016, at 2:37 AM, Timur Fayruzov wrote:
>
> There is some more detail to this question that I missed initially. It tu
1) {
(left, right) => 1
}
```
I hope this helps.
[1]:
https://ci.apache.org/projects/flink/flink-docs-master/apis/common/index.html#define-keys-for-tuples
Regards,
Chiwan Park
> On Mar 30, 2016, at 3:54 AM, Timur Fayruzov wrote:
>
> Hello,
>
> Another issue I have encoun
.
Regards,
Chiwan Park
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/iterations.html
> On Mar 27, 2016, at 7:16 AM, Lydia Ickler wrote:
>
> Hi,
>
> I have an issue with a for-loop.
> If I set the maximal iteration number i to more than 3 it gets stu
Hi subash,
You can pass a WriteMode as the second parameter of the write* methods. For example:
```
DataStream<…> myStream = …;
myStream.writeAsCsv("path of output", FileSystem.WriteMode.OVERWRITE);
```
I hope this helps.
Regards,
Chiwan Park
> On Mar 22, 2016, at 8:18 PM, subash basn
object.
Regards,
Chiwan Park
[1]:
https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/sink/RichSinkFunction.html
> On Mar 7, 2016, at 10:08 PM, tole...@toletum.org wrote:
>
> Hi!
> I'm doing a process which reads from kafka,
We’re currently testing a release candidate for 1.0 [1], so you can try the new
features. I’m not sure about the timing because I’m not in the Flink PMC, but I think
we can release within a month.
Regards,
Chiwan Park
[1]:
http://mail-archives.apache.org/mod_mbox/flink-user/201602.mbox/%3CCAGr9p8AkiT0CT_YBwMhHCUYmoC9Stw
Flink blog [3].
Regards,
Chiwan Park
[1]: https://cwiki.apache.org/confluence/display/FLINK/1.0+Release
[2]:
https://cwiki.apache.org/confluence/display/FLINK/Maven+artifact+names+suffixed+with+Scala+version
[3]: http://flink.apache.org/blog/
> On Feb 17, 2016, at 3:34 PM, wangzhijiang
Hi David,
I just downloaded the "flink-1.0-SNAPSHOT-bin-hadoop2_2.11.tgz" but there is no
jar compiled with Scala 2.10. Could you check again?
Regards,
Chiwan Park
> On Feb 10, 2016, at 2:59 AM, David Kim
> wrote:
>
> Hello,
>
> I noticed that the flink binary fo
The documentation I sent is for Flink 1.0.
In Flink 0.10.x, there is no Scala-version suffix on dependencies for Scala 2.10 (e.g.
flink-streaming-java), but there is a suffix on dependencies for Scala 2.11
(e.g. flink-streaming-java_2.11).
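Illustrated in sbt syntax (version numbers are placeholders):
```
// Flink 0.10.x: no suffix for Scala 2.10, "_2.11" suffix for Scala 2.11
libraryDependencies += "org.apache.flink" % "flink-streaming-java" % "0.10.2"
libraryDependencies += "org.apache.flink" % "flink-streaming-java_2.11" % "0.10.2"
```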
Regards,
Chiwan Park
> On Feb 10, 2016, at 1:46 PM, Chiwan P
+names+suffixed+with+Scala+version
Regards,
Chiwan Park
> On Feb 10, 2016, at 9:39 AM, shotte wrote:
>
> Do I need to go to Flink 1.0 or the downgrade to Kafka 0.8 ?
>
>
>
> --
> View this message in context:
> http://apache-flink-user-mailing-list-archive.2336050
I wrote a sample inherited POJO example [1]. The example works with Flink
0.10.1 and 1.0-SNAPSHOT.
[1]: https://gist.github.com/chiwanpark/0389ce946e4fff58d611
Regards,
Chiwan Park
> On Feb 9, 2016, at 8:07 PM, Fabian Hueske wrote:
>
> What is the type of sessionId?
> It must b
Oh, the fields in SourceA have public getters. Do the fields in SourceA have
public setters? SourceA needs public setters for private fields.
Regards,
Chiwan Park
> On Feb 9, 2016, at 7:45 PM, Chiwan Park wrote:
>
> Hi Dominique,
>
> It seems that `SourceA` is not dealt a
Hi Dominique,
It seems that `SourceA` is not treated as a POJO. Are all fields in SourceA public?
There are some requirements for POJO classes [1].
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html#pojos
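For the record, the rules require a public class, a public no-argument constructor, and fields that are public or reachable via getters/setters. With the Scala API, a case class sidesteps these rules entirely (a hedged sketch; the real SourceA is not shown in the thread):
```
// case classes are analyzed natively by the Scala API,
// so no getters/setters are needed
case class SourceA(id: Int, name: String)
```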
Regards,
Chiwan Park
> On Feb 9, 2016, at 7
-3330
Regards,
Chiwan Park
> On Feb 4, 2016, at 5:39 AM, Sourigna Phetsarath
> wrote:
>
> All:
>
> I'm trying to use SparseVectors with FlinkML 0.10.1. It does not seem to be
> working. Here is a UnitTest that I created to recreate the problem:
>
>
>
#transformations
Regards,
Chiwan Park
> On Jan 30, 2016, at 6:43 PM, LINZ, Arnaud wrote:
>
> Hello,
>
> I have a very big dataset A to left join with a dataset B that is half its
> size. That is to say, half of A records will be matched with one record of B,
> and the other
,
Chiwan Park
[1]:
https://hive.apache.org/javadocs/r0.13.1/api/ql/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.html
[2]:
http://stackoverflow.com/questions/22673222/how-do-you-use-orcfile-input-output-format-in-mapreduce
[3]:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/api
There is a JIRA issue (FLINK-1873, [1]) that covers the distributed matrix
implementation.
[1]: https://issues.apache.org/jira/browse/FLINK-1873
Regards,
Chiwan Park
> On Jan 27, 2016, at 5:21 PM, Chiwan Park wrote:
>
> I hope the distributed matrix and vector implementation
I hope to see a distributed matrix and vector implementation in Flink. :)
Regards,
Chiwan Park
> On Jan 27, 2016, at 2:29 AM, Lydia Ickler wrote:
>
> Hi Till,
>
> maybe I will do that :)
> If I have some other questions I will let you know!
>
> Best regards,
> Lydia
SelectNearestCenter class, the euclideanDistance method is used to
measure the distance between points. For your implementation, you have to
substitute the type with your data type (it can be your custom class or a
Flink-provided Tuple) and change the distance metric for your data.
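For example, a made-up point type with a swapped-in metric (both names are hypothetical):
```
case class MyPoint(x: Double, y: Double)

// replaces euclideanDistance from the example with a metric of your choice
def manhattanDistance(a: MyPoint, b: MyPoint): Double =
  math.abs(a.x - b.x) + math.abs(a.y - b.y)
```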
Regards,
Chiwan Park
> On Jan
Thanks for sharing, Ritesh!
Regards,
Chiwan Park
> On Jan 21, 2016, at 12:28 AM, Ritesh Kumar Singh
> wrote:
>
> Thanks for the update Robert, I tried it out and it works fine for
> scala_2.11.4 version.
> I've made a docker image of the same and put it up on the hub
> Saliya Ekanayake
> Ph.D. Candidate | Research Assistant
> School of Informatics and Computing | Digital Science Center
> Indiana University, Bloomington
> Cell 812-391-4914
> http://saliya.org
>
Regards,
Chiwan Park
me few pointers.
>
> Thanks
> Ashutosh
Regards,
Chiwan Park
include the tokenize function into the closures of flatMap functions,
> the Job works fine; see example TFIDFApp.
> To avoid this unexpected behavior I don't use the Scala App trait, see
> TFIDF, but why?
>
>
> Thanks,
> Andrea
>
Regards,
Chiwan Park
[INFO] flink-runtime .. FAILURE [01:23
> min]
> [INFO] flink-optimizer SKIPPED
>
>
> Any workaround for scala_2.11.4 or do I have to switch back to scala_2.10.4 ?
>
> Thanks,
> Ritesh Kumar Singh,
> https://riteshtoday.wordpress.com/
>
Regards,
Chiwan Park
n wrote:
> Hi!
>
> I think we missed updating the variable "version" in the "docs/_config.yml"
> for the 0.10.1 release.
>
> Would be good to update it and push a new version of the docs.
>
> Greetings,
> Stephan
>
> On Fri, Jan
updated to announce the latest stable
version to newcomers.
Is there any problem with updating the doc?
Regards,
Chiwan Park
Great! Thanks for addressing!
> On Jan 6, 2016, at 5:51 PM, Stephan Ewen wrote:
>
> At a first look, I think that "flink-runtime" does not need Apache Httpclient
> at all. I'll try to simply remove that dependency...
>
> On Wed, Jan 6, 2016 at 7:14 AM, Chiw
in.
>
> I have not tested this behavior extensively so far. Notably, I was not able
> to reproduce it by just starting a session and then ending it again right
> away without looking at the JobManager web interface. Maybe this produces
> some kind of lag as far as YAR
al Cheng Kung University, Graduate Institute of Computer and
> Communication Engineering
> High Performance Parallel and Distributed Systems Laboratory (HPDS Lab)
>
> National Cheng Kung University, Engineering Science Dpt.
>
> Contacts
> tzuli...@ee.ncku.edu.tw
> http://tzulitai.ee.ncku.edu.tw
> Linkedin: tw.linkedin.com/in/tzulitai
> +886981916890
>
Regards,
Chiwan Park
because of a lack of permission. How can I solve this problem?
Regards,
Chiwan Park
mory available in all the slave nodes ? Will it
> serialize the memory in the disks of respective slave nodes by default ?
>
> Regards,
> Sourav
>
>
> On Mon, Dec 28, 2015 at 4:13 PM, Chiwan Park wrote:
> Hi Filip,
>
> Spark executes job also lazily. But It is slight
> this year:
>
> https://flink.apache.org/news/2015/12/18/a-year-in-review.html
>
> Happy New Year everyone and thanks for being part of this great community!
>
>
> Thanks,
>
> - Henry
Regards,
Chiwan Park
>
> What is the equivalent of Spark's RDD in Flink? In my understanding the
> closest thing is the DataSet API. But I wanted to reconfirm.
>
> Also using DataSet API if I ingest a large volume of data (val lines :
> DataSet[String] = env.readTextFile()), which may not
> fit in single slave node, will that data get automatically distributed in the
> memory of other slave nodes ?
>
> Regards,
> Sourav
>
Regards,
Chiwan Park
g is right? because they use the JVM's resources for
> memory and CPU.
>
> Is there any linux application you use for metrics?
>
> Best,
> Phil.
Regards,
Chiwan Park
,
> classOf[LongWritable],
> classOf[Text],
> new JobConf()
> ))
>
> The Java version is very similar.
>
> Note: Flink has wrappers for both MR APIs: mapred and mapreduce.
>
> Cheers,
> Fabian
>
> 2015-11-24 19:36 GMT+01:00 Chiwan Park :
> I’m
>
> I completely missed this, thanks Chiwan. Can these be used with DataStreams
> as well as DataSets?
>
> On Tue, Nov 24, 2015 at 10:06 AM, Chiwan Park wrote:
> Hi Nick,
>
> You can use Hadoop Input/Output Format without modification! Please check the
> documentation
>
> Is it possible to use existing Hadoop Input and OutputFormats with Flink?
> There's a lot of existing code that conforms to these interfaces, seems a
> shame to have to re-implement it all. Perhaps some adapter shim..?
>
> Thanks,
> Nick
Regards,
Chiwan Park
Oh, sorry for the wrong information.
I misunderstood the `jarFiles` parameter.
Regards,
Chiwan Park
> On Sep 25, 2015, at 5:27 PM, Fabian Hueske wrote:
>
> Hi Deng Jie,
>
> your Flink program needs to be packaged into a JAR file.
> The Flink quickstart Maven archetype
Hi Deng,
It sounds weird. In the code [1], the `jarFiles` parameter is defined as a varargs
parameter, so we can omit it.
Which version of Flink are you using?
Regards,
Chiwan Park
[1]
https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java
Hi Deng,
The jarFiles parameter of `createRemoteEnvironment` specifies the paths of your
custom library jars. If you don’t need a custom library, you can omit the
parameter.
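A small sketch (host, port, and jar path are placeholders):
```
import org.apache.flink.api.scala.ExecutionEnvironment

// with a custom library jar
val env1 = ExecutionEnvironment.createRemoteEnvironment("jobmanager-host", 6123, "path/to/your-lib.jar")
// without one: the varargs parameter is simply omitted
val env2 = ExecutionEnvironment.createRemoteEnvironment("jobmanager-host", 6123)
```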
Regards,
Chiwan Park
> On Sep 25, 2015, at 10:48 AM, Deng Jie wrote:
>
> Dear Flink org,i have same ques
Hi Felix,
You can change the listening port of the jobmanager web frontend by setting
`jobmanager.web.port` in the configuration (conf/flink-conf.yaml).
I attached a link to the documentation [1] about this.
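For example, in conf/flink-conf.yaml (8082 is an arbitrary example port):
```
jobmanager.web.port: 8082
```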
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink-docs-release-0.9/setup
[3] https://github.com/apache/flink/pull/1134
Regards,
Chiwan Park
> On Sep 17, 2015, at 1:33 AM, Chiwan Park wrote:
>
> It seems like a bug of CsvInputFormat. I succeed in reproducing in my local
> machine.
> I will create a JIRA issue for this and submit a patch to fix it.
>
It seems like a bug in CsvInputFormat. I succeeded in reproducing it on my local
machine.
I will create a JIRA issue for this and submit a patch to fix it.
Which version of Flink are you using?
Regards,
Chiwan Park
> On Sep 17, 2015, at 12:20 AM, Giacomo Licari wrote:
>
> Yes I did.
>
>
Hi Giacomo,
Did you create constructors without arguments in both the base class and the derived
class?
If you did, it seems like a bug.
Regards,
Chiwan Park
> On Sep 17, 2015, at 12:04 AM, Giacomo Licari wrote:
>
> Hi Chiwan,
> I followed instructions in documentation.
> I have a si
Hi Giacomo,
You should set your fields as public. If you set a field as private or
protected, the class must provide a getter and setter to be treated as a POJO.
The documentation on the homepage [1] may be helpful.
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink
you want to use a broadcast variable. You
can do the same thing with filter and join operations. Here is my implementation
[1].
Regards,
Chiwan Park
[1] https://gist.github.com/chiwanpark/a0b0269c9a9b058d15d3
> On Sep 4, 2015, at 3:51 AM, hagersaleh wrote:
>
> Hi Chiwan Park
> not und
+1 for dropping Hadoop 2.2.0
Regards,
Chiwan Park
> On Sep 4, 2015, at 5:58 AM, Ufuk Celebi wrote:
>
> +1 to what Robert said.
>
> On Thursday, September 3, 2015, Robert Metzger wrote:
> I think most cloud providers moved beyond Hadoop 2.2.0.
> Google's Click-To-De
Hi hagersaleh,
Sorry for the late reply.
I think using an external system could be a solution for large-scale data. To
use an external system, you have to implement rich functions such as
RichFilterFunction, RichMapFunction, etc.
Regards,
Chiwan Park
> On Aug 30, 2015, at 1:30
Hi Michele,
We’re going through the release process for 0.9.1. Ufuk Celebi will start the vote for
the 0.9.1 release soon.
Regards,
Chiwan Park
> On Aug 27, 2015, at 6:49 PM, Michele Bertoni
> wrote:
>
> Hi everybody,
> I am still waiting for version 0.9.1 to solve this problem, any idea on whe
Additionally, if you have any questions about contributing, please send a mail
to the dev mailing list.
Regards,
Chiwan Park
> On Aug 27, 2015, at 2:11 PM, Chiwan Park wrote:
>
> Hi Naveen,
>
> There is a guide document [1] about contribution in homepage. Please read
> first b
`, or `easyfix`.
Happy contributing!
Regards,
Chiwan Park
[1] http://flink.apache.org/how-to-contribute.html
[2] http://flink.apache.org/coding-guidelines.html
[3]
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20%3D%20starter
Hi Hermann,
On page 16 of Slim’s slides [1], there is a pre-installed virtual machine based
on VMWare. I haven’t run Flink on that machine, but maybe it works.
Regards,
Chiwan Park
[1]
http://www.slideshare.net/sbaltagi/apache-flinkcrashcoursebyslimbaltagiandsrinipalthepu
> On Aug 19, 2
documentation. The documentation will help you understand the
structure of a Flink program.
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink-docs-release-0.9/apis/programming_guide.html#data-sinks
[2]
https://ci.apache.org/projects/flink/flink-docs-release-0.9/apis
You can increase Flink managed memory by increasing the TaskManager JVM heap
(taskmanager.heap.mb) in flink-conf.yaml.
There is an explanation of the options in the Flink documentation [1].
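For example, in flink-conf.yaml (4096 MB is an arbitrary example value):
```
taskmanager.heap.mb: 4096
```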
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#common-options
want to know more details of the key-specifying method
in Flink, please see the documentation [2] on the Flink homepage.
Regards,
Chiwan Park
[1] https://gist.github.com/chiwanpark/e71d27cc8edae8bc7298
[2]
https://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#specifying-keys
Hi, if you use the `partitionCustom()` method [1] with a custom partitioner, you can
guarantee the ordering of partitions.
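A hedged sketch with a modulo partitioner on field 0 (the data and scheme are made up):
```
import org.apache.flink.api.common.functions.Partitioner
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val data = env.fromElements((1, "a"), (2, "b"), (3, "c"))

val partitioned = data.partitionCustom(new Partitioner[Int] {
  // each key is routed to a fixed partition
  override def partition(key: Int, numPartitions: Int): Int = key % numPartitions
}, 0)
```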
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/java/DataSet.html#partitionCustom
If you are looking for a graphical interface for data analytics like Jupyter, you
should look at Apache Zeppelin [1].
Apache Zeppelin is a web-based notebook. It supports Scala, Spark, and Flink.
Regards,
Chiwan Park
[1] https://zeppelin.incubator.apache.org
> On Jul 13, 2015, at 9:23 PM, T
Because there is no default implementation like `forany` in Scala, I use the `forall`
method. Note that ANY (condition) is equivalent to NOT ALL (NOT condition).
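A plain-Scala illustration of the equivalence:
```
val xs = Seq(1, 5, 10)
// ANY(x > 8) expressed as NOT ALL(NOT (x > 8))
val anyGreaterThanEight = !xs.forall(x => !(x > 8)) // true
```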
Regards,
Chiwan Park
> On Jul 12, 2015, at 5:39 AM, hagersaleh wrote:
>
> why in this use ! and <= in handle Any
> over
Hi, you should use RichMapFunction, not MapFunction. The difference between
RichMapFunction and MapFunction is described in the Flink documentation [1].
Regards,
Chiwan Park
[1]
https://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#rich-functions
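A minimal sketch of a RichMapFunction (the logic is made up):
```
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

class DoubleMapper extends RichMapFunction[Int, Int] {
  // open() and getRuntimeContext are only available on rich functions
  override def open(parameters: Configuration): Unit = {
    // e.g. getRuntimeContext.getBroadcastVariable("someBroadcastSet")
  }
  override def map(value: Int): Int = value * 2
}
```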
> On Jul 12, 2015, a
the functions.
I think it would be good for you to read the Batch API section of the Flink documentation.
If you have a question about the example, please reply to the user mailing list.
Regards,
Chiwan Park
[1] https://gist.github.com/chiwanpark/5e2a6ac00b7e0bf85444
[2]
https://ci.apache.org/projects
I found that the patch has been merged upstream [1]. :)
Regards,
Chiwan Park
[1] https://github.com/apache/flink/pull/855
> On Jul 3, 2015, at 5:26 PM, Welly Tambunan wrote:
>
> Thanks Chiwan,
>
>
> Glad to hear that.
>
>
> Cheers
>
> On Fri, Ju
Hi tambunanw,
The issue is already known and we’ll patch it soon [1].
In the next release (maybe 0.9.1), the problem will be solved.
Regards,
Chiwan Park
[1] https://issues.apache.org/jira/browse/FLINK-2257
> On Jul 3, 2015, at 4:57 PM, tambunanw wrote:
>
> Hi All,
>
> I'm t
@Alexander I’m happy to hear that you want to help. I’d really
appreciate it. :)
Regards,
Chiwan Park
> On Jul 2, 2015, at 2:57 PM, Alexander Alexandrov
> wrote:
>
> @Chiwan: let me know if you need hands-on support. I'll be more than happy to
> help (as m
Okay, I will apply this suggestion.
Regards,
Chiwan Park
> On Jul 1, 2015, at 5:41 PM, Ufuk Celebi wrote:
>
>
> On 01 Jul 2015, at 10:34, Stephan Ewen wrote:
>
>> +1, like that approach
>
> +1
>
> I like that this is not breaking for non-Scala users :-)
Hi,
We already know about this issue. There are some problems with the Apache infrastructure,
and the Infra team is working on them. You can see the progress via a blog post [1].
It will be okay soon.
Regards,
Chiwan Park
[1] https://blogs.apache.org/infra/entry/buildbot_master_currently_off_line
> On Jun
It represents the folder containing the hadoop config files. :)
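For example, in flink-conf.yaml (the path is a placeholder):
```
fs.hdfs.hadoopconf: /etc/hadoop/conf
```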
Regards,
Chiwan Park
> On Jun 25, 2015, at 10:07 PM, Flavio Pompermaier wrote:
>
> fs.hdfs.hadoopconf represents the folder containing the hadoop config files
> (*-site.xml) or just one specific hadoop config fil
The how-to-contribute and coding guidelines are duplicated on the web site
and in the documentation.
I think this duplication is not needed; we should merge them.
Regards,
Chiwan Park
> On Jun 25, 2015, at 9:01 PM, Maximilian Michels wrote:
>
> Thanks. Fixed. Actually, th
I’m interested in working on this. :) I’ll assign it to myself.
Regards,
Chiwan Park
> On Jun 21, 2015, at 8:22 AM, Robert Metzger wrote:
>
> Okay, it seems like we have consensus on this. Who is interested in working
> on this? https://issues.apache.org/jira/browse/FLINK-2200
>
-docs-master/internals/logging.html
[3] http://stackoverflow.com/a/3810936
Regards,
Chiwan Park
> On Jun 19, 2015, at 8:05 PM, Juan Fumero
> wrote:
>
> Hi,
> is there any option (from API level) to redirect the log messages to a
> file instead of stdout?
>
> Than
/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftasks-java-local-configuration.htm
[2]
https://www.jetbrains.com/idea/help/creating-and-editing-run-debug-configurations.html
Regards,
Chiwan Park
> On Jun 17, 2015, at 2:01 PM, Sebastian wrote:
>
> Hi,
>
> Flink has memory
, …, etc. with
version variations.
So we can reduce the number of deployed modules.
Regards,
Chiwan Park
> On Jun 13, 2015, at 9:17 AM, Robert Metzger wrote:
>
> I agree that we should ship a 2.11 build of Flink if downstream projects need
> that.
>
> The only thing that we s
ey2, (values, …), (key3,
(values, …), ...
Regards,
Chiwan Park
> On Jun 11, 2015, at 11:01 PM, Maximilian Alber
> wrote:
>
> Hi Flinksters,
>
> I tried to call collect on a grouped data set, somehow it did not work. Is
> this intended? If yes, why?
>
> Code snip
But I think uploading the Flink API built with Scala 2.11 to the Maven repository is a
nice idea.
Could you create a JIRA issue?
Regards,
Chiwan Park
> On Jun 10, 2015, at 10:23 PM, Chiwan Park wrote:
>
> No. Currently, there are no Flink binaries with scala 2.11 which are
> downloadable.
No. Currently, there are no downloadable Flink binaries built with Scala 2.11.
Regards,
Chiwan Park
> On Jun 10, 2015, at 10:18 PM, Philipp Goetze
> wrote:
>
> Thank you Chiwan!
>
> I did not know the master has a 2.11 profile.
>
> But there is no pre-built
Hi. You can build Flink with Scala 2.11 using the scala-2.11 profile in the master
branch.
`mvn clean install -DskipTests -P \!scala-2.10,scala-2.11` command builds Flink
with Scala 2.11.
Regards,
Chiwan Park
> On Jun 10, 2015, at 9:56 PM, Flavio Pompermaier wrote:
>
> Nice!
>
> On 10
)
// updating
// UPDATE data SET value1 = "updated", value2 = "data" WHERE id = 1,
// but the DataSet data itself is not changed.
val updatedData = data.map { x => if (x.id == 1) MyType(x.id, "updated", "data") else x }
Regards,
Chiwan Park
> On Jun 5, 2015, at 9:22 AM, hawin wrote:
>
> Hi Chiwan
>
Hi. We already received your questions some hours ago. Here are our answers.
[1]
Regards,
Chiwan Park
[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Apache-Flink-transactions-tc1457.html
> On Jun 5, 2015, at 2:49 AM, Hawin Jiang wrote:
>
> Cool
>
Hi.
Flink is not a DBMS; there is no equivalent of insert, update, or remove operations.
But you can use the map [1] or filter [2] operations to create a modified dataset.
I recommend some slides [3][4] to understand Flink concepts.
Regards,
Chiwan Park
[1]
http://ci.apache.org/projects/flink/flink-docs
There is a terasort implementation using the deprecated API:
https://github.com/apache/flink/blob/master/flink-tests/src/test/java/org/apache/flink/test/recordJobs/sort/TeraSort.java
AFAIK, there is no implementation using the current API.
Regards,
Chiwan Park
> On Jun 4, 2015, at 12:17 AM, Bill Spa
Note that sortPartition is implemented in 0.9. The following link shows an example
of sortPartition:
http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#sort-partition
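A small sketch of the call (the data is made up):
```
import org.apache.flink.api.common.operators.Order
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val data = env.fromElements((3, "c"), (1, "a"), (2, "b"))
// sorts within each partition only; there is no global order
val sortedPartitions = data.sortPartition(0, Order.ASCENDING)
```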
Regards,
Chiwan Park
> On Jun 2, 2015, at 5:51 PM, hagersaleh wrote:
>
> I want ex
sorted = customers.groupBy(2).sortGroup(0, Order.DESCENDING).first(10);
sorted.print();
Note that Flink does not support global sort (FLINK-598); it currently supports only
local sort. The sortGroup API sorts within each group.
Regards,
Chiwan Park
> On Jun 2, 2
Thanks :)
Regards,
Chiwan Park
> On May 29, 2015, at 10:05 PM, Márton Balassi wrote:
>
> Thanks, Max.
>
> On Fri, May 29, 2015 at 3:04 PM, Maximilian Michels <mailto:m...@apache.org>> wrote:
> Fixed it on the master.
>
> Problem were
I fetched the master branch and ran it again, but I got the same error.
It seems that the problem is related to javadoc. Till’s fix is related to
renaming in the flink-ml package.
Regards,
Chiwan Park
> On May 29, 2015, at 5:39 PM, Stephan Ewen wrote:
>
> A bug sneaked in...
>
> I
8.0_45, Maven 3.3.1.
How can I solve this problem?
Regards,
Chiwan Park