@Chiwan: let me know if you need hands-on support. I'll be more than happy
to help (as my downstream project is using Scala 2.11).
2015-07-01 17:43 GMT+02:00 Chiwan Park :
> Okay, I will apply this suggestion.
>
> Regards,
> Chiwan Park
>
> > On Jul 1, 2015, at 5:41 PM, Ufuk Celebi wrote:
> >
>
Okay, I will apply this suggestion.
Regards,
Chiwan Park
> On Jul 1, 2015, at 5:41 PM, Ufuk Celebi wrote:
>
>
> On 01 Jul 2015, at 10:34, Stephan Ewen wrote:
>
>> +1, like that approach
>
> +1
>
> I like that this is not breaking for non-Scala users :-)
Hi Arnaud!
There is a pending issue and pull request that is adding a "cancel()" call
to the command line interface.
https://github.com/apache/flink/pull/750
It would be possible to extend that such that the driver can also cancel
the program.
Greetings,
Stephan
On Wed, Jul 1, 2015 at 3:33 PM
Hello,
I really looked in the documentation but unfortunately I could not find the
answer: how do you cancel your data SourceFunction from your “driver” code
(i.e., from a monitoring thread that can initiate a proper shutdown)? Calling
“cancel()” on the object passed to addSource() has no effect.
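For reference, a minimal sketch of the usual cancellation pattern on the
function side (assuming the run()/cancel() contract of the streaming
SourceFunction interface; class and field names are illustrative). Note that
calling cancel() on the instance passed to addSource() in the driver only
touches the local copy: the function is serialized and shipped to the
cluster, which is why cancellation has to go through the JobManager (e.g.
the command line interface mentioned above).

import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Sketch: a cancellable source. cancel() may be invoked from a different
// thread than run(), so the flag must be volatile and checked every loop.
public class MonitorableSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            ctx.collect("tick");  // emit records while not cancelled
            Thread.sleep(100);
        }
    }

    @Override
    public void cancel() {
        running = false;          // run() will leave its loop
    }
}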
Do you also have the rest of the code? It would be helpful for finding out
why it's not working.
Cheers,
Till
On Wed, Jul 1, 2015 at 1:31 PM, Pa Rö
wrote:
> Now I have implemented a time logger in the open and close methods; it
> works fine. But I try to initialize the Flink class with a para
Now I have implemented a time logger in the open and close methods; it
works fine. But when I try to initialize the Flink class with a parameter (a
counter of the benchmark round), it is always initialized with 0, and I get
no exception. What am I doing wrong?
My benchmark class:
public class FlinkBenchmarkLaunche
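A hedged guess at the cause, with a sketch: Flink serializes function
objects in the driver and sends copies to the workers, so a value survives
only if it is stored in a non-static, non-transient instance field before
the job is executed. A static field (or one set only after execution has
started) will read as 0 on the cluster. The class and names below are
illustrative, not taken from the benchmark code.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Sketch: passing a benchmark-round counter into a Flink function. The
// value is captured in an instance field at construction time and shipped
// with the serialized function; a static field would NOT be shipped and
// would show its default value 0 on the workers.
public class RoundTagger extends RichMapFunction<String, String> {

    private final int round;

    public RoundTagger(int round) {
        this.round = round;
    }

    @Override
    public void open(Configuration parameters) {
        System.out.println("benchmark round: " + round); // printed on the worker
    }

    @Override
    public String map(String value) {
        return round + ";" + value;
    }
}

// usage: data.map(new RoundTagger(currentRound))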
Okay. We filter files starting with underscores because that is the same
behavior as Hadoop.
Hadoop always creates some underscore files, so when reading the results of
a MapReduce job, Flink would otherwise read these files too.
On Wed, Jul 1, 2015 at 12:15 PM, Ronny Bräunlich
wrote:
> Hi Robert,
>
> just ig
Hi Robert,
just ignore my previous question.
My files started with underscores and I just found out that FileInputFormat
does filter out underscores in acceptFile().
Cheers,
Ronny
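For readers who do want those files: a hedged sketch of how that filter
could be sidestepped by subclassing, assuming acceptFile(FileStatus) is the
protected hook mentioned above (the class name is made up).

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.Path;

// Sketch: a TextInputFormat that does not filter out files whose names
// start with '_' (the default acceptFile() skips them, mirroring Hadoop).
public class IncludeUnderscoresTextInputFormat extends TextInputFormat {

    public IncludeUnderscoresTextInputFormat(Path filePath) {
        super(filePath);
    }

    @Override
    protected boolean acceptFile(FileStatus fileStatus) {
        return true; // accept every enumerated file, underscores included
    }
}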
On 01.07.2015, at 11:35, Robert Metzger wrote:
> Hi Ronny,
>
> check out this answer on SO:
> http://stackoverfl
Hi Robert,
thank you for your quick answer.
Just one additional question:
When I use the ExecutionEnvironment like this: DataSource<String> files =
env.readTextFile("file:///Users/me/path/to/file/dir");
Shouldn’t it read all the files in dir? I have three .json files there but when
I print the result, n
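That should indeed read every file in the directory; a minimal sketch of
the expected usage (the path is a placeholder). The catch, resolved above
in this thread, is that files whose names start with an underscore are
silently filtered out.

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DataSource;

// Sketch: pointing readTextFile at a directory reads all accepted files
// in it, line by line, as one DataSource.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSource<String> lines = env.readTextFile("file:///Users/me/path/to/file/dir");
lines.print(); // plus env.execute(), depending on the Flink version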
OK, wasn't sure if the +1s were just for the FAQ.
On Wed, Jul 1, 2015 at 11:30 AM, Ufuk Celebi wrote:
>
> On 01 Jul 2015, at 11:26, Maximilian Michels wrote:
>
> > I removed the FAQ from the main repository and merged it with the
> website's version.
> >
> > There is still the duplicate "How to C
Hi Ronny,
check out this answer on SO:
http://stackoverflow.com/questions/30599616/create-objects-from-input-files-in-apache-flink
It is a similar use case ... I guess you can get the metadata from the
input split as well.
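To make that hint concrete, a hedged sketch along the lines of the SO
answer: an input format can inspect its current FileInputSplit in open()
and attach the file name to every record. The class is made up; it assumes
the readRecord() hook of the delimited/text input format.

import java.io.IOException;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.FileInputSplit;
import org.apache.flink.core.fs.Path;

// Sketch: prepend the current file name to each line. A timestamp could be
// obtained similarly, e.g. via path.getFileSystem().getFileStatus(path).
public class FileNameTextInputFormat extends TextInputFormat {

    private String currentFileName;

    public FileNameTextInputFormat(Path filePath) {
        super(filePath);
    }

    @Override
    public void open(FileInputSplit split) throws IOException {
        super.open(split);
        currentFileName = split.getPath().getName();
    }

    @Override
    public String readRecord(String reuse, byte[] bytes, int offset, int numBytes)
            throws IOException {
        String line = super.readRecord(reuse, bytes, offset, numBytes);
        return currentFileName + "\t" + line;
    }
}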
On Wed, Jul 1, 2015 at 11:30 AM, Ronny Bräunlich
wrote:
> Hello,
>
> I w
Hello,
I want to read a folder containing text files with Flink.
As I already found out, I can simply point the environment to the directory
and it will read all the files.
What I couldn’t find out is whether it’s possible to keep the file metadata somehow.
Concretely, I need the timestamp, the filename and
On 01 Jul 2015, at 11:26, Maximilian Michels wrote:
> I removed the FAQ from the main repository and merged it with the website's
> version.
>
> There is still the duplicate "How to Contribute" guide. It suffers from the
> same sync problem.
Just remove it as well. Don't need another round
I removed the FAQ from the main repository and merged it with the website's
version.
There is still the duplicate "How to Contribute" guide. It suffers from the
same sync problem.
On Tue, Jun 30, 2015 at 7:04 PM, Stephan Ewen wrote:
> +1
> for moving the FAQ to the website.
>
> On Tue, Jun 30,
On 01 Jul 2015, at 10:34, Stephan Ewen wrote:
> +1, like that approach
+1
I like that this is not breaking for non-Scala users :-)
+1, like that approach
On Wed, Jul 1, 2015 at 10:28 AM, Robert Metzger wrote:
> (adding dev@ to the conversation)
>
> Chiwan looked into the issue. It seems that we cannot add the Scala
> version only to flink-scala and flink-streaming-scala.
> Since flink-runtime also needs Scala, all modules
(adding dev@ to the conversation)
Chiwan looked into the issue. It seems that we cannot add the Scala
version only to flink-scala and flink-streaming-scala.
Since flink-runtime also needs Scala, all modules are affected by this.
I would vote for naming the Scala 2.10 version of the flink modules wi
How about also allowing a vararg of multiple file names for the input
format?
We'd then have the option of:
- File or directory
- List of files or directories
- Base directory + regex that matches contained file paths (see the sketch
after this list)
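A hedged sketch of the third option as it could be approximated today,
assuming the protected acceptFile(FileStatus) hook discussed earlier in
this digest (class name and usage are illustrative).

import java.util.regex.Pattern;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.Path;

// Sketch: one data source over a base directory, keeping only files whose
// names match a regex, on top of the default filtering.
public class RegexTextInputFormat extends TextInputFormat {

    private final Pattern namePattern;

    public RegexTextInputFormat(Path baseDir, String nameRegex) {
        super(baseDir);
        this.namePattern = Pattern.compile(nameRegex);
    }

    @Override
    protected boolean acceptFile(FileStatus fileStatus) {
        return super.acceptFile(fileStatus)
                && namePattern.matcher(fileStatus.getPath().getName()).matches();
    }
}

// usage (illustrative):
// env.readFile(new RegexTextInputFormat(new Path(dir), ".*\\.csv"), dir)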
On Wed, Jul 1, 2015 at 10:13 AM, Flavio Pompermaier
wrote:
> +1 :)
>
> O
Hi Chan,
if you feel up to implementing such an input format, then you can also
contribute it. You simply have to open a JIRA issue and take ownership of
it.
Cheers,
Till
On Wed, Jul 1, 2015 at 10:08 AM, chan fentes wrote:
> Thank you all for your help and for pointing out different possibilit
+1 :)
On Wed, Jul 1, 2015 at 10:08 AM, chan fentes wrote:
> Thank you all for your help and for pointing out different possibilities.
> It would be nice to have an input format that takes a directory and a
> regex pattern (for file names) to create one data source instead of 1500.
> This would h
Thank you all for your help and for pointing out different possibilities.
It would be nice to have an input format that takes a directory and a regex
pattern (for file names) to create one data source instead of 1500. This
would have helped me to avoid the problem. Maybe this can be included in
one