@Chiwan: let me know if you need hands-on support. I'll be more than happy
to help (as my downstream project is using Scala 2.11).
2015-07-01 17:43 GMT+02:00 Chiwan Park :
> Okay, I will apply this suggestion.
>
> Regards,
> Chiwan Park
>
> > On Jul 1, 2015, at 5:41 PM, Ufuk Celebi wrote:
> >
>
I witnessed a similar issue yesterday on a simple job (single task chain,
no shuffles) with a release-0.9 based fork.
2015-04-15 14:59 GMT+02:00 Flavio Pompermaier :
> Yes, sorry for that... I found it somewhere in the logs... the problem was
> that the program didn't die immediately but was somehow
The Flink optimizer will basically do two things for you (plus some other
magic regarding iterations, which I will leave aside for now):
1) Algorithm selection: e.g., choose between a hash-join or a merge-join;
2) Partitioning strategy: track the grouping keys and, if partitioning is
needed, pick a suitable
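To make the algorithm-selection point concrete, here is a small self-contained sketch (plain Java, not Flink code) of the two join strategies the optimizer chooses between. Both produce the same result; a hash join builds a table on one input and probes with the other, while a merge join sorts both inputs and scans them in lockstep.

```java
import java.util.*;

// Conceptual sketch, not Flink's implementation: rows are int[]{key, value}.
public class JoinStrategies {

    // Hash join: build a hash table on the left side, probe with the right.
    static List<String> hashJoin(List<int[]> left, List<int[]> right) {
        Map<Integer, List<int[]>> build = new HashMap<>();
        for (int[] l : left) {
            build.computeIfAbsent(l[0], k -> new ArrayList<>()).add(l);
        }
        List<String> out = new ArrayList<>();
        for (int[] r : right) {
            for (int[] l : build.getOrDefault(r[0], Collections.emptyList())) {
                out.add(l[0] + ":" + l[1] + "," + r[1]);
            }
        }
        return out;
    }

    // Merge join: sort both sides on the key, then advance two cursors.
    static List<String> mergeJoin(List<int[]> left, List<int[]> right) {
        left = new ArrayList<>(left);
        right = new ArrayList<>(right);
        left.sort(Comparator.comparingInt(a -> a[0]));
        right.sort(Comparator.comparingInt(a -> a[0]));
        List<String> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = Integer.compare(left.get(i)[0], right.get(j)[0]);
            if (cmp < 0) i++;
            else if (cmp > 0) j++;
            else {
                // emit all pairs for the current key group
                int key = left.get(i)[0];
                int jStart = j;
                while (i < left.size() && left.get(i)[0] == key) {
                    for (j = jStart; j < right.size() && right.get(j)[0] == key; j++) {
                        out.add(left.get(i)[0] + ":" + left.get(i)[1] + "," + right.get(j)[1]);
                    }
                    i++;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<int[]> left = List.of(new int[]{1, 10}, new int[]{2, 20});
        List<int[]> right = List.of(new int[]{2, 200}, new int[]{3, 300});
        System.out.println(hashJoin(left, right));   // prints [2:20,200]
        System.out.println(mergeJoin(left, right));  // prints [2:20,200]
    }
}
```

The optimizer's choice is a cost trade-off: the hash variant avoids sorting but needs the build side to fit in memory (or spill), while the merge variant can reuse an existing sort order on its inputs.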
ious mail answer these questions as well?
>
> Cheers, Fabian
>
> 2015-05-18 22:03 GMT+02:00 Alexander Alexandrov <
> alexander.s.alexand...@gmail.com>:
>
>> In the dawn of Flink when Flink Operators were still called PACTs (short
>> for Parallelization Contra
Sorry, we're using a forked version which changed the groupId.
2015-05-19 15:15 GMT+02:00 Till Rohrmann :
> I guess it's a typo: "eu.stratosphere" should be replaced by
> "org.apache.flink"
>
> On Tue, May 19, 2015 at 1:13 PM, Alexander Alexandrov <
>
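For reference, the corrected coordinate would look like this in the pom.xml (the artifactId below is illustrative; substitute whichever Flink module you actually depend on):

```xml
<dependency>
  <!-- the old "eu.stratosphere" groupId predates the rename to Apache Flink -->
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-core</artifactId>
  <version>0.9-SNAPSHOT</version>
</dependency>
```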
Thanks for the feedback, Fabian.
This is related to the question I sent on the user mailing list yesterday.
Mustafa is working on a master's thesis where we try to abstract an operator
for the update of stateful datasets (decoupled from the current native
iterations logic) and use it in conjunction
We managed to do this with the following config:
// properties
2.2.0
0.9-SNAPSHOT
1.2.1
// from the dependency management
org.apache.hadoop
In the dawn of Flink when Flink Operators were still called PACTs (short
for Parallelization Contracts) the system used to support the so called
"output contracts" via custom annotations that can be attached to the UDF
(the ForwardedFields annotation is a descendant of that concept).
Amongst others
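A present-day descendant of that idea is the forwarded-fields hint attached to a UDF. A minimal sketch against the Flink DataSet API (not self-contained; it needs the Flink dependencies, and annotation details have shifted between versions):

```java
// The annotation promises the optimizer that tuple field 0 passes through
// unchanged, so an existing partitioning or sort order on that field can be
// reused downstream instead of being recomputed.
@FunctionAnnotation.ForwardedFields("f0")
public class ScaleValue implements MapFunction<Tuple2<Long, Double>, Tuple2<Long, Double>> {
    @Override
    public Tuple2<Long, Double> map(Tuple2<Long, Double> in) {
        return new Tuple2<>(in.f0, in.f1 * 2.0);
    }
}
```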
Yes, I expect to have one in the next few weeks (the code is actually
there, but we need to port it to the Flink ML API). I suggest following the
JIRA issue to check when this is done:
https://issues.apache.org/jira/browse/FLINK-1731
Regards,
Alexander
PS. Bear in mind that we
AFAIK this is not supported at the moment, but at TU Berlin we have a
master's student working on this feature, so it might be possible within the
next 3-6 months.
Regards,
Alexander
2015-03-02 17:01 GMT+01:00 Malte Schwarzer :
> Hi everyone,
>
> I read that Flink is supposed to automatically o
True.
2015-02-10 19:14 GMT+01:00 Vasiliki Kalavri :
> Hi,
>
> It's hard to tell without details about your algorithm, but what you're
> describing sounds to me like something you can use the workset for.
>
> -V.
> On Feb 10, 2015 6:54 PM, "Alexander
I am not sure whether this is supported at the moment. The only workaround
I could think of is indeed to use a boolean flag that indicates whether the
element has been deleted or not.
An alternative approach is to ditch Flink's native iteration construct and
write your intermediate results to Tach
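The boolean-flag workaround can be sketched in plain Java (outside Flink, with hypothetical names): instead of physically removing an element from the solution set, flip a tombstone flag on it and filter on that flag when consuming the data.

```java
import java.util.*;
import java.util.stream.*;

public class TombstoneFlag {
    // An element with a tombstone flag instead of physical deletion.
    static final class Element {
        final int id;
        final boolean deleted;
        Element(int id, boolean deleted) { this.id = id; this.deleted = deleted; }
    }

    // "Delete" by setting the flag; the element stays in the dataset.
    static List<Element> delete(List<Element> data, int id) {
        return data.stream()
                .map(e -> e.id == id ? new Element(e.id, true) : e)
                .collect(Collectors.toList());
    }

    // Consumers filter out tombstoned elements.
    static List<Integer> liveIds(List<Element> data) {
        return data.stream()
                .filter(e -> !e.deleted)
                .map(e -> e.id)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Element> data = List.of(
                new Element(1, false), new Element(2, false), new Element(3, false));
        data = delete(data, 2);
        System.out.println(liveIds(data)); // prints [1, 3]
    }
}
```

The cost of this pattern is that tombstoned elements keep occupying space in the iteration state, which is exactly why native delete support would be preferable.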
I am trying to use the TypeSerializer IO formats to write temp data to
disk. A gist with a minimal example can be found here:
https://gist.github.com/aalexandrov/90bf21f66bf604676f37
However, with the current setting I get the following error with the
TypeSerializerInputFormat:
Exception in thre
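For context, the intended round trip looks roughly like this (a Flink DataSet API sketch, not self-contained; path and tuple type are illustrative, and the exact constructor signatures have varied across Flink versions):

```java
// Write a DataSet to disk in Flink's binary serializer format, then read it
// back. The input format needs the data's TypeInformation so it can obtain
// the matching serializer.
TypeInformation<Tuple2<Long, Double>> type = dataSet.getType();

dataSet.write(new TypeSerializerOutputFormat<Tuple2<Long, Double>>(),
              "file:///tmp/spill");

TypeSerializerInputFormat<Tuple2<Long, Double>> in =
        new TypeSerializerInputFormat<>(type);
in.setFilePath("file:///tmp/spill");
DataSet<Tuple2<Long, Double>> restored = env.createInput(in, type);
```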
I don't see any reason why the Scala approach should not work in Java. For
example, the flink-graph API seems to be built on top of this concept (in
Java):
https://github.com/project-flink/flink-graph/blob/master/src/main/java/flink/graphs/Graph.java
2015-01-27 23:45 GMT+01:00 Flavio Pompermaier :
>
the current state.
> I checked out just before Christmas last year and haven't merged yet.
>
> Mathias
>
>
> On 17.01.2015 13:53, Alexander Alexandrov wrote:
> > Are you looking at the master (0.9-SNAPSHOT) branch?
> >
> > 2015-01-17 13:24 GMT+01:00 Mathias Pe
Are you looking at the master (0.9-SNAPSHOT) branch?
2015-01-17 13:24 GMT+01:00 Mathias Peters :
>
> Hi,
>
> is there still a way to schedule jobs to the JobManager one after the
> other?
>
> I remember the "-w" flag in an earlier version for that
and TM) and the other ones for JM and TM respectively. I don't know if this
> is really necessary though. Because no one else asked for it, I think it's
> fine to go with what suits your use case.
>
> – Ufuk
>
> On 15 Jan 2015, at 14:25, Alexander Alexandrov <
> alexan
"jobmanager.env.java.classpath" as a configuration key? And maybe an
> additional "env.java.classpath"?
>
> PS: sorry for the incomplete mail before :D
>
> On 15 Jan 2015, at 01:00, Alexander Alexandrov <
> alexander.s.alexand...@gmail.com> wrote:
>
>
Sorry, I meant between the three. In particular, where will
env.java.classpath propagate?
2015-01-15 14:24 GMT+01:00 Alexander Alexandrov <
alexander.s.alexand...@gmail.com>:
> No worries, this is the first reply that I get in this thread.
>
> What would be the difference between
Hi there,
is there a canonical / suggested way to customize the classpath of the
Flink JM and TM processes?
At the moment I have hardcoded my way around this by manually changing the
following lines like this:
# in taskmanager.sh
$JAVA_RUN [...] -classpath "/tmp/classes:$FLINK_TM_CLASSPATH"
# in job
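One way to avoid hardcoding the path in the scripts is to splice in an environment variable; a sketch, where `FLINK_EXTRA_CLASSPATH` is a made-up name for this example and not a real Flink config key:

```shell
# Prepend an optional extra directory to the TaskManager classpath.
# FLINK_EXTRA_CLASSPATH is hypothetical; values below are for illustration.
FLINK_TM_CLASSPATH="lib/flink.jar"
FLINK_EXTRA_CLASSPATH="/tmp/classes"

if [ -n "$FLINK_EXTRA_CLASSPATH" ]; then
  TM_CLASSPATH="$FLINK_EXTRA_CLASSPATH:$FLINK_TM_CLASSPATH"
else
  TM_CLASSPATH="$FLINK_TM_CLASSPATH"
fi

echo "$TM_CLASSPATH"   # -> /tmp/classes:lib/flink.jar
```

The same guard could be added to both taskmanager.sh and the JobManager startup script, so unset variables leave the default behavior untouched.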