Hi guys,
Janino works perfectly!
It lets me create entire classes on the fly and use them.
Thanks a lot.
Cheers,
Giacomo
On Mon, Sep 21, 2015 at 7:49 PM, Giacomo Licari
wrote:
> Thanks a lot Fabian,
> I will try it.
>
> Cheers,
> Giacomo
>
> On Mon, Sep 21, 2015 at 7:35 PM, Fabian Hueske wrote:
Of course!
On 21 September 2015 at 19:10, Fabian Hueske wrote:
> The custom partitioner does not know its task id, but the mapper that
> assigns the partition ids knows its subtask id.
>
> So if the mapper with subtask id 2 assigns partition ids 2 and 7, only 7
> will be sent over the network.
Thanks a lot Fabian,
I will try it.
Cheers,
Giacomo
On Mon, Sep 21, 2015 at 7:35 PM, Fabian Hueske wrote:
> Hi Giacomo,
>
> you could use Janino [1] to directly compile the code string into a class
> and execute it. The program does not need to be shipped to the cluster if
> all user functions are contained in the jar.
Hi Giacomo,
you could use Janino [1] to directly compile the code string into a class
and execute it. The program does not need to be shipped to the cluster if
all user functions are contained in the jar.
Cheers, Fabian
[1] http://unkrig.de/w/Janino
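As a minimal sketch of this approach (assuming the Janino dependency is on the classpath; the generated class `Doubler` and its `apply` method are purely illustrative stand-ins for the generated Flink code):

```java
import org.codehaus.janino.SimpleCompiler;

public class JaninoDemo {

    // Compiles a Java source string entirely in-memory and invokes a
    // static method on the resulting class. No .java or .class files
    // are written to disk.
    static int compileAndApply(String source, int arg) throws Exception {
        SimpleCompiler compiler = new SimpleCompiler();
        compiler.cook(source); // compile the string
        Class<?> cls = compiler.getClassLoader().loadClass("Doubler");
        return (Integer) cls.getMethod("apply", int.class).invoke(null, arg);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the code string produced by the JSON-to-Java parser.
        String source =
            "public class Doubler {\n" +
            "    public static int apply(int x) { return 2 * x; }\n" +
            "}\n";
        System.out.println(compileAndApply(source, 21)); // prints 42
    }
}
```

Since the class is loaded through Janino's own class loader, nothing needs to be shipped to the cluster as long as all user functions referenced by the generated code already live in the submitted jar.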
On Sep 21, 2015 7:08 PM, "Giacomo Licari" wrote:
The custom partitioner does not know its task id, but the mapper that
assigns the partition ids knows its subtask id.
So if the mapper with subtask id 2 assigns partition ids 2 and 7, only 7
will be sent over the network.
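A sketch of this scheme in the DataSet API, assuming records are tagged as `Tuple2<Integer, String>` (the class names and the tagging rule are illustrative):

```java
import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;

public class PartitionByTag {

    // The mapper knows its own subtask index via the runtime context,
    // so it can tag records destined for itself with its own index.
    public static class TagMapper
            extends RichMapFunction<String, Tuple2<Integer, String>> {
        @Override
        public Tuple2<Integer, String> map(String value) {
            int myId = getRuntimeContext().getIndexOfThisSubtask();
            // Illustrative: tag the record with this subtask's own index,
            // so it stays local; other tags would cross the network.
            return new Tuple2<>(myId, value);
        }
    }

    // The custom partitioner simply routes by the tag in field 0.
    public static class IdPartitioner implements Partitioner<Integer> {
        @Override
        public int partition(Integer key, int numPartitions) {
            return key % numPartitions;
        }
    }
}

// Usage on a hypothetical DataSet<String> ds:
//   ds.map(new PartitionByTag.TagMapper())
//     .partitionCustom(new PartitionByTag.IdPartitioner(), 0);
```

Records whose tag equals the producing subtask's index land back on the same task and are never serialized over the wire.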
On Sep 21, 2015 6:56 PM, "Stefan Bunk" wrote:
> Hi Fabian,
>
> that sounds good, thank you.
Hi Robert,
thanks for the reply.
I receive a JSON from my client interface, which contains the dataflow
description.
Then I parse that JSON, and the parser creates a string containing the
Flink code. As the user can modify the dataflow, the description can change
every time it calls "Execute Dat
Hi Fabian,
that sounds good, thank you.
One final question: As I said earlier, this also distributes data in some
unnecessary cases, say ID 4 sends data to ID 3.
Is there no way to find out the ID of the current node? I guess that number
is already available on the node and just needs to be exposed.
Hi,
you have to make sure that the Flink classes are contained in your class
path.
Either add the flink-dist jar from the binary distribution to your class
path, or use maven to build the backend.jar as a fat jar.
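For the fat-jar route, a pom.xml fragment along these lines is one common setup (a sketch using the standard maven-shade-plugin; coordinates are the plugin's real ones, the rest of the build section is assumed):

```xml
<!-- pom.xml fragment: package backend.jar as a fat jar with dependencies -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With that in place, `mvn package` produces a single jar containing the backend classes plus their dependencies.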
Why are you generating a java class from your dataflows?
Isn't it easier to just ca
Hi guys,
I'm developing a dataflow client whose backend, exported as a jar, allows
users to convert dataflows into a Flink .java file with some code inside.
The generated file naturally calls some classes I have in the jar file,
like MapFunction and DataSet.
My question is:
How can I compile the generated file