Re-Hi,
I have another question regarding triggering the processing of a window.
Can this be done at specific time intervals, independent of whether an
event has been received or not, via a trigger?
The reason I am considering a trigger rather than timeWindow(All) is that
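A minimal sketch of what that could look like, assuming the built-in
ContinuousProcessingTimeTrigger, an input called stream, and a hypothetical
window function (note the trigger registers processing-time timers, so it
fires at the interval once the window has seen at least one element):

import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousProcessingTimeTrigger;

// Fire the window function every 10 seconds of processing time, even if
// few or no new events arrived since the last firing.
stream
    .keyBy(0)
    .window(GlobalWindows.create())
    .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(10)))
    .apply(new MyWindowFunction()); // hypothetical window function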
Hi,
I am using a global window to collect some events. I use a trigger to fire the
processing.
Is there any way to get the time of the event that has triggered the processing?
I am asking this as maxTimestamp() of the GlobalWindow returns Long.MAX_VALUE.
The code skeleton is:
stream
.
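A minimal sketch of one workaround, assuming the elements themselves carry
a timestamp (the Event type, its getTimestamp(), and the count trigger are
illustrative, not from the original code):

import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;
import org.apache.flink.util.Collector;

stream
    .keyBy(0)
    .window(GlobalWindows.create())
    .trigger(CountTrigger.of(100))
    .apply(new WindowFunction<Event, Long, Tuple, GlobalWindow>() {
        @Override
        public void apply(Tuple key, GlobalWindow window,
                          Iterable<Event> events, Collector<Long> out) {
            // GlobalWindow has no meaningful maxTimestamp(), so take the
            // largest timestamp carried by the buffered events instead;
            // that is the event that caused the trigger to fire.
            long latest = Long.MIN_VALUE;
            for (Event e : events) {
                latest = Math.max(latest, e.getTimestamp());
            }
            out.collect(latest);
        }
    });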
Hi Fabian,
Thank you, I think I have a better picture of this now. I think if I set
the number of DataSource tasks (a config option, I guess?) equal to the
number of input splits, that would do what I expected.
Yes, I will keep it at the same place across nodes.
Thank you,
Saliya
On Mon, Jan 25, 2016 at 10:59 AM, Fabian Hueske
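For reference, a sketch of what that could look like on the API side (the
format, record type, path, and split count are illustrative):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// Run the source with exactly as many parallel tasks as there are input
// splits, so that each task reads exactly one split.
DataSet<MyRecord> input = env
    .createInput(new MyBinaryInputFormat(path)) // hypothetical format
    .setParallelism(numberOfSplits);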
Hi Till,
thanks for your reply :)
Yes, it finished after ~27 minutes…
Best regards,
Lydia
> Am 25.01.2016 um 14:27 schrieb Till Rohrmann :
>
> Hi Lydia,
>
> Since matrix multiplication is O(n^3), I would assume that it would simply
> take 1000 times longer than the multiplication of the 100 x 100 matrix.
Hi Christophe,
What HBase version are you using? Have you looked at using the shaded
client jars? Those should at least isolate HBase/Hadoop's Guava version
from that used by your application.
-n
On Monday, January 25, 2016, Christophe Salperwyck <
christophe.salperw...@gmail.com> wrote:
> Hi a
You can start a job and then periodically request and store information
about the running job and its vertices using the corresponding REST calls [1].
The data will be in JSON format.
After the job has finished, you can stop requesting data.
Next you parse the JSON, extract the information you need and g
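A minimal polling sketch, assuming the monitoring API's /jobs/<jobid>
endpoint on the default port 8081 and leaving the JSON parsing as a stub:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobPoller {
    public static void main(String[] args) throws Exception {
        // args[0]: the job id printed when the job is submitted.
        URL url = new URL("http://localhost:8081/jobs/" + args[0]);
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder json = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    json.append(line);
                }
            }
            // TODO: parse the JSON and store the vertex stats you need.
            System.out.println(json);
            Thread.sleep(5000); // poll every 5 seconds
        }
    }
}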
Hi Saliya,
the number of parallel splits is controlled by the number of input splits
returned by the InputFormat.createInputSplits() method. This method
receives a parameter minNumSplits, which is equal to the number of DataSource
tasks.
Flink handles input splits a bit differently from Hadoop. In Ha
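As a sketch, a custom format for fixed-length binary records could carve
the file into one split per requested task like this (the record size and
the file-length bookkeeping are simplified assumptions):

import java.io.IOException;
import org.apache.flink.api.common.io.FileInputFormat;
import org.apache.flink.core.fs.FileInputSplit;
import org.apache.flink.core.fs.Path;

public abstract class FixedRecordFormat extends FileInputFormat<double[]> {

    private static final long RECORD_SIZE = 8 * 1024; // one row of 1024 doubles
    private long fileLength; // assumed known, e.g. queried from the file system
    private String filePath;

    @Override
    public FileInputSplit[] createInputSplits(int minNumSplits) throws IOException {
        long totalRecords = fileLength / RECORD_SIZE;
        long recordsPerSplit = totalRecords / minNumSplits;
        FileInputSplit[] splits = new FileInputSplit[minNumSplits];
        for (int i = 0; i < minNumSplits; i++) {
            long start = i * recordsPerSplit * RECORD_SIZE;
            long length = (i == minNumSplits - 1)
                    ? fileLength - start // last split takes the remainder
                    : recordsPerSplit * RECORD_SIZE;
            splits[i] = new FileInputSplit(i, new Path(filePath), start, length, null);
        }
        return splits;
    }
    // reachedEnd()/nextRecord() omitted; hence the class is left abstract.
}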
Hi Fabian,
Thank you for the information.
So, is there a way I can get the task number within the InputFormat? That
way I can use it to offset the block of rows.
The file is too large to fit in a single process's memory, so the current
setups in MPI and Hadoop use the rank (task number) info to m
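One way this tends to be done (a sketch): when each task reads exactly one
split, the split number can play the role of the MPI rank / Hadoop task
number. rowsPerSplit, ROW_SIZE, and the seekable stream are illustrative:

import java.io.IOException;
import org.apache.flink.core.io.GenericInputSplit;

// Inside a custom InputFormat<MyRow, GenericInputSplit>:
@Override
public void open(GenericInputSplit split) throws IOException {
    int taskNumber = split.getSplitNumber(); // 0 .. numSplits-1, like a rank
    long byteOffset = (long) taskNumber * rowsPerSplit * ROW_SIZE;
    stream.seek(byteOffset); // skip to this task's block of rows
}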
Hi Niels!
There is a slight mismatch between your thoughts and the current design,
but not a big one.
What you describe (at the start of the job, the latest checkpoint is
automatically loaded) is basically what the high-availability setup does if
the master dies. The new master loads all jobs and cont
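For context, checkpointing itself is switched on per job; what the
high-availability master adds is restoring from the latest completed
checkpoint on failover. A minimal sketch (interval chosen arbitrarily):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Draw a checkpoint every 10 seconds; in an HA setup the new master
// restores running jobs from their latest completed checkpoint.
env.enableCheckpointing(10000);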
Hello,
According to
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Apache-Flink-Web-Dashboard-Completed-Job-history-td4067.html,
I cannot retrieve the job history from the Dashboard after turning off the JM.
But as Fabian mentioned here,
"However, you can query all stats that are di
Hi Christophe,
I'm sorry that you ran into the issue. Right now, there is no better fix.
For the next releases, I'll take care that this doesn't happen again.
Maybe we'll do a 0.10.2 release before 1.0.0 (you are the third user who
has, however implicitly, publicly requested one).
O
Hi all,
I have an issue with Flink 0.10.1, HBase, and Guava; it seems to be related
to this JIRA:
https://issues.apache.org/jira/browse/FLINK-3158
If I remove the com.google.common.* class files from the jar file, it
works.
Is there any other way to deal with this problem?
Thanks for your
Hi Lydia,
Since matrix multiplication is O(n^3), I would assume that it would simply
take 1000 times longer than the multiplication of the 100 x 100 matrix.
Have you waited that long to see whether it completes, or is there another
problem?
Cheers,
Till
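Spelled out, the scaling argument is

\[
\frac{T(1000)}{T(100)} \approx \left(\frac{1000}{100}\right)^{3} = 10^{3} = 1000,
\]

so the ~27 minutes reported above would correspond to a run of roughly 1.6
seconds for the 100 x 100 case.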
On Mon, Jan 25, 2016 at 2:13 PM, Lydia Ickler
Hi,
I want to do a simple matrix multiplication and use the following code (see bottom).
For matrices of 50x50 or 100x100 it is no problem, but already with matrices
of 1000x1000 it does not work anymore and gets stuck in the joining part.
What am I doing wrong?
Best regards,
Lydia
package de.tube
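For reference, a sketch of the join-based pattern in question, with
matrices in coordinate form as Tuple3 of (row index, column index, value);
note that for dense 1000x1000 inputs the join emits 10^9 partial products,
which is why this stage dominates:

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple3;

// C(i,k) = sum over j of A(i,j) * B(j,k); a and b are assumed to be
// DataSet<Tuple3<Integer, Integer, Double>> in coordinate form.
DataSet<Tuple3<Integer, Integer, Double>> c = a
    .join(b)
    .where(1).equalTo(0) // A's column index j matches B's row index j
    .with(new JoinFunction<Tuple3<Integer, Integer, Double>,
                           Tuple3<Integer, Integer, Double>,
                           Tuple3<Integer, Integer, Double>>() {
        @Override
        public Tuple3<Integer, Integer, Double> join(
                Tuple3<Integer, Integer, Double> left,
                Tuple3<Integer, Integer, Double> right) {
            // one partial product: (i, k, A(i,j) * B(j,k))
            return new Tuple3<>(left.f0, right.f1, left.f2 * right.f2);
        }
    })
    .groupBy(0, 1) // group partial products by target cell (i, k)
    .sum(2);       // add them up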
Hi Saliya,
yes, that is possible; however, the requirements for reading a binary file
from the local FS are basically the same as for reading it from HDFS.
In order to be able to start reading different sections of a file in
parallel, you need to know the different starting positions. This can be
done b
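With fixed-length records the starting positions can be computed directly,
without an index or sync markers. A small illustration (the record size,
path, and reader count are made-up numbers):

// 8-byte records split across 4 parallel readers.
long recordSize = 8;
long fileLength = new java.io.File("/data/input.bin").length(); // assumed path
long totalRecords = fileLength / recordSize;
int numReaders = 4;

long recordsPerReader = totalRecords / numReaders;
for (int i = 0; i < numReaders; i++) {
    long startByte = i * recordsPerReader * recordSize; // always on a record boundary
    System.out.printf("reader %d starts at byte %d%n", i, startByte);
}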