Hi,
quick question: is the close method in sinks, e.g. BucketingSink, called
when a job gets cancelled or stopped?
We're trying to have the following: A custom stoppable streaming source
with a modified BucketingSink which moves the files from ephemeral storage
to S3 Frankfurt upon the closing of
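In the meantime, the intended pattern can be sketched in plain Java. Everything below is illustrative, not Flink API: in Flink the corresponding hook would be the sink's close(), which, as far as we understand, also runs on cancellation, so the finalization step should tolerate partially written files.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative stand-in for a sink with a close() lifecycle hook.
class MovingSink {
    private final Path ephemeralDir; // fast local scratch space
    private final Path finalDir;     // stands in for the S3 destination

    MovingSink(Path ephemeralDir, Path finalDir) {
        this.ephemeralDir = ephemeralDir;
        this.finalDir = finalDir;
    }

    // Per-record write goes to ephemeral storage first.
    void invoke(String fileName, String contents) throws IOException {
        Files.write(ephemeralDir.resolve(fileName), contents.getBytes());
    }

    // Runs once on shutdown (finish, stop, or cancel): move everything out.
    void close() throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(ephemeralDir)) {
            for (Path f : files) {
                // A real sink would upload to S3 here instead of a local move.
                Files.move(f, finalDir.resolve(f.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```

Whether close() is a safe place for the move depends on the answer to the question above; if close() can be skipped on hard failures, a separate cleanup pass would be needed as a fallback.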
Hi,
looks very similar to this bug:
https://issues.apache.org/jira/browse/FLINK-1916
Best,
Mihail
On 14.07.2015 14:09, Andra Lungu wrote:
Hi Flavio,
Could you also show us a code snippet?
On Tue, Jul 14, 2015 at 2:06 PM, Flavio Pompermaier
<pomperma...@okkam.it> wrote:
Hi to all,
have any further questions.
Kind regards,
Max
On Thu, Jul 2, 2015 at 10:20 AM, Maximilian Michels <m...@apache.org> wrote:
Hi Mihail,
Thanks for the code. I'm trying to reproduce the problem now.
On Wed, Jul 1, 2015 at 8:30 PM, Mihail Vieru
<vi...@info
Needless to say, this doesn't occur in a local execution environment,
when writing to the local file system.
I would appreciate any input on this.
Best,
Mihail
On 30.06.2015 12:10, Mihail Vieru wrote:
Hi Till,
thank you for your reply.
I have the following code sn
On Jun 30, 2015 at 12:01 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi,
the writeAsCsv method is not writing anything to HDFS (version 1.2.1)
when the WriteMode is set to OVERWRITE.
A file is created but it's empty. And no trace of errors in the Flink or
Hadoop logs on all nodes in the cluster.
What could cause this issue? I really need this feature.
esMapper so I can take a look?
Where is InitVerticesMapper called above?
Cheers,
Vasia.
On 26 June 2015 at 10:51, Mihail Vieru <vi...@informatik.hu-berlin.de> wrote:
Hi Robert,
I'm using the same input data, as well as the same parameters I
use in the IDE'
APSP.java:74). I guess
that is code written by you or a library you are using.
Maybe the data you are using on the cluster is different from your
local test data?
Best,
Robert
On Thu, Jun 25, 2015 at 7:41 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi,
I get an ArrayIndexOutOfBoundsException when I run my job from a JAR in
the CLI.
This doesn't occur in the IDE.
I've build the JAR using the "maven-shade-plugin" and the pom.xml
configuration Robert has provided here:
https://stackoverflow.com/questions/30102523/linkage-failure-when-runn
Hi,
I had the same problem and setting the solution set to unmanaged helped:
VertexCentricConfiguration parameters = new VertexCentricConfiguration();
parameters.setSolutionSetUnmanagedMemory(true);
runVertexCentricIteration(..., parameters);
Best,
Mihail
On 17.06.2015 07:01, Sebastian wrote:
replace the Cross operation by something else
(Join, CoGroup) or whether you can at least filter aggressively after
the Cross before the next operation.
On Mon, Jun 15, 2015 at 2:18 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi,
I get the following *"No space left on device" IOException* when using
the following Cross operator.
The inputs for the operator are each just *10MB* in size (same input for
IN1 and IN2; 1000 tuples) and I get the exception after Flink manages to
fill *50GB* of SSD space and the partition
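Those numbers are consistent with the quadratic blow-up of Cross: with 1000 tuples per side, the operator materializes 1000 × 1000 = 1,000,000 pairs, each carrying both input tuples. A back-of-the-envelope check (tuple sizes derived from the figures quoted above; serialization and spill overhead come on top):

```java
public class CrossBlowup {
    // Rough output volume of a Cross: every pair of input tuples is materialized.
    static long crossOutputBytes(long tuplesPerSide, long bytesPerInput) {
        long bytesPerTuple = bytesPerInput / tuplesPerSide; // ~10 KB here
        long pairs = tuplesPerSide * tuplesPerSide;         // 1,000,000 here
        return pairs * 2 * bytesPerTuple;                   // each pair holds both tuples
    }

    public static void main(String[] args) {
        long bytes = crossOutputBytes(1_000, 10L * 1024 * 1024); // two 10 MB inputs
        System.out.println(bytes / (1024L * 1024 * 1024) + " GB"); // prints "19 GB"
    }
}
```

So the raw pair data alone is roughly 20 GB, and filling 50 GB of disk once spilling and serialization overhead are included is plausible; replacing the Cross with a Join, or filtering before all pairs are materialized, avoids the quadratic volume.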
operator by calling
"setParallelism()" after the operation, for example
"result.writeAsText(path).setParallelism(1)".
On Wed, May 13, 2015 at 8:02 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi,
I need to write a data set to a single file without setting the
parallelism to 1.
How can I achieve this?
Cheers,
Mihail
P.S.: it's for persisting intermediate results in loops and reading
those in the next iteration.
Which, btw, works for higher iteration counts with explicit persistence.
at is actually strictly caching the
intermediate data set. Flink will support that internally in a few
weeks (let's see if it is in time for 0.9, it may not be). Until then,
you need to explicitly persist the graph after each loop iteration.
On Wed, May 13, 2015 at 2:45 PM, Mihail Vieru
Hi,
I also get the EOFException on 0.9-SNAPSHOT when using a modified
SingleSourceShortestPaths on big graphs.
After I set the SolutionSet to "unmanaged" the job finishes. But this
only works until a certain point, over which I cannot increase the input
size further without getting an OutOfM
One thing I notice is that your vertices and edges args are
flipped. Might be the source of the error :-)
On 18 March 2015 at 23:04, Mihail Vieru
<vi...@informatik.hu-berlin.de> wrote:
I'm also using 0 as sourceID. The exact program arguments:
0
/home/vi
run out of ideas...
What's your source ID parameter? I ran mine with 0.
About the result, you call both createVertexCentricIteration() and
runVertexCentricIteration() on the initialized graph, right?
On 18 March 2015 at 22:33, Mihail Vieru <vi...@informatik.hu-berlin.de> wrote:
e relevant, but, just
in case, are you using the latest master?
Cheers,
V.
On 18 March 2015 at 19:21, Mihail Vieru <vi...@informatik.hu-berlin.de> wrote:
Hi Vasia,
I have used a simple job (attached) to generate a file which looks
like this:
0 0
1 1
2
h 2015 at 16:54, Mihail Vieru <vi...@informatik.hu-berlin.de> wrote:
Hi,
great! Thanks!
I really need this bug fixed because I'm laying the groundwork for
my Diplom thesis and I need to be sure that the Gelly API is
reliable and can handle large data
INK-1734>
Cheers,
Robert
On Tue, Mar 17, 2015 at 3:52 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi Robert,
thank you for your reply.
I'm starting the job from the Scala IDE. So only one JobManager
and one TaskManager in the same JVM.
I
e an issue but with a minimum of 5 pages and a
maximum of 8 it seems to be distributed fairly evenly across the
different partitions.
Cheers,
Robert
On Tue, Mar 17, 2015 at 1:25 AM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
And the correct SSSPUnweighted attached.
On 17.03.2015 01:23, Mihail Vieru wrote:
Hi,
I'm getting the following RuntimeException for an adaptation of the
SingleSourceShortestPaths example using the Gelly API (see attachment).
It's been adapted for unweighted graphs having vertices with Long values.
As an input graph I'm using the social network graph (~200MB unpacked)
fro
22,671 INFO org.apache.flink.runtime.jobmanager.JobManager-
Starting embedded TaskManager for JobManager's LOCAL execution mode
On Mon, Mar 9, 2015 at 8:57 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de>
wrote:
Hi,
thank you for your answer. The JobManager web interface aka Flink
Dashboard is acces
ger required. It's running in the same JVM as the
JobManager.
The Flink Query Interface needs to be started using another main
class: org.apache.flink.client.WebFrontend.
Let me know if you need more help!
Robert
On Sat, Mar 7, 2015 at 6:08 PM, Mihail Vieru
<vi...@informatik.hu-berlin.de> wrote:
Hey Robert,
the logging level was INFO and I didn't get any errors.
After I switched to DEBUG I get the following output:
2015-03-07 17:46:52 DEBUG MutableMetricsFactory:42 - field
org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSucces
Hi all,
I want to start Flink from the Scala IDE by using a local run
configuration similar to one I've used for Stratosphere 0.4.
The entry point for the Scala application is:
*org.apache.flink.runtime.jobmanager.JobManager*
With the program arguments:
--configDir
/home/vieru/dev/flink-fr