An alternative way that may not be applicable in your case:
For Sopremo, all types implemented a common interface. When a package is
loaded, the Sopremo package manager scans the jar and looks for classes
implementing the interfaces (quite fast, because the entire class does not
have to be loaded). All ty
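For reference, a minimal self-contained sketch of that kind of jar scan, looking only at entry names without loading any class. `JarScanSketch` and the example entry names are hypothetical, not Sopremo code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import java.util.jar.JarOutputStream;

public class JarScanSketch {

    // List the binary names of all class files in a jar,
    // without loading any of the classes.
    static List<String> classNames(byte[] jarBytes) throws Exception {
        List<String> names = new ArrayList<>();
        try (JarInputStream in = new JarInputStream(new ByteArrayInputStream(jarBytes))) {
            for (JarEntry e; (e = in.getNextJarEntry()) != null; ) {
                String n = e.getName();
                if (n.endsWith(".class")) {
                    // strip ".class" and turn the path into a binary class name
                    names.add(n.substring(0, n.length() - 6).replace('/', '.'));
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny in-memory jar with two entries to scan.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (JarOutputStream out = new JarOutputStream(buf)) {
            out.putNextEntry(new JarEntry("com/example/MyOperator.class"));
            out.closeEntry();
            out.putNextEntry(new JarEntry("META-INF/MANIFEST.MF"));
            out.closeEntry();
        }
        System.out.println(classNames(buf.toByteArray()));
    }
}
```

A real package manager would then check each candidate class against the known interfaces (e.g. via a bytecode library), loading only the ones that match.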
Hi Kostas,
Thanks for putting this into the wiki. I added the JIRA link for the
off-heap memory. Now the wiki displays:
> Error rendering macro 'jira' :
> com.atlassian.confluence.macro.MacroExecutionException:
> java.lang.RuntimeException: Not Found
If this persists, we should report it.
Is t
@Stephan: Yes, you are summarizing it correctly.
I'll assign FLINK-1417 to myself and implement it as discussed here (once I
have resolved the other issues assigned to me)
There is one additional point we forgot in the discussion so far: We are
initializing Kryo with twitter/chill's "ScalaKryoInst
I like the idea of automatically figuring out which types are used by a
program and registering them with Kryo. Thus, +1 for this idea.
On Tue, Jan 20, 2015 at 11:34 AM, Robert Metzger
wrote:
> @Stephan: Yes, you are summarizing it correctly.
> I'll assign FLINK-1417 to myself and implement it as di
+1 for program analysis from me too...
Should be doable also on a lower level (e.g. analysis of compiled *.class
files) with some off-the-shelf libraries, right?
2015-01-20 11:39 GMT+01:00 Till Rohrmann :
> I like the idea to automatically figure out which types are used by a
> program and to re
On 20 Jan 2015, at 11:51, Alexander Alexandrov
wrote:
> +1 for program analysis from me too...
>
> Should be doable also on a lower level (e.g. analysis of compiled *.class
> files) with some off-the-shelf libraries, right?
Yes. There was a prototypical implementation using byte code analysis
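To illustrate: where the operator I/O types are reachable via reflection, the discovery could look roughly like this. This is a sketch; `MapFunction` below is a simplified stand-in for Flink's interface, and `extractTypes` is a hypothetical helper, not Flink API:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.LinkedHashSet;
import java.util.Set;

public class TypeRegistrationSketch {

    // Stand-in for a user function interface like Flink's MapFunction<IN, OUT>.
    interface MapFunction<IN, OUT> {
        OUT map(IN value);
    }

    // Example UDF whose I/O types we want to discover.
    static class Tokenizer implements MapFunction<String, Integer> {
        public Integer map(String value) {
            return value.length();
        }
    }

    // Inspect the generic interfaces of the UDF class and collect
    // their concrete type arguments.
    static Set<Class<?>> extractTypes(Class<?> udfClass) {
        Set<Class<?>> types = new LinkedHashSet<>();
        for (Type iface : udfClass.getGenericInterfaces()) {
            if (iface instanceof ParameterizedType) {
                for (Type arg : ((ParameterizedType) iface).getActualTypeArguments()) {
                    if (arg instanceof Class) {
                        types.add((Class<?>) arg);
                    }
                }
            }
        }
        return types;
    }

    public static void main(String[] args) {
        // These are the classes one would then pass to kryo.register(...).
        for (Class<?> t : extractTypes(Tokenizer.class)) {
            System.out.println(t.getName());
        }
    }
}
```

Bytecode analysis would additionally catch types that only appear inside method bodies, which plain reflection on the signature cannot see.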
Are we talking about the types for the input/output of operators or also
types that are used inside UDFs?
Operator I/O type classes are known, so we don't need static code
analysis for that. For types inside UDFs I can add that requirement to
FLINK-1319.
On 20.01.2015 11:51, Alexander Alexand
I think we are talking about the Operator I/O types, as types used
internally only by the UDFs should not be serialized.
2015-01-20 12:27 GMT+01:00 Timo Walther :
> Are we talking about the types for the input/output of operators or also
> types that are used inside UDFs?
> Operator I/O type clas
Yes, that sounds very reasonable.
On Jan 20, 2015 6:40 AM, "Stephan Ewen" wrote:
> Yes, I agree that the Avro serializer should be available by default. That
> is one case of a typical type that should work out of the box, given that
> we support Avro file formats.
>
> Let me summarize how I unde
Hi there,
I cannot figure out how the Scala base types (e.g. scala.Int, scala.Double,
etc.) are mapped to the Flink runtime.
It seems that they are not treated the same as their Java counterparts
(e.g. java.lang.Integer, java.lang.Double). For example, if I write the
following code:
val inputFo
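For context: scala.Int compiles to the primitive int where possible, but as soon as a value passes through erased generics (as it does in the Java API), it is boxed to java.lang.Integer. This is the same mechanism as Java autoboxing, which the following snippet shows:

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingSketch {
    public static void main(String[] args) {
        // A primitive has no runtime class of its own in generic code:
        // storing it in a generic container boxes it to its wrapper class.
        List<Object> values = new ArrayList<>();
        values.add(42);    // autoboxing: int -> java.lang.Integer
        values.add(3.14);  // autoboxing: double -> java.lang.Double

        System.out.println(values.get(0).getClass().getName()); // java.lang.Integer
        System.out.println(values.get(1).getClass().getName()); // java.lang.Double
    }
}
```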
Just to clarify in order to spare us some time in the discussion: I
*deliberately* want to use the Flink Java API from Scala with Scala core types.
2015-01-20 18:53 GMT+01:00 Alexander Alexandrov <
alexander.s.alexand...@gmail.com>:
> Hi there,
>
> I cannot figure out how the Scala base types (e.g. s
Hi there,
I have to implement some generic fallback strategy on top of a more
abstract DSL in order to keep datasets in a temp space (e.g. Tachyon). My
implementation is based on the 0.8 release. At the moment I am undecided
between three options:
- BinaryInputFormat / BinaryOutputFormat
-
Alexander Alexandrov created FLINK-1422:
---
Summary: Missing usage example for "withParameters"
Key: FLINK-1422
URL: https://issues.apache.org/jira/browse/FLINK-1422
Project: Flink
Issue
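A hedged sketch of the pattern the ticket asks to document: in Flink, `operator.withParameters(conf)` makes a Configuration available in a rich function's open() method. The classes below are simplified, self-contained stand-ins, not Flink's actual Configuration or RichFunction:

```java
import java.util.HashMap;
import java.util.Map;

public class WithParametersSketch {

    // Minimal stand-in for a configuration handed to an operator at runtime.
    static class Configuration {
        private final Map<String, String> values = new HashMap<>();
        void setString(String key, String value) { values.put(key, value); }
        String getString(String key, String def) { return values.getOrDefault(key, def); }
    }

    // Stand-in for a rich function that receives the configuration in open().
    static class KeywordFilter {
        private String keyword;
        void open(Configuration parameters) {
            keyword = parameters.getString("keyword", "");
        }
        boolean filter(String value) {
            return value.contains(keyword);
        }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString("keyword", "flink");

        KeywordFilter f = new KeywordFilter();
        f.open(conf); // in Flink, withParameters(conf) triggers this call
        System.out.println(f.filter("apache flink")); // true
        System.out.println(f.filter("apache spark")); // false
    }
}
```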
Hi everyone,
I'm running into some problems implementing an Accumulator for
returning the contents of a DataSet as a list.
https://github.com/mxm/flink/tree/count/collect
Basically, it works fine in this test case:
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
Integer[] in
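For comparison, the core shape of such a list accumulator, reduced to a sketch with hypothetical names rather than Flink's actual Accumulator interface:

```java
import java.util.ArrayList;
import java.util.List;

public class ListAccumulatorSketch<T> {

    private final List<T> local = new ArrayList<>();

    // Called for each element on a task's local accumulator instance.
    public void add(T value) {
        local.add(value);
    }

    // Merge the partial result of another parallel instance into this one.
    public void merge(ListAccumulatorSketch<T> other) {
        local.addAll(other.local);
    }

    public List<T> getLocalValue() {
        return local;
    }

    public static void main(String[] args) {
        ListAccumulatorSketch<Integer> a = new ListAccumulatorSketch<>();
        ListAccumulatorSketch<Integer> b = new ListAccumulatorSketch<>();
        a.add(1);
        a.add(2);
        b.add(3);
        a.merge(b); // the runtime merges per-task results like this
        System.out.println(a.getLocalValue()); // [1, 2, 3]
    }
}
```

The tricky parts in practice are serializing the accumulated values between tasks and the client, which the sketch deliberately leaves out.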
I would not recommend using the AvroInput/Output format because it's meant
to be used with Avro types (usually POJOs generated from an Avro schema).
I would use the TypeSerializerInputFormat / OutputFormat. Then you can be
sure that it's able to read/write all types supported by our system.
On Tue,
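To illustrate the idea behind a serializer-backed format, here is a minimal round trip through a fixed binary layout using plain java.io. It is a sketch of the concept, not Flink's TypeSerializerInputFormat:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class BinaryRoundTripSketch {

    // Write records in a fixed binary layout, as a serializer-backed
    // output format would do against a file system.
    static byte[] write(long[] records) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeInt(records.length);
            for (long r : records) {
                out.writeLong(r);
            }
        }
        return buf.toByteArray();
    }

    // Read the records back with the matching layout.
    static long[] read(byte[] bytes) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            long[] records = new long[in.readInt()];
            for (int i = 0; i < records.length; i++) {
                records[i] = in.readLong();
            }
            return records;
        }
    }

    public static void main(String[] args) throws IOException {
        long[] back = read(write(new long[]{1L, 2L, 3L}));
        System.out.println(back.length + " " + back[2]); // 3 3
    }
}
```

The advantage of delegating to the system's own serializers is exactly what the mail says: whatever the runtime can serialize, the format can persist and read back.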
Hi,
it seems that our master is currently not building. See:
https://travis-ci.org/apache/flink/jobs/47689754
We need to come up with a good solution to notify dev@flink when builds on
Travis are failing.
We also had unstable builds recently due to overly short Akka timeouts and it
took some time t
On 20 Jan 2015, at 23:57, Robert Metzger wrote:
> There are certainly ways to fix this. Right now, the best approach is
> probably setting up a REST 2 e-mail service somewhere which is mailing to
> our dev@ list (
> http://docs.travis-ci.com/user/notifications/#Webhook-notification).
That sound
I think it's just a missing import.
Maybe we can use Google AppEngine for that. It seems that their free
offering is sufficient for our purpose:
https://cloud.google.com/pricing/#app-engine. It also allows sending emails.
I guess it's hard to get the token for the "apache" user. Maybe there is
a
On 21 Jan 2015, at 00:19, Robert Metzger wrote:
> I think its just a missing import.
Yes.
> Maybe we can use Google AppEngine for that. It seems that their free
> offering is sufficient for our purpose:
> https://cloud.google.com/pricing/#app-engine. It also allows sending emails.
> I guess it
Carsten Brandt created FLINK-1423:
-
Summary: No Tags for new release on the github repo
Key: FLINK-1423
URL: https://issues.apache.org/jira/browse/FLINK-1423
Project: Flink
Issue Type: Task
Carsten Brandt created FLINK-1424:
-
Summary: bin/flink run does not recognize -c parameter anymore
Key: FLINK-1424
URL: https://issues.apache.org/jira/browse/FLINK-1424
Project: Flink
Issue T