Ted Yu created FLINK-3005:
-
Summary: Commons-collections object deserialization remote command
execution vulnerability
Key: FLINK-3005
URL: https://issues.apache.org/jira/browse/FLINK-3005
Project: Flink
Nick Dimiduk created FLINK-3004:
---
Summary: ForkableMiniCluster does not call RichFunction#open
Key: FLINK-3004
URL: https://issues.apache.org/jira/browse/FLINK-3004
Project: Flink
Issue Type: B
Hi Ali,
Flink uses different serializers for different data types. For example,
(boxed) primitives are serialized using dedicated serializers
(IntSerializer, StringSerializer, etc.) and the ProtocolEvent class is
recognized as a Pojo type and therefore serialized using Flink's
PojoSerializer.
Type
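Fabian's point about POJO recognition can be illustrated with a small stdlib-only sketch. The MyEvent class and the looksLikePojo check below are hypothetical and only approximate Flink's actual rules (the real analyzer also accepts private fields that have getters and setters):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class PojoCheck {

    // A hypothetical event class, standing in for something like ProtocolEvent.
    public static class MyEvent {
        public long timestamp;
        public String payload;

        public MyEvent() {}  // public no-arg constructor required
    }

    // Simplified version of Flink's POJO conditions: public class,
    // public no-arg constructor, and every field public. (Flink itself
    // also accepts private fields with matching getters/setters.)
    public static boolean looksLikePojo(Class<?> clazz) {
        if (!Modifier.isPublic(clazz.getModifiers())) {
            return false;
        }
        try {
            clazz.getConstructor();  // looks up the public no-arg constructor
        } catch (NoSuchMethodException e) {
            return false;
        }
        for (Field f : clazz.getDeclaredFields()) {
            if (!Modifier.isPublic(f.getModifiers())) {
                return false;
            }
        }
        return true;
    }
}
```

A type that fails these checks falls back to a generic serializer (Kryo in Flink's case) instead of the PojoSerializer.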
Thanks for the help.
TypeExtractor.getForObject(modelMapInit) did the job. It's possible that it's
an IDE problem that .getClass() did not work. IntelliJ is a bit fiddly with
those things.
> 1) Making null the default value and initializing manually is probably more
> efficient, because otherwise the
+1
It's still hacky but we don't have better alternatives.
I'm not 100% sure if we can get rid of the parser. I think it's still a
nice way for quickly defining the fields of a POJO if the type extractor
fails to analyze it. But actually I don't know an example where it fails.
Regards,
Timo
Big +1
Of course, we had the initial talk about it… :D
> On 11 Nov 2015, at 19:33, Kirschnick, Johannes
> wrote:
>
> Hi Stephan,
>
> looking at the TypeHint, I got reminded on how Gson (a Json Parser) handles
> this situation of parsing generics.
>
> See here for an overview
> https://sites.
Hi,
I am a bit stuck with that dependency problem. Any help would be
appreciated as I would like to continue working on the formats. Thanks!
Best,
Martin
On 07.11.2015 17:28, Martin Junghanns wrote:
Hi Robert,
Thank you for the hints. I tried to narrow down the error:
Flink version: 0.10-S
Hi Stephan,
looking at the TypeHint, I got reminded on how Gson (a Json Parser) handles
this situation of parsing generics.
See here for an overview
https://sites.google.com/site/gson/gson-user-guide#TOC-Serializing-and-Deserializing-Generic-Types
Seems like this method was rediscovered :) And
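The trick Johannes refers to works because an anonymous subclass preserves its generic superclass's type arguments despite erasure. A minimal, Gson-independent sketch (the TypeToken name mirrors Gson's, but this implementation is illustrative only):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class TypeCapture {

    // Gson-style TypeToken: subclassing fixes T, so the generic
    // superclass carries the full parameterized type at runtime.
    public abstract static class TypeToken<T> {
        private final Type type;

        protected TypeToken() {
            ParameterizedType superType =
                    (ParameterizedType) getClass().getGenericSuperclass();
            this.type = superType.getActualTypeArguments()[0];
        }

        public Type getType() {
            return type;
        }
    }

    public static Type captureListOfString() {
        // The "{}" creates an anonymous subclass, which is what
        // preserves List<String> despite type erasure.
        return new TypeToken<List<String>>() {}.getType();
    }
}
```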
LICENSE file looks good
NOTICE file looks good
Hash files looks good for source artifact
Signature file checked for source artifact
No third party executable in source artifact
Source compiled
Tests passed
Ran Word Count locally and on Apache Hadoop YARN 2.6.0 in session mode.
+1
On Tue, Nov 10,
Fabian,
I tried running it again and I noticed there were some more exceptions in
the log. I fixed those and I don’t see the original error, but I do see
other ArrayIndexOutOfBoundsExceptions in the Kryo serializer code (I didn’t
even enable that yet like you suggested). Examples:
1)
10:49:36,331
It should suffice to do something like
"getRuntimeContext().getKeyValueState("microModelMap", new
HashMap().getClass(), null);"
Two more comments:
1) Making null the default value and initializing manually is probably more
efficient, because otherwise the empty map would have to be cloned each
t
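The null-default pattern suggested above can be sketched in plain Java. StateCell is a hypothetical stand-in for a per-key state cell, not Flink's actual state API:

```java
import java.util.HashMap;
import java.util.Map;

public class LazyStateExample {

    // Hypothetical stand-in for a key/value state cell with a null default.
    public static class StateCell {
        // Default stays null: no empty map has to be cloned per key.
        private Map<String, Double> map;

        public Map<String, Double> value() {
            if (map == null) {
                map = new HashMap<>();  // initialize manually on first access
            }
            return map;
        }
    }

    public static double demo() {
        StateCell cell = new StateCell();
        cell.value().put("count", 1.0);
        cell.value().put("count", cell.value().get("count") + 1.0);
        return cell.value().get("count");
    }
}
```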
Hey,
Yes, what you wrote should work. You can alternatively use
TypeExtractor.getForObject(modelMapInit) to extract the type information.
I also like to implement custom type info for HashMaps and the other
types and use that.
Cheers,
Gyula
Martin Neumann wrote (on 11 Nov 2015, Sz
Hej,
What is the correct way of initializing a stateful operator that is using
a hashmap? modelMapInit.getClass() does not work, and neither does
HashMap.class. Do I have to implement my own TypeInformation class, or is
there a simpler way?
cheers Martin
private OperatorState> microModelMap;
@Overr
Hi all!
We discovered a nice way to give TypeHints in the Google Cloud Dataflow
SDK, in a way that would fit Flink perfectly. I created a JIRA for that:
https://issues.apache.org/jira/browse/FLINK-2788
Since this is more powerful and type safe than the String/Parser way of
giving hints, I was won
I think you are touching on something important here.
There is a discussion/PullRequest about graceful shutdown of streaming jobs
(like stop
the sources and let the remainder of the streams run out).
With the work in progress to draw external checkpoints, it should be easy to
do checkpoint-and-close.
Hey guys,
With recent discussions around being able to shutdown and restart streaming
jobs from specific checkpoints, there is another issue that I think needs
tackling.
As far as I understand, when a streaming job finishes, the tasks are not
notified for the last checkpoints and also jobs don't ta
Hi Ali,
I looked into this issue. The problem seems to be caused by the
deserializer reading more data than it should.
This might happen for one of two reasons:
1) the meta information of how much data is safe to read is incorrect.
2) the serializer and deserializer logic are not in s
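Reason 2 is easy to reproduce with plain java.io streams: if the reader expects more bytes than the writer produced, it runs past the end of the data. A minimal sketch, not Flink's actual serialization stack:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.UncheckedIOException;

public class SerdeMismatch {

    // Writer emits a 4-byte int; a deserializer that is out of sync and
    // expects an 8-byte long tries to read past the end of the data.
    public static boolean readerOverruns() {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try {
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(42);  // serializer writes 4 bytes
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            in.readLong();     // deserializer expects 8 bytes
            return false;      // not reached
        } catch (EOFException e) {
            return true;       // reading past the written data fails
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```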
+1
- Run ‘mvn verify’ command with Scala 2.10, Scala 2.11
- Run scala shell (both local and remote) with Scala 2.10, Scala 2.11
- Run batch word count example through scala shell
- Create and build project based on flink-quickstart-scala_2.11
> On Nov 11, 2015, at 7:28 PM, Maximilian Michels
I think now (before the 1.0 release) is the right time to clean it up.
Are you suggesting to have two execution configs for batch and streaming?
I'm not sure if we need to distinguish between pre-flight and runtime
options: From a user's perspective, it doesn't matter. For example the
serializer
Hi all!
The ExecutionConfig is a bit of a strange thing right now. It looks like it
became the place where everyone just put the stuff they want to somehow
push from the client to runtime, plus a random assortment of config flags.
As a result:
- The ExecutionConfig is available in batch and s
+1
We're getting there.
- Tested the new web frontend
- Verified LICENSE/NOTICE files
- Ran cluster streaming examples
- Ran cluster batch examples
- Run examples on YARN
- Run examples with Scala 2.11
- Examined log files
- Checked documentation in source release
- Checked for binaries in source
+1
- started the Scala 2.11 build of Flink on a YARN cluster (Hadoop 2.6.0)
- executed wordcount and batch wordcount on the streaming API
- tested the web interface
- implemented a job against the scala 2.11 dependencies in maven.
On Wed, Nov 11, 2015 at 10:19 AM, Aljoscha Krettek
wrote:
> Let
Ufuk Celebi created FLINK-3003:
--
Summary: Add container allocation timeout to YARN CLI
Key: FLINK-3003
URL: https://issues.apache.org/jira/browse/FLINK-3003
Project: Flink
Issue Type: Improvement
See: https://issues.apache.org/jira/browse/FLINK-3002
On Wed, Nov 11, 2015 at 10:54 AM, Stephan Ewen wrote:
> "Either" and "Optional" types are quite useful.
>
> Let's add them to the core Java API.
>
> On Wed, Nov 11, 2015 at 10:00 AM, Vasiliki Kalavri <
> vasilikikala...@gmail.com> wrote:
>
>>
Stephan Ewen created FLINK-3002:
---
Summary: Add an EitherType to the Java API
Key: FLINK-3002
URL: https://issues.apache.org/jira/browse/FLINK-3002
Project: Flink
Issue Type: New Feature
Stephan Ewen created FLINK-3001:
---
Summary: Add Support for Java 8 Optional type
Key: FLINK-3001
URL: https://issues.apache.org/jira/browse/FLINK-3001
Project: Flink
Issue Type: New Feature
Ufuk Celebi created FLINK-3000:
--
Summary: Add ShutdownHook to YARN CLI to prevent lingering sessions
Key: FLINK-3000
URL: https://issues.apache.org/jira/browse/FLINK-3000
Project: Flink
Issue Ty
"Either" and "Optional" types are quite useful.
Let's add them to the core Java API.
On Wed, Nov 11, 2015 at 10:00 AM, Vasiliki Kalavri <
vasilikikala...@gmail.com> wrote:
> Thanks Fabian! I'll try that :)
>
> On 10 November 2015 at 22:31, Fabian Hueske wrote:
>
> > You could implement a Java Ei
Hi Ali,
one more thing. Did that error occur once or is it reproducible?
Thanks for your help,
Fabian
2015-11-11 9:50 GMT+01:00 Ufuk Celebi :
> Hey Ali,
>
> thanks for sharing the code. I assume that the custom
> ProtocolEvent, ProtocolDetailMap, and Subscriber types are all POJOs. They
> should not
Fabian Hueske created FLINK-2999:
Summary: Support connected keyed streams
Key: FLINK-2999
URL: https://issues.apache.org/jira/browse/FLINK-2999
Project: Flink
Issue Type: Improvement
Let’s try this again… :D
+1
I think this one could be it. I did:
- verify the checksums of some of the release artifacts, I assume that
the rest is also OK
- test build for custom Hadoop versions 2.4, 2.5, 2.6
- verify that LICENSE/NOTICE are correct
- verify that licenses of dependencies are com
+1 for some way of declaring public interfaces as experimental.
> On 10 Nov 2015, at 22:24, Stephan Ewen wrote:
>
> I think we need anyways an annotation "@PublicExperimental".
>
> We can make this annotation such that it can be added to methods and can
> use that to declare
> Methods in an oth
Thanks Fabian! I'll try that :)
On 10 November 2015 at 22:31, Fabian Hueske wrote:
> You could implement a Java Either type (similar to Scala's Either) that
> either has a Message or the VertexState and a corresponding TypeInformation
> and TypeSerializer that serializes a byte flag to indicate
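Fabian's byte-flag idea can be sketched in plain Java. This Either and its (de)serialization routines are illustrative only, fixed here to String/Integer rather than generic Message/VertexState types:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class EitherExample {

    // Minimal Either: holds exactly one of two alternatives.
    public static final class Either<L, R> {
        private final L left;
        private final R right;
        private final boolean isLeft;

        private Either(L left, R right, boolean isLeft) {
            this.left = left;
            this.right = right;
            this.isLeft = isLeft;
        }

        public static <L, R> Either<L, R> left(L value) {
            return new Either<>(value, null, true);
        }

        public static <L, R> Either<L, R> right(R value) {
            return new Either<>(null, value, false);
        }

        public boolean isLeft() { return isLeft; }
        public L left() { return left; }
        public R right() { return right; }
    }

    // Serialization with a leading byte flag, as described above:
    // 0 means the left alternative follows, 1 means the right one does.
    public static byte[] serialize(Either<String, Integer> e) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            if (e.isLeft()) {
                out.writeByte(0);
                out.writeUTF(e.left());
            } else {
                out.writeByte(1);
                out.writeInt(e.right());
            }
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
        return bytes.toByteArray();
    }

    public static Either<String, Integer> deserialize(byte[] data) {
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(data))) {
            if (in.readByte() == 0) {
                return Either.left(in.readUTF());
            }
            return Either.right(in.readInt());
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
    }
}
```

A round trip through serialize/deserialize preserves both the flag and the value.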
Hey Ali,
thanks for sharing the code. I assume that the custom
ProtocolEvent, ProtocolDetailMap, and Subscriber types are all POJOs. They
should not be a problem. I think this is a bug in Flink 0.9.1.
Is it possible to re-run your program with the upcoming 0.10.0 (RC8)
version and report back?
1) Add
+1 from my side
- Compiled code against Hadoop 2.3.0 and Hadoop 2.6.0
- Executed all tests
- Executed manual tests, plus ChaosMonkeyITCase
- Checked the LICENSE and NOTICE files
- Tested streaming program with a window implementation and a custom session
timeout example
On Tue, Nov 10, 2015 at