Re: [VOTE] Release Apache Flink 1.1.2 (RC1)

2016-08-31 Thread Robert Metzger
+1 to release:
- Checked versions in quickstart in staging repository
- tested the staging repository against my "flink 1.1.0 hadoop1" test job.
Also tested dependency download etc.
- Checked the source artifact for binaries (manually)
- Looked a bit through some of the binaries


It would be good if somebody could test that everything is starting
correctly (start-local.sh, yarn-session.sh)


On Tue, Aug 30, 2016 at 11:30 PM, Robert Metzger 
wrote:

> Dear Flink community,
>
> Please vote on releasing the following candidate as Apache Flink version
> 1.1.2.
>
> The commit to be voted on:
> f42f0c69 (http://git-wip-us.apache.org/repos/asf/flink/commit/f42f0c69)
>
>
> The release artifacts to be voted on can be found at:
> http://people.apache.org/~rmetzger/flink-1.1.2-rc1/
>
> The release artifacts are signed with the key with fingerprint D9839159:
> http://www.apache.org/dist/flink/KEYS
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapacheflink-1103
>
> -
>
> The vote is open for the next 3 days and
> passes if a majority of at least three +1 PMC votes are cast.
>
>
> [ ] +1 Release this package as Apache Flink 1.1.2
> [ ] -1 Do not release this package, because ...
>
>
>


Re: [VOTE] Release Apache Flink 1.1.2 (RC1)

2016-08-31 Thread Fabian Hueske
I compared the 1.1.2-rc1 branch with the 1.1.1 release tag and went over
the diff.

- No new dependencies were added
- Versions of existing dependencies were not changed.
- Did not notice anything that would block the release.

+1 to release 1.1.2

2016-08-31 9:46 GMT+02:00 Robert Metzger :

> +1 to release:
> - Checked versions in quickstart in staging repository
> - tested the staging repository against my "flink 1.1.0 hadoop1" test job.
> Also tested dependency download etc.
> - Checked the source artifact for binaries (manually)
> - Looked a bit through some of the binaries
>
>
> It would be good if somebody could test that everything is starting
> correctly (start-local.sh, yarn-session.sh)
>
>
> On Tue, Aug 30, 2016 at 11:30 PM, Robert Metzger 
> wrote:
>
> > Dear Flink community,
> >
> > Please vote on releasing the following candidate as Apache Flink version
> > 1.1.2.
> >
> > The commit to be voted on:
> > f42f0c69 (http://git-wip-us.apache.org/repos/asf/flink/commit/f42f0c69)
> >
> >
> > The release artifacts to be voted on can be found at:
> > http://people.apache.org/~rmetzger/flink-1.1.2-rc1/
> >
> > The release artifacts are signed with the key with fingerprint D9839159:
> > http://www.apache.org/dist/flink/KEYS
> >
> > The staging repository for this release can be found at:
> > https://repository.apache.org/content/repositories/orgapacheflink-1103
> >
> > -
> >
> > The vote is open for the next 3 days and
> > passes if a majority of at least three +1 PMC votes are cast.
> >
> >
> > [ ] +1 Release this package as Apache Flink 1.1.2
> > [ ] -1 Do not release this package, because ...
> >
> >
> >
>


Fwd: NullPointerException in beam stream runner

2016-08-31 Thread Demin Alexey
Hi

Sorry if I picked the wrong mailing list.

After BEAM-102 was resolved, FlinkStreamingPipelineTranslator has the following
code in visitPrimitiveTransform:


if (translator == null && applyCanTranslate(transform, node, translator)) {
  LOG.info(node.getTransform().getClass().toString());
  throw new UnsupportedOperationException(
  "The transform " + transform + " is currently not supported.");
}
applyStreamingTransform(transform, node, translator);


but applyCanTranslate and applyStreamingTransform both require a non-null
translator.
As a result, if you use a side input in your code, you will hit an NPE.

Maybe Aljoscha Krettek could describe how this code is supposed to work?


Regards,
Alexey Diomin


Re: NullPointerException in beam stream runner

2016-08-31 Thread Aljoscha Krettek
Hi,
I think this is more suited for the Beam dev list. Nevertheless, I think
this is a coding error and the condition should be
if (translator != null && !applyCanTranslate(transform, node, translator))

With what program did you encounter the NPE? It seems to me that this should
rarely happen; at least it doesn't happen in any of the Beam runner tests.

Cheers,
Aljoscha

On Wed, 31 Aug 2016 at 11:27 Demin Alexey  wrote:

> Hi
>
> Sorry if I picked the wrong mailing list.
>
> After  BEAM-102 was solved in FlinkStreamingPipelineTranslator we have code
> in visitPrimitiveTransform:
>
>
> if (translator == null && applyCanTranslate(transform, node, translator)) {
>   LOG.info(node.getTransform().getClass().toString());
>   throw new UnsupportedOperationException(
>   "The transform " + transform + " is currently not supported.");
> }
> applyStreamingTransform(transform, node, translator);
>
>
> but applyCanTranslate and applyStreamingTransform always require NotNull
> translator
> as result if you try use side input in your code then you will cause NPE
>
> Maybe Aljoscha Krettek could describe how this code must work?
>
>
> Regards,
> Alexey Diomin
>


Re: Streaming - memory management

2016-08-31 Thread Stephan Ewen
In streaming, memory is mainly needed for state (key/value state). The
exact representation depends on the chosen StateBackend.

State is explicitly released: for windows, state is cleaned up
automatically (on firing / expiry); for user-defined state, keys have to be
explicitly cleared (clear() method) or, in the future, will have the option
to expire.
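
As a rough sketch (hypothetical event type; the exact descriptor constructor
can vary by Flink version), explicitly clearing user-defined keyed state looks
like this:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class SessionCounter extends RichFlatMapFunction<Event, Long> {

  private transient ValueState<Long> count;

  @Override
  public void open(Configuration parameters) {
    count = getRuntimeContext().getState(
        new ValueStateDescriptor<>("count", Long.class, 0L));
  }

  @Override
  public void flatMap(Event event, Collector<Long> out) throws Exception {
    long current = count.value() + 1;
    count.update(current);
    if (event.isEndOfSession()) {
      out.collect(current);
      count.clear(); // releases the state held for the current key
    }
  }
}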

The heavy workhorse for streaming state is currently RocksDB, which
internally uses native (off-heap) memory to keep the data.

Does that help?

Stephan


On Tue, Aug 30, 2016 at 11:52 PM, Roshan Naik 
wrote:

> As per the docs, in Batch mode, dynamic memory allocation is avoided by
> storing messages being processed in ByteBuffers via Unsafe methods.
>
> > Couldn't find any docs describing mem mgmt in Streaming mode. So...
>
> - Am wondering if this is also the case with Streaming ?
>
> - If so, how does Flink detect that an object is no longer being used and
> can be reclaimed for reuse once again ?
>
> -roshan
>


Re: NullPointerException in beam stream runner

2016-08-31 Thread Demin Alexey
Hi

If we change the check to translator != null, then the next line (
applyStreamingTransform(transform, node, translator); ) will cause the NPE.

That's the main reason I don't understand this code:

x = null;
if (x == null && f1_null_value_forbid(x)) { ..}
f2_null_value_forbid(x);

Changing (x == null) to (x != null) simply moves the point where the NPE occurs.


2016-08-31 13:43 GMT+04:00 Aljoscha Krettek :

> Hi,
> I think this is more suited for the Beam dev list. Nevertheless, I think
> this is a coding error and the condition should be
> if (translator != null && !applyCanTranslate(transform, node, translator))
>
> With what program did you encounter an NPE, it seems to me that this should
> rarely happen, at least it doesn't happen in all the Beam runner tests.
>
> Cheers,
> Aljoscha
>
> On Wed, 31 Aug 2016 at 11:27 Demin Alexey  wrote:
>
> > Hi
> >
> > Sorry if I picked the wrong mailing list.
> >
> > After  BEAM-102 was solved in FlinkStreamingPipelineTranslator we have
> code
> > in visitPrimitiveTransform:
> >
> >
> > if (translator == null && applyCanTranslate(transform, node,
> translator)) {
> >   LOG.info(node.getTransform().getClass().toString());
> >   throw new UnsupportedOperationException(
> >   "The transform " + transform + " is currently not supported.");
> > }
> > applyStreamingTransform(transform, node, translator);
> >
> >
> > but applyCanTranslate and applyStreamingTransform always require NotNull
> > translator
> > as result if you try use side input in your code then you will cause NPE
> >
> > Maybe Aljoscha Krettek could describe how this code must work?
> >
> >
> > Regards,
> > Alexey Diomin
> >
>


Re: NullPointerException in beam stream runner

2016-08-31 Thread Demin Alexey
Program to reproduce:

https://gist.github.com/xhumanoid/d784a4463a45e68acb124709a521156e

1) options.setStreaming(false);  - we get an NPE and I can't understand how
the code works
2) options.setStreaming(true);  - the pipeline compiles (it still fails, but
that is due to my incorrect use of windows)


2016-08-31 13:53 GMT+04:00 Demin Alexey :

> Hi
>
> If we can change code on translator != null then next line (
> applyStreamingTransform(transform, node, translator); ) will cause NPE
>
> It's main problem why I don't understand code:
>
> x = null;
> if (x == null && f1_null_value_forbid(x)) { ..}
> f2_null_value_forbid(x);
>
> change (x == null) => (x !=null) simple change point of NPE
>
>
> 2016-08-31 13:43 GMT+04:00 Aljoscha Krettek :
>
>> Hi,
>> I think this is more suited for the Beam dev list. Nevertheless, I think
>> this is a coding error and the condition should be
>> if (translator != null && !applyCanTranslate(transform, node, translator))
>>
>> With what program did you encounter an NPE, it seems to me that this
>> should
>> rarely happen, at least it doesn't happen in all the Beam runner tests.
>>
>> Cheers,
>> Aljoscha
>>
>> On Wed, 31 Aug 2016 at 11:27 Demin Alexey  wrote:
>>
>> > Hi
>> >
>> > Sorry if I picked the wrong mailing list.
>> >
>> > After  BEAM-102 was solved in FlinkStreamingPipelineTranslator we have
>> code
>> > in visitPrimitiveTransform:
>> >
>> >
>> > if (translator == null && applyCanTranslate(transform, node,
>> translator)) {
>> >   LOG.info(node.getTransform().getClass().toString());
>> >   throw new UnsupportedOperationException(
>> >   "The transform " + transform + " is currently not supported.");
>> > }
>> > applyStreamingTransform(transform, node, translator);
>> >
>> >
>> > but applyCanTranslate and applyStreamingTransform always require NotNull
>> > translator
>> > as result if you try use side input in your code then you will cause NPE
>> >
>> > Maybe Aljoscha Krettek could describe how this code must work?
>> >
>> >
>> > Regards,
>> > Alexey Diomin
>> >
>>
>
>


Re: NullPointerException in beam stream runner

2016-08-31 Thread Aljoscha Krettek
Ah I see, an unbounded source such as the Kafka source does not work in
batch mode (which setStreaming(false) enables). The code should work in
streaming mode if you apply some window that is compatible with the
side-input window to the main input.

I think the code in streaming still works because there cannot be cases
where the translator is null right now. The correct check should be this,
though:
if (translator == null || !applyCanTranslate(transform, node, translator))
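
In context, the corrected guard would read roughly like this (my paraphrase of
the snippet quoted further down in this thread, not the exact upstream code):

if (translator == null || !applyCanTranslate(transform, node, translator)) {
  LOG.info(node.getTransform().getClass().toString());
  throw new UnsupportedOperationException(
      "The transform " + transform + " is currently not supported.");
}
// only reached with a non-null translator that can translate the transform
applyStreamingTransform(transform, node, translator);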

Cheers,
Aljoscha

On Wed, 31 Aug 2016 at 12:07 Demin Alexey  wrote:

> Program for reproduce
>
> https://gist.github.com/xhumanoid/d784a4463a45e68acb124709a521156e
>
> 1) options.setStreaming(false);  - we have NPE and i can't understand how
> code work
> 2) options.setStreaming(true);  - pipeline can compile (he still have
> error, but it's my incorrect work with window)
>
>
> 2016-08-31 13:53 GMT+04:00 Demin Alexey :
>
> > Hi
> >
> > If we can change code on translator != null then next line (
> > applyStreamingTransform(transform, node, translator); ) will cause NPE
> >
> > It's main problem why I don't understand code:
> >
> > x = null;
> > if (x == null && f1_null_value_forbid(x)) { ..}
> > f2_null_value_forbid(x);
> >
> > change (x == null) => (x !=null) simple change point of NPE
> >
> >
> > 2016-08-31 13:43 GMT+04:00 Aljoscha Krettek :
> >
> >> Hi,
> >> I think this is more suited for the Beam dev list. Nevertheless, I think
> >> this is a coding error and the condition should be
> >> if (translator != null && !applyCanTranslate(transform, node,
> translator))
> >>
> >> With what program did you encounter an NPE, it seems to me that this
> >> should
> >> rarely happen, at least it doesn't happen in all the Beam runner tests.
> >>
> >> Cheers,
> >> Aljoscha
> >>
> >> On Wed, 31 Aug 2016 at 11:27 Demin Alexey  wrote:
> >>
> >> > Hi
> >> >
> >> > Sorry if I picked the wrong mailing list.
> >> >
> >> > After  BEAM-102 was solved in FlinkStreamingPipelineTranslator we have
> >> code
> >> > in visitPrimitiveTransform:
> >> >
> >> >
> >> > if (translator == null && applyCanTranslate(transform, node,
> >> translator)) {
> >> >   LOG.info(node.getTransform().getClass().toString());
> >> >   throw new UnsupportedOperationException(
> >> >   "The transform " + transform + " is currently not supported.");
> >> > }
> >> > applyStreamingTransform(transform, node, translator);
> >> >
> >> >
> >> > but applyCanTranslate and applyStreamingTransform always require
> NotNull
> >> > translator
> >> > as result if you try use side input in your code then you will cause
> NPE
> >> >
> >> > Maybe Aljoscha Krettek could describe how this code must work?
> >> >
> >> >
> >> > Regards,
> >> > Alexey Diomin
> >> >
> >>
> >
> >
>


[jira] [Created] (FLINK-4537) ResourceManager registration with JobManager

2016-08-31 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4537:
-

 Summary: ResourceManager registration with JobManager
 Key: FLINK-4537
 URL: https://issues.apache.org/jira/browse/FLINK-4537
 Project: Flink
  Issue Type: Sub-task
Reporter: Maximilian Michels


The ResourceManager keeps track of all JobManagers which execute jobs. When a 
new JobManager registers, its leadership status is checked through the 
HighAvailabilityServices. It will then be registered at the ResourceManager 
using the {{JobID}} provided with the initial registration message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: NullPointerException in beam stream runner

2016-08-31 Thread Demin Alexey
Thanks

With if (translator == null || !applyCanTranslate(transform, node,
translator)) everything works as expected.


Regards,
Alexey Diomin


2016-08-31 14:12 GMT+04:00 Aljoscha Krettek :

> Ah I see, an unbounded source, such as the Kafka source does not work in
> batch mode (which setStreaming(false) enables). The code should work in
> streaming mode if you apply some window that is compatible with the
> side-input window to the main input.
>
> I think the code in streaming still works because there cannot be cases
> where the translator is null right now. The correct check should be this,
> though:
> if (translator == null || !applyCanTranslate(transform, node, translator))
>
> Cheers,
> Aljoscha
>
> On Wed, 31 Aug 2016 at 12:07 Demin Alexey  wrote:
>
> > Program for reproduce
> >
> > https://gist.github.com/xhumanoid/d784a4463a45e68acb124709a521156e
> >
> > 1) options.setStreaming(false);  - we have NPE and i can't understand how
> > code work
> > 2) options.setStreaming(true);  - pipeline can compile (he still have
> > error, but it's my incorrect work with window)
> >
> >
> > 2016-08-31 13:53 GMT+04:00 Demin Alexey :
> >
> > > Hi
> > >
> > > If we can change code on translator != null then next line (
> > > applyStreamingTransform(transform, node, translator); ) will cause NPE
> > >
> > > It's main problem why I don't understand code:
> > >
> > > x = null;
> > > if (x == null && f1_null_value_forbid(x)) { ..}
> > > f2_null_value_forbid(x);
> > >
> > > change (x == null) => (x !=null) simple change point of NPE
> > >
> > >
> > > 2016-08-31 13:43 GMT+04:00 Aljoscha Krettek :
> > >
> > >> Hi,
> > >> I think this is more suited for the Beam dev list. Nevertheless, I
> think
> > >> this is a coding error and the condition should be
> > >> if (translator != null && !applyCanTranslate(transform, node,
> > translator))
> > >>
> > >> With what program did you encounter an NPE, it seems to me that this
> > >> should
> > >> rarely happen, at least it doesn't happen in all the Beam runner
> tests.
> > >>
> > >> Cheers,
> > >> Aljoscha
> > >>
> > >> On Wed, 31 Aug 2016 at 11:27 Demin Alexey  wrote:
> > >>
> > >> > Hi
> > >> >
> > >> > Sorry if I picked the wrong mailing list.
> > >> >
> > >> > After  BEAM-102 was solved in FlinkStreamingPipelineTranslator we
> have
> > >> code
> > >> > in visitPrimitiveTransform:
> > >> >
> > >> >
> > >> > if (translator == null && applyCanTranslate(transform, node,
> > >> translator)) {
> > >> >   LOG.info(node.getTransform().getClass().toString());
> > >> >   throw new UnsupportedOperationException(
> > >> >   "The transform " + transform + " is currently not
> supported.");
> > >> > }
> > >> > applyStreamingTransform(transform, node, translator);
> > >> >
> > >> >
> > >> > but applyCanTranslate and applyStreamingTransform always require
> > NotNull
> > >> > translator
> > >> > as result if you try use side input in your code then you will cause
> > NPE
> > >> >
> > >> > Maybe Aljoscha Krettek could describe how this code must work?
> > >> >
> > >> >
> > >> > Regards,
> > >> > Alexey Diomin
> > >> >
> > >>
> > >
> > >
> >
>


[jira] [Created] (FLINK-4538) Implement slot allocation protocol with JobMaster

2016-08-31 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4538:
-

 Summary: Implement slot allocation protocol with JobMaster
 Key: FLINK-4538
 URL: https://issues.apache.org/jira/browse/FLINK-4538
 Project: Flink
  Issue Type: Sub-task
  Components: Cluster Management
Reporter: Maximilian Michels






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4539) Duplicate/inconsistent logic for physical memory size in classes "Hardware" and "EnvironmentInformation"

2016-08-31 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4539:
---

 Summary: Duplicate/inconsistent logic for physical memory size in 
classes "Hardware" and "EnvironmentInformation"
 Key: FLINK-4539
 URL: https://issues.apache.org/jira/browse/FLINK-4539
 Project: Flink
  Issue Type: Improvement
  Components: Local Runtime
Affects Versions: 1.1.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
Priority: Minor
 Fix For: 1.2.0


Both {{Hardware}} and {{EnvironmentInformation}} have some logic to determine 
the size of the physical memory. The {{EnvironmentInformation}} class uses that 
in a heuristic for the maximum heap size.

We should consolidate the logic in the {{Hardware}} class and call it from the 
{{EnvironmentInformation}}.
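
As a hypothetical sketch of the proposed consolidation (assuming the existing 
{{org.apache.flink.runtime.util.Hardware}} utility; names are illustrative only):

{code}
public class EnvironmentInformation {

    /** Delegates to the single implementation in Hardware instead of duplicating the logic. */
    public static long getSizeOfPhysicalMemory() {
        return Hardware.getSizeOfPhysicalMemory();
    }
}
{code}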



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Flink 1.1.2 (RC1)

2016-08-31 Thread Maximilian Michels
+1 (binding)

Tested Flink 1.1.2 Scala 2.11 Hadoop2

- Ran ./flink run ../examples/streaming/Iteration.jar with
  - ./start-local.sh
  - ./start-cluster.sh
  - ./yarn-session.sh -n 2
  - ./yarn-session.sh -n 2 -d

- Tested resuming and stopping of the yarn session
  - ./yarn-session.sh -yid 
  - CTRL-C (disconnect) and "stop" (shutdown)

- Ran ./flink -m yarn-cluster -yn 2 ../examples/batch/WordCount.jar

On Wed, Aug 31, 2016 at 10:40 AM, Fabian Hueske  wrote:
> I compared the 1.1.2-rc1 branch with the 1.1.1 release tag and went over
> the diff.
>
> - No new dependencies were added
> - Versions of existing dependencies were not changed.
> - Did not notice anything that would block the release.
>
> +1 to release 1.1.2
>
> 2016-08-31 9:46 GMT+02:00 Robert Metzger :
>
>> +1 to release:
>> - Checked versions in quickstart in staging repository
>> - tested the staging repository against my "flink 1.1.0 hadoop1" test job.
>> Also tested dependency download etc.
>> - Checked the source artifact for binaries (manually)
>> - Looked a bit through some of the binaries
>>
>>
>> It would be good if somebody could test that everything is starting
>> correctly (start-local.sh, yarn-session.sh)
>>
>>
>> On Tue, Aug 30, 2016 at 11:30 PM, Robert Metzger 
>> wrote:
>>
>> > Dear Flink community,
>> >
>> > Please vote on releasing the following candidate as Apache Flink version
>> > 1.1.2.
>> >
>> > The commit to be voted on:
>> > f42f0c69 (http://git-wip-us.apache.org/repos/asf/flink/commit/f42f0c69)
>> >
>> >
>> > The release artifacts to be voted on can be found at:
>> > http://people.apache.org/~rmetzger/flink-1.1.2-rc1/
>> >
>> > The release artifacts are signed with the key with fingerprint D9839159:
>> > http://www.apache.org/dist/flink/KEYS
>> >
>> > The staging repository for this release can be found at:
>> > https://repository.apache.org/content/repositories/orgapacheflink-1103
>> >
>> > -
>> >
>> > The vote is open for the next 3 days and
>> > passes if a majority of at least three +1 PMC votes are cast.
>> >
>> >
>> > [ ] +1 Release this package as Apache Flink 1.1.2
>> > [ ] -1 Do not release this package, because ...
>> >
>> >
>> >
>>


[jira] [Created] (FLINK-4540) Detached job execution may prevent cluster shutdown

2016-08-31 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4540:
-

 Summary: Detached job execution may prevent cluster shutdown
 Key: FLINK-4540
 URL: https://issues.apache.org/jira/browse/FLINK-4540
 Project: Flink
  Issue Type: Bug
  Components: YARN Client
Affects Versions: 1.2.0, 1.1.2
Reporter: Maximilian Michels
Priority: Minor
 Fix For: 1.1.3, 1.2.0


There is a problem with the detached execution of jobs. This can prevent 
cluster shutdown 1) when eager jobs are executed, i.e. the job calls 
`collect()/count()`, and 2) when the user jar doesn't contain a job. 

1) For example, {{./flink -d -m yarn-cluster -yn 1 
../examples/batch/WordCount.jar}} will throw an exception and only disconnect 
the YarnClusterClient afterwards. In detached mode, the code assumes the 
cluster is shut down through the {{shutdownAfterJob}} method, which ensures that 
the YarnJobManager shuts down after the job completes. Due to the exception 
thrown when executing eager jobs, the JobManager never receives a job and thus 
never shuts down the cluster. 

2) The same problem also occurs in detached execution when the user jar doesn't 
contain a job. 

A good solution would be to defer cluster startup until the job has been fully 
assembled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Flink 1.1.2 (RC1)

2016-08-31 Thread Maximilian Michels
Found a minor bug for detached job submissions but I wouldn't cancel
the release for it: https://issues.apache.org/jira/browse/FLINK-4540

On Wed, Aug 31, 2016 at 2:37 PM, Maximilian Michels  wrote:
> +1 (binding)
>
> Tested Flink 1.1.2 Scala 2.11 Hadoop2
>
> - Ran ./flink run ../examples/streaming/Iteration.jar with
>   - ./start-local.sh
>   - ./start-cluster.sh
>   - ./yarn-session.sh -n 2
>   - ./yarn-session.sh -n 2 -d
>
> - Test resuming and stopping of yarn session
>   - ./yarn-session.sh -yid 
>   - CTRL-C (disconnect) and "stop" (shutdown)
>
> - Ran ./flink -m yarn-cluster -yn 2 ../examples/batch/WordCount.jar
>
> On Wed, Aug 31, 2016 at 10:40 AM, Fabian Hueske  wrote:
>> I compared the 1.1.2-rc1 branch with the 1.1.1 release tag and went over
>> the diff.
>>
>> - No new dependencies were added
>> - Versions of existing dependencies were not changed.
>> - Did not notice anything that would block the release.
>>
>> +1 to release 1.1.2
>>
>> 2016-08-31 9:46 GMT+02:00 Robert Metzger :
>>
>>> +1 to release:
>>> - Checked versions in quickstart in staging repository
>>> - tested the staging repository against my "flink 1.1.0 hadoop1" test job.
>>> Also tested dependency download etc.
>>> - Checked the source artifact for binaries (manually)
>>> - Looked a bit through some of the binaries
>>>
>>>
>>> It would be good if somebody could test that everything is starting
>>> correctly (start-local.sh, yarn-session.sh)
>>>
>>>
>>> On Tue, Aug 30, 2016 at 11:30 PM, Robert Metzger 
>>> wrote:
>>>
>>> > Dear Flink community,
>>> >
>>> > Please vote on releasing the following candidate as Apache Flink version
>>> > 1.1.2.
>>> >
>>> > The commit to be voted on:
>>> > f42f0c69 (http://git-wip-us.apache.org/repos/asf/flink/commit/f42f0c69)
>>> >
>>> >
>>> > The release artifacts to be voted on can be found at:
>>> > http://people.apache.org/~rmetzger/flink-1.1.2-rc1/
>>> >
>>> > The release artifacts are signed with the key with fingerprint D9839159:
>>> > http://www.apache.org/dist/flink/KEYS
>>> >
>>> > The staging repository for this release can be found at:
>>> > https://repository.apache.org/content/repositories/orgapacheflink-1103
>>> >
>>> > -
>>> >
>>> > The vote is open for the next 3 days and
>>> > passes if a majority of at least three +1 PMC votes are cast.
>>> >
>>> >
>>> > [ ] +1 Release this package as Apache Flink 1.1.2
>>> > [ ] -1 Do not release this package, because ...
>>> >
>>> >
>>> >
>>>


[jira] [Created] (FLINK-4541) Support for SQL NOT IN operator

2016-08-31 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4541:
---

 Summary: Support for SQL NOT IN operator
 Key: FLINK-4541
 URL: https://issues.apache.org/jira/browse/FLINK-4541
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther


This should work:

{code}
def main(args: Array[String]): Unit = {

  // set up execution environment
  val env = ExecutionEnvironment.getExecutionEnvironment
  val tEnv = TableEnvironment.getTableEnvironment(env)

  val input = env.fromElements(WC("hello", 1), WC("hello", 1), WC("ciao", 1))

  // register the DataSet as table "WordCount"
  tEnv.registerDataSet("WordCount", input, 'word, 'frequency)

  tEnv.registerTable("WordCount2",
    tEnv.fromDataSet(input, 'word, 'frequency).select('word).filter('word !== "hello"))

  // run a SQL query on the Table and retrieve the result as a new Table
  val table = tEnv.sql(
    "SELECT word, SUM(frequency) FROM WordCount WHERE word NOT IN (SELECT word FROM WordCount2) GROUP BY word")

  table.toDataSet[WC].print()
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4542) Add support for MULTISET type and operations

2016-08-31 Thread Timo Walther (JIRA)
Timo Walther created FLINK-4542:
---

 Summary: Add support for MULTISET type and operations
 Key: FLINK-4542
 URL: https://issues.apache.org/jira/browse/FLINK-4542
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther
Priority: Minor


Add the MULTISET type and add operations like:

MULTISET UNION, MULTISET UNION ALL, MULTISET EXCEPT, MULTISET EXCEPT ALL, 
MULTISET INTERSECT, MULTISET INTERSECT ALL, CARDINALITY, ELEMENT



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4543) Race Deadlock in SpilledSubpartitionViewTest

2016-08-31 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4543:
---

 Summary: Race Deadlock in SpilledSubpartitionViewTest
 Key: FLINK-4543
 URL: https://issues.apache.org/jira/browse/FLINK-4543
 Project: Flink
  Issue Type: Improvement
  Components: Network
Affects Versions: 1.1.2
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0


The test deadlocked (Java level deadlock) with the following stack traces:

{code}
Found one Java-level deadlock:
=
"pool-1-thread-2":
  waiting to lock monitor 0x7fec2c006168 (object 0xef661c20, a 
java.lang.Object),
  which is held by "IOManager reader thread #1"
"IOManager reader thread #1":
  waiting to lock monitor 0x7fec2c005ea8 (object 0xef62c8a8, a 
java.lang.Object),
  which is held by "pool-1-thread-2"

Java stack information for the threads listed above:
===
"pool-1-thread-2":
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO.notifyError(SpilledSubpartitionViewAsyncIO.java:309)
- waiting to lock <0xef661c20> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO.onAvailableBuffer(SpilledSubpartitionViewAsyncIO.java:261)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO.access$300(SpilledSubpartitionViewAsyncIO.java:42)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO$BufferProviderCallback.onEvent(SpilledSubpartitionViewAsyncIO.java:380)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO$BufferProviderCallback.onEvent(SpilledSubpartitionViewAsyncIO.java:366)
at 
org.apache.flink.runtime.io.network.util.TestPooledBufferProvider$PooledBufferProviderRecycler.recycle(TestPooledBufferProvider.java:135)
- locked <0xef62c8a8> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.buffer.Buffer.recycle(Buffer.java:118)
- locked <0xef9597c0> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.util.TestConsumerCallback$RecyclingCallback.onBuffer(TestConsumerCallback.java:72)
at 
org.apache.flink.runtime.io.network.util.TestSubpartitionConsumer.call(TestSubpartitionConsumer.java:87)
at 
org.apache.flink.runtime.io.network.util.TestSubpartitionConsumer.call(TestSubpartitionConsumer.java:39)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"IOManager reader thread #1":
at 
org.apache.flink.runtime.io.network.util.TestPooledBufferProvider$PooledBufferProviderRecycler.recycle(TestPooledBufferProvider.java:126)
- waiting to lock <0xef62c8a8> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.buffer.Buffer.recycle(Buffer.java:118)
- locked <0xefa016f0> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO.returnBufferFromIOThread(SpilledSubpartitionViewAsyncIO.java:275)
- locked <0xef661c20> (a java.lang.Object)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO.access$100(SpilledSubpartitionViewAsyncIO.java:42)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO$IOThreadCallback.requestSuccessful(SpilledSubpartitionViewAsyncIO.java:343)
at 
org.apache.flink.runtime.io.network.partition.SpilledSubpartitionViewAsyncIO$IOThreadCallback.requestSuccessful(SpilledSubpartitionViewAsyncIO.java:333)
at 
org.apache.flink.runtime.io.disk.iomanager.AsynchronousFileIOChannel.handleProcessedBuffer(AsynchronousFileIOChannel.java:199)
at 
org.apache.flink.runtime.io.disk.iomanager.BufferReadRequest.requestDone(AsynchronousFileIOChannel.java:435)
at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$ReaderThread.run(IOManagerAsync.java:408)

Found 1 deadlock.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[DISCUSS] Automated code quality check in Flink

2016-08-31 Thread Ivan Mushketyk
Hi!

Flink uses Travis and Jenkins to check if a new PR passes unit tests, but
there are other tools that can automatically check code quality like:
https://www.codacy.com/
https://codeclimate.com/

These tools promise to track test coverage, check code style, detect code
duplication and so on.
Since they are free for Open Source projects, it seems like it could be
quite beneficial to enable them.
Have you ever tried any of them? What do you think about this?

Best regards,
Ivan.


[jira] [Created] (FLINK-4544) TaskManager metrics are vulnerable to custom JMX bean installation

2016-08-31 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-4544:
---

 Summary: TaskManager metrics are vulnerable to custom JMX bean 
installation
 Key: FLINK-4544
 URL: https://issues.apache.org/jira/browse/FLINK-4544
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.1.2
Reporter: Stephan Ewen
 Fix For: 1.2.0, 1.1.3


The TaskManager's CPU load magic may fail when JMX providers are overwritten.

The TaskManager logic checks if the class 
{{com.sun.management.OperatingSystemMXBean}} is available. If yes, it assumes 
that the {{ManagementFactory.getOperatingSystemMXBean()}} is of that type. That 
is not necessarily the case.
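
A defensive check could look roughly like this (hypothetical sketch, not the actual 
TaskManager code):

{code}
java.lang.management.OperatingSystemMXBean bean =
    java.lang.management.ManagementFactory.getOperatingSystemMXBean();

if (bean instanceof com.sun.management.OperatingSystemMXBean) {
    // safe: the platform bean really is the com.sun.management variant
    double cpuLoad = ((com.sun.management.OperatingSystemMXBean) bean).getProcessCpuLoad();
    // ... report the gauge value ...
} else {
    // a custom JMX provider is installed: report the metric as unavailable instead of failing
}
{code}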

This is visible in the Cassandra tests, as Cassandra overrides the JMX provider 
- every heartbeat causes an exception that is logged (See below), flooding the 
log, killing the heartbeat message.

I would also suggest to move the entire metrics code out of the {{TaskManager}} 
class into a dedicated class {{TaskManagerJvmMetrics}}. That one can, with a 
static method, install the metrics into the TaskManager's metric group.


Sample stack trace when default platform beans are overridden:

{code}
23914 [flink-akka.actor.default-dispatcher-3] WARN  
org.apache.flink.runtime.taskmanager.TaskManager  - Error retrieving CPU Load 
through OperatingSystemMXBean
java.lang.IllegalArgumentException: object is not an instance of declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$3$$anonfun$getValue$2.apply(TaskManager.scala:2351)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$3$$anonfun$getValue$2.apply(TaskManager.scala:2351)
at scala.Option.map(Option.scala:145)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$3.getValue(TaskManager.scala:2351)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anon$3.getValue(TaskManager.scala:2348)
at 
com.codahale.metrics.json.MetricsModule$GaugeSerializer.serialize(MetricsModule.java:32)
at 
com.codahale.metrics.json.MetricsModule$GaugeSerializer.serialize(MetricsModule.java:20)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(MapSerializer.java:616)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:519)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:31)
at 
com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
at 
com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2444)
at 
com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:355)
at 
com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1442)
at 
com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:186)
at 
com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:171)
at 
com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
at 
com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3631)
at 
com.fasterxml.jackson.databind.ObjectMapper.writeValueAsBytes(ObjectMapper.java:3022)
at 
org.apache.flink.runtime.taskmanager.TaskManager.sendHeartbeatToJobManager(TaskManager.scala:1278)
at 
org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:309)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at 
org.apache.flink.runtime.testingUtils.TestingTaskManagerLike$$anonfun$handleTestingMessage$1.applyOrElse(TestingTaskManagerLike.scala:65)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
at 
org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at 
scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at 
org.apache.flink.runtime.LogMes

Re: Streaming - memory management

2016-08-31 Thread Vinay Patil
Hi Stephan,

Just wanted to jump into this discussion regarding state.

So do you mean that if we maintain user-defined state (for non-window
operators) and do not clear it explicitly, the data for that key remains in
RocksDB?

What happens in case of a checkpoint? I read in the documentation that after
a checkpoint the RocksDB data is pushed to the configured location
(HDFS, S3, or another FS), so for user-defined state does the data still
remain in RocksDB after the checkpoint?

Correct me if I have misunderstood this concept.

For one of our use cases we were going to use this, but since I read the above
part in the documentation we are going with Cassandra now (to store records and
query them for a special case).





Regards,
Vinay Patil

On Wed, Aug 31, 2016 at 4:51 AM, Stephan Ewen  wrote:

> In streaming, memory is mainly needed for state (key/value state). The
> exact representation depends on the chosen StateBackend.
>
> State is explicitly released: For windows, state is cleaned up
> automatically (firing / expiry), for user-defined state, keys have to be
> explicitly cleared (clear() method) or in the future will have the option
> to expire.
>
> The heavy work horse for streaming state is currently RocksDB, which
> internally uses native (off-heap) memory to keep the data.
>
> Does that help?
>
> Stephan
>
>
> On Tue, Aug 30, 2016 at 11:52 PM, Roshan Naik 
> wrote:
>
> > As per the docs, in Batch mode, dynamic memory allocation is avoided by
> > storing messages being processed in ByteBuffers via Unsafe methods.
> >
> > Couldn't find any docs describing mem mgmt in Streaming mode. So...
> >
> > - Am wondering if this is also the case with Streaming ?
> >
> > - If so, how does Flink detect that an object is no longer being used and
> > can be reclaimed for reuse once again ?
> >
> > -roshan
> >
>


Re: Streaming - memory management

2016-08-31 Thread Fabian Hueske
Hi Vinay,

if you use user-defined state, you have to manually clear it.
Otherwise, it will stay in the state backend (heap or RocksDB) until the
job goes down (planned or due to an OOM error).

This is esp. important to keep in mind, when using keyed state.
If you have an unbounded, evolving key space you will likely run
out-of-memory.
The job will constantly add state for each new key but won't be able to
clean up the state for "expired" keys.

You could implement such a clean-up mechanism if you implement a custom
stream operator.
However, this is a very low-level interface and requires a solid understanding
of internals like timestamps, watermarks, and the checkpointing
mechanism.

The community is currently working on a state expiry feature (state will be
discarded if not requested or updated for x minutes).

Regarding the second question: Does state remain local after checkpointing?
Yes, the local state is only copied to the remote FS (HDFS, S3, ...) but
remains in the operator. So the state is not gone after a checkpoint is
completed.

Hope this helps,
Fabian

2016-08-31 18:17 GMT+02:00 Vinay Patil :

> Hi Stephan,
>
> Just wanted to jump into this discussion regarding state.
>
> So do you mean that if we maintain user-defined state (for non-window
> operators), then if we do  not clear it explicitly will the data for that
> key remains in RocksDB.
>
> What happens in case of checkpoint ? I read in the documentation that after
> the checkpoint happens the rocksDB data is pushed to the desired location
> (hdfs or s3 or other fs), so for user-defined state does the data still
> remain in RocksDB after checkpoint ?
>
> Correct me if I have misunderstood this concept
>
> For one of our use we were going for this, but since I read the above part
> in documentation so we are going for Cassandra now (to store records and
> query them for a special case)
>
>
>
>
>
> Regards,
> Vinay Patil
>
> On Wed, Aug 31, 2016 at 4:51 AM, Stephan Ewen  wrote:
>
> > In streaming, memory is mainly needed for state (key/value state). The
> > exact representation depends on the chosen StateBackend.
> >
> > State is explicitly released: For windows, state is cleaned up
> > automatically (firing / expiry), for user-defined state, keys have to be
> > explicitly cleared (clear() method) or in the future will have the option
> > to expire.
> >
> > The heavy work horse for streaming state is currently RocksDB, which
> > internally uses native (off-heap) memory to keep the data.
> >
> > Does that help?
> >
> > Stephan
> >
> >
> > On Tue, Aug 30, 2016 at 11:52 PM, Roshan Naik 
> > wrote:
> >
> > > As per the docs, in Batch mode, dynamic memory allocation is avoided by
> > > storing messages being processed in ByteBuffers via Unsafe methods.
> > >
> > > Couldn't find any docs describing mem mgmt in Streaming mode. So...
> > >
> > > - Am wondering if this is also the case with Streaming ?
> > >
> > > - If so, how does Flink detect that an object is no longer being used
> and
> > > can be reclaimed for reuse once again ?
> > >
> > > -roshan
> > >
> >
>


Re: Streaming - memory management

2016-08-31 Thread Stephan Ewen
If you use RocksDB, you will not run into OutOfMemory errors.

On Wed, Aug 31, 2016 at 6:34 PM, Fabian Hueske  wrote:

> Hi Vinaj,
>
> if you use user-defined state, you have to manually clear it.
> Otherwise, it will stay in the state backend (heap or RocksDB) until the
> job goes down (planned or due to an OOM error).
>
> This is esp. important to keep in mind, when using keyed state.
> If you have an unbounded, evolving key space you will likely run
> out-of-memory.
> The job will constantly add state for each new key but won't be able to
> clean up the state for "expired" keys.
>
> You could implement a clean-up mechanism this if you implement a custom
> stream operator.
> However this is a very low level interface and requires solid understanding
> of the internals like timestamps, watermarks and the checkpointing
> mechanism.
>
> The community is currently working on a state expiry feature (state will be
> discarded if not requested or updated for x minutes).
>
> Regarding the second question: Does state remain local after checkpointing?
> Yes, the local state is only copied to the remote FS (HDFS, S3, ...) but
> remains in the operator. So the state is not gone after a checkpoint is
> completed.
>
> Hope this helps,
> Fabian
>
> 2016-08-31 18:17 GMT+02:00 Vinay Patil :
>
> > Hi Stephan,
> >
> > Just wanted to jump into this discussion regarding state.
> >
> > So do you mean that if we maintain user-defined state (for non-window
> > operators), then if we do  not clear it explicitly will the data for that
> > key remains in RocksDB.
> >
> > What happens in case of checkpoint ? I read in the documentation that
> after
> > the checkpoint happens the rocksDB data is pushed to the desired location
> > (hdfs or s3 or other fs), so for user-defined state does the data still
> > remain in RocksDB after checkpoint ?
> >
> > Correct me if I have misunderstood this concept
> >
> > For one of our use we were going for this, but since I read the above
> part
> > in documentation so we are going for Cassandra now (to store records and
> > query them for a special case)
> >
> >
> >
> >
> >
> > Regards,
> > Vinay Patil
> >
> > On Wed, Aug 31, 2016 at 4:51 AM, Stephan Ewen  wrote:
> >
> > > In streaming, memory is mainly needed for state (key/value state). The
> > > exact representation depends on the chosen StateBackend.
> > >
> > > State is explicitly released: For windows, state is cleaned up
> > > automatically (firing / expiry), for user-defined state, keys have to
> be
> > > explicitly cleared (clear() method) or in the future will have the
> option
> > > to expire.
> > >
> > > The heavy work horse for streaming state is currently RocksDB, which
> > > internally uses native (off-heap) memory to keep the data.
> > >
> > > Does that help?
> > >
> > > Stephan
> > >
> > >
> > > On Tue, Aug 30, 2016 at 11:52 PM, Roshan Naik 
> > > wrote:
> > >
> > > > As per the docs, in Batch mode, dynamic memory allocation is avoided
> by
> > > > storing messages being processed in ByteBuffers via Unsafe methods.
> > > >
> > > > Couldn't find any docs describing mem mgmt in Streaming mode. So...
> > > >
> > > > - Am wondering if this is also the case with Streaming ?
> > > >
> > > > - If so, how does Flink detect that an object is no longer being used
> > and
> > > > can be reclaimed for reuse once again ?
> > > >
> > > > -roshan
> > > >
> > >
> >
>


[jira] [Created] (FLINK-4545) Flink automatically manages TM network buffer

2016-08-31 Thread Zhenzhong Xu (JIRA)
Zhenzhong Xu created FLINK-4545:
---

 Summary: Flink automatically manages TM network buffer
 Key: FLINK-4545
 URL: https://issues.apache.org/jira/browse/FLINK-4545
 Project: Flink
  Issue Type: Wish
Reporter: Zhenzhong Xu



Currently, the number of network buffer per task manager is preconfigured and 
the memory is pre-allocated through taskmanager.network.numberOfBuffers config. 
In a Job DAG with shuffle phase, this number can go up very high depends on the 
TM cluster size. The formula for calculating the buffer count is documented 
here 
(https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-the-network-buffers).
  

#slots-per-TM^2 * #TMs * 4
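
For example, with 8 slots per TM and 20 TMs this already amounts to 8^2 * 20 * 4 = 5,120 buffers.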

In a standalone deployment, we may need to control the task manager cluster 
size dynamically and then leverage the upcoming Flink feature to support 
scaling job parallelism/rescaling at runtime. 
If the buffer count config is static at runtime and cannot be changed without 
restarting the task manager process, this may add latency and complexity to the 
scaling process. I am wondering if there is already any discussion around 
whether the network buffers should be automatically managed by Flink or at least 
whether some API should be exposed to allow them to be reconfigured. Let me know if there is any 
existing JIRA that I should follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re:[DISCUSS] Automated code quality check in Flink

2016-08-31 Thread 时某人
+1. Indeed, we should pay attention to code quality. The confusing
module list also needs a better design.
At 2016-08-31 22:37:35, "Ivan Mushketyk"  wrote:
>Hi!
>
>Flink uses Travis and Jenkins to check if a new PR passes unit tests, but
>there are other tools that can automatically check code quality like:
>https://www.codacy.com/
>https://codeclimate.com/
>
>These tools promise to track test coverage, check code style, detect code
>duplication and so on.
>Since they are free for Open Source projects, it seems like it could be
>quite beneficial to enable them.
>Have you ever tried any of them? What do you think about this?
>
>Best regards,
>Ivan.


[jira] [Created] (FLINK-4546) Remove STREAM keyword and use batch sql parser for stream jobs

2016-08-31 Thread Jark Wu (JIRA)
Jark Wu created FLINK-4546:
--

 Summary: Remove STREAM keyword and use batch sql parser for stream 
jobs
 Key: FLINK-4546
 URL: https://issues.apache.org/jira/browse/FLINK-4546
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Jark Wu
Assignee: Jark Wu


This issue is about unifying the Batch SQL and Stream SQL grammar, esp. removing the STREAM 
keyword from Stream SQL. 

Detailed discussion on the mailing list: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-thoughts-about-unify-Stream-SQL-and-Batch-SQL-grammer-td13060.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4547) Return same object when call connect method in AkkaRpcService using same address and same rpc gateway class

2016-08-31 Thread zhangjing (JIRA)
zhangjing created FLINK-4547:


 Summary: Return same object when call connect method in 
AkkaRpcService using same address and same rpc gateway class
 Key: FLINK-4547
 URL: https://issues.apache.org/jira/browse/FLINK-4547
 Project: Flink
  Issue Type: Sub-task
Reporter: zhangjing


Currently, every call to the connect method of the AkkaRpcService class with the same address 
and the same RPC gateway class returns a gateway object that is different from the previously 
returned ones (their equals and hashCode do not match). 
Maybe it is reasonable to return the same object when using the same address and the 
same gateway class.
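
A purely illustrative sketch of the proposed caching behaviour (hypothetical names, 
not the actual AkkaRpcService code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class GatewayCache {

    // one gateway object per (address, gateway class) pair
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public <C> C getOrConnect(String address, Class<C> gatewayClass, Supplier<C> connectFn) {
        String key = address + '#' + gatewayClass.getName();
        return (C) cache.computeIfAbsent(key, k -> connectFn.get());
    }
}
{code}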



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)