zhangjing created FLINK-4547:
--------------------------------
Summary: Return the same object when the connect method in
AkkaRpcService is called with the same address and the same RPC gateway class
Key: FLINK-4547
URL: https://issues.apache.org/jira/browse/FLINK-4547
Jark Wu created FLINK-4546:
--------------------------------
Summary: Remove STREAM keyword and use batch SQL parser for stream
jobs
Key: FLINK-4546
URL: https://issues.apache.org/jira/browse/FLINK-4546
Project: Flink
Issue Type:
+1. Indeed, we should pay attention to code quality. The confusing
module list also needs a better design.
At 2016-08-31 22:37:35, "Ivan Mushketyk" wrote:
>Hi!
>
>Flink uses Travis and Jenkins to check if new PRs pass unit tests, but
>there are other tools that can automatically check code quality
Zhenzhong Xu created FLINK-4545:
--------------------------------
Summary: Flink automatically manages TM network buffer
Key: FLINK-4545
URL: https://issues.apache.org/jira/browse/FLINK-4545
Project: Flink
Issue Type: Wish
If you use RocksDB, you will not run into OutOfMemory errors.
On Wed, Aug 31, 2016 at 6:34 PM, Fabian Hueske wrote:
> Hi Vinaj,
>
> if you use user-defined state, you have to manually clear it.
> Otherwise, it will stay in the state backend (heap or RocksDB) until the
> job goes down (planned or
Hi Vinaj,
if you use user-defined state, you have to manually clear it.
Otherwise, it will stay in the state backend (heap or RocksDB) until the
job goes down (planned or due to an OOM error).
This is esp. important to keep in mind, when using keyed state.
If you have an unbounded, evolving key s
Hi Stephan,
Just wanted to jump into this discussion regarding state.
So do you mean that if we maintain user-defined state (for non-window
operators) and do not clear it explicitly, the data for that key remains
in RocksDB?
What happens in the case of a checkpoint? I read in the documen
Stephan Ewen created FLINK-4544:
--------------------------------
Summary: TaskManager metrics are vulnerable to custom JMX bean
installation
Key: FLINK-4544
URL: https://issues.apache.org/jira/browse/FLINK-4544
Project: Flink
Hi!
Flink uses Travis and Jenkins to check if new PRs pass unit tests, but
there are other tools that can automatically check code quality like:
https://www.codacy.com/
https://codeclimate.com/
These tools promise to track test coverage, check code style, detect code
duplication and so on.
Since
Stephan Ewen created FLINK-4543:
--------------------------------
Summary: Race Deadlock in SpilledSubpartitionViewTest
Key: FLINK-4543
URL: https://issues.apache.org/jira/browse/FLINK-4543
Project: Flink
Issue Type: Improvement
Timo Walther created FLINK-4542:
--------------------------------
Summary: Add support for MULTISET type and operations
Key: FLINK-4542
URL: https://issues.apache.org/jira/browse/FLINK-4542
Project: Flink
Issue Type: Improvement
Timo Walther created FLINK-4541:
--------------------------------
Summary: Support for SQL NOT IN operator
Key: FLINK-4541
URL: https://issues.apache.org/jira/browse/FLINK-4541
Project: Flink
Issue Type: Improvement
Found a minor bug in detached job submissions, but I wouldn't cancel
the release for it: https://issues.apache.org/jira/browse/FLINK-4540
On Wed, Aug 31, 2016 at 2:37 PM, Maximilian Michels wrote:
> +1 (binding)
>
> Tested Flink 1.1.2 Scala 2.11 Hadoop2
>
> - Ran ./flink run ../examples/streaming
Maximilian Michels created FLINK-4540:
--------------------------------
Summary: Detached job execution may prevent cluster shutdown
Key: FLINK-4540
URL: https://issues.apache.org/jira/browse/FLINK-4540
Project: Flink
I
+1 (binding)
Tested Flink 1.1.2 Scala 2.11 Hadoop2
- Ran ./flink run ../examples/streaming/Iteration.jar with
  - ./start-local.sh
  - ./start-cluster.sh
  - ./yarn-session.sh -n 2
  - ./yarn-session.sh -n 2 -d
- Tested resuming and stopping of yarn session
  - ./yarn-session.sh -yid
  - CTRL-C (
Stephan Ewen created FLINK-4539:
--------------------------------
Summary: Duplicate/inconsistent logic for physical memory size in
classes "Hardware" and "EnvironmentInformation"
Key: FLINK-4539
URL: https://issues.apache.org/jira/browse/FLINK-4539
Maximilian Michels created FLINK-4538:
--------------------------------
Summary: Implement slot allocation protocol with JobMaster
Key: FLINK-4538
URL: https://issues.apache.org/jira/browse/FLINK-4538
Project: Flink
Iss
Thanks,
with if (translator == null || !applyCanTranslate(transform, node,
translator)) everything works as expected.
Regards,
Alexey Diomin
2016-08-31 14:12 GMT+04:00 Aljoscha Krettek :
> Ah I see, an unbounded source, such as the Kafka source does not work in
> batch mode (which streamStreamin
Maximilian Michels created FLINK-4537:
--------------------------------
Summary: ResourceManager registration with JobManager
Key: FLINK-4537
URL: https://issues.apache.org/jira/browse/FLINK-4537
Project: Flink
Issue Ty
Ah, I see: an unbounded source, such as the Kafka source, does not work in
batch mode (which setStreaming(false) enables). The code should work in
streaming mode if you apply some window that is compatible with the
side-input window to the main input.
I think the code in streaming still works bec
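To make that concrete, here is a minimal Beam sketch of windowing the main
input compatibly with a windowed side input. The class name, element types,
and the one-minute fixed window are illustrative assumptions, not from this
thread, and it uses the current Beam API rather than the 2016 one:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;
import org.joda.time.Duration;

public class CompatibleWindows {

  public static PCollection<String> withSide(PCollection<String> main,
                                             PCollection<Integer> side) {
    // Window the side input; the view is materialized per window.
    final PCollectionView<Iterable<Integer>> view = side
        .apply(Window.<Integer>into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(View.<Integer>asIterable());

    // The main input must carry a window the runner can map onto the
    // side input's window; here both use the same fixed window.
    return main
        .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(ParDo.of(new DoFn<String, String>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            int sum = 0;
            for (int i : c.sideInput(view)) {
              sum += i;
            }
            c.output(c.element() + ":" + sum);
          }
        }).withSideInputs(view));
  }
}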
Program to reproduce:
https://gist.github.com/xhumanoid/d784a4463a45e68acb124709a521156e
1) options.setStreaming(false); - we get an NPE and I can't understand how
the code works
2) options.setStreaming(true); - the pipeline compiles (it still has an
error, but that's my incorrect use of windows)
2016-
Hi
If we just change the condition to translator != null, then the next line
( applyStreamingTransform(transform, node, translator); ) will cause an NPE.
That is the main reason I don't understand the code:
x = null;
if (x == null && f1_null_value_forbid(x)) { .. }
f2_null_value_forbid(x);
Changing (x == null) => (x != null) does not help, because f2 is still
called with a null x.
In streaming, memory is mainly needed for state (key/value state). The
exact representation depends on the chosen StateBackend.
State is explicitly released: for windows, state is cleaned up
automatically (firing / expiry); for user-defined state, keys have to be
explicitly cleared (clear() method
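To illustrate the clear() call Stephan mentions, here is a minimal Flink
sketch of user-defined keyed state that is explicitly released. The Event
type and its end-of-key flag are hypothetical, and the default-value
ValueStateDescriptor constructor matches the 1.1.x API:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class PerKeyCounter extends RichFlatMapFunction<PerKeyCounter.Event, Long> {

    // Hypothetical input type: a key plus an end-of-key marker.
    public static class Event {
        public String key;
        public boolean last;
    }

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        // One entry per key, held in the configured state backend
        // (heap or RocksDB) until it is explicitly cleared.
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class, 0L));
    }

    @Override
    public void flatMap(Event event, Collector<Long> out) throws Exception {
        long updated = count.value() + 1;
        if (event.last) {
            out.collect(updated);
            count.clear(); // releases this key's entry from the backend
        } else {
            count.update(updated);
        }
    }
}

Applied after keyBy("key"), every key that never sees last == true keeps its
entry in the backend until the job goes down, which is exactly the unbounded
key situation described above.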
Hi,
I think this is more suited for the Beam dev list. Nevertheless, I think
this is a coding error and the condition should be
if (translator != null && !applyCanTranslate(transform, node, translator))
With what program did you encounter an NPE? It seems to me that this should
rarely happen, at l
Hi
Sorry if I picked the wrong mailing list.
After BEAM-102 was resolved, FlinkStreamingPipelineTranslator has this code
in visitPrimitiveTransform:
if (translator == null && applyCanTranslate(transform, node, translator)) {
LOG.info(node.getTransform().getClass().toString());
throw new Unsuppo
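For readers skimming the thread, a self-contained sketch (stand-in methods,
not the actual Beam code) of why the original condition lets a null
translator through, and the corrected guard that Alexey confirms above:

public class TranslatorGuardSketch {

    // Stand-in for the real capability check; assume it needs a
    // non-null translator, as the real one does.
    static boolean applyCanTranslate(Object transform, Object node, Object translator) {
        return true;
    }

    // Stand-in for the real translation step; dereferences the translator.
    static void applyStreamingTransform(Object transform, Object node, Object translator) {
        translator.getClass();
    }

    static void visitPrimitiveTransform(Object transform, Object node, Object translator) {
        // Buggy original: `translator == null && applyCanTranslate(...)`.
        // A null translator only triggers the throw if applyCanTranslate(null)
        // returns true; otherwise execution falls through and
        // applyStreamingTransform NPEs. Flipping it to `translator != null &&
        // ...` (the first suggestion in this thread) has the same flaw: a null
        // translator skips the throw and NPEs on the next line.
        //
        // Corrected guard, as confirmed earlier in this digest:
        if (translator == null || !applyCanTranslate(transform, node, translator)) {
            throw new UnsupportedOperationException(
                    "Transform " + transform + " is currently not supported.");
        }
        applyStreamingTransform(transform, node, translator);
    }
}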
I compared the 1.1.2-rc1 branch with the 1.1.1 release tag and went over
the diff.
- No new dependencies were added
- Versions of existing dependencies were not changed.
- Did not notice anything that would block the release.
+1 to release 1.1.2
2016-08-31 9:46 GMT+02:00 Robert Metzger :
> +1 t
+1 to release:
- Checked versions in quickstart in staging repository
- tested the staging repository against my "flink 1.1.0 hadoop1" test job.
Also tested dependency download etc.
- Checked the source artifact for binaries (manually)
- Looked a bit through some of the binaries
It would be good i