Hey,
I understand it's a bit late to bring this up, but we have a Hive
dialect bug for which the PR [1] is ready to merge. Could we include it
in 1.11.1? The issue is not a blocker, but I believe it's good to have
it in the bug-fix release.
[1] https://github.com/apache/fl
Xintong Song created FLINK-18620:
Summary: Unify behaviors of active resource managers
Key: FLINK-18620
URL: https://issues.apache.org/jira/browse/FLINK-18620
Project: Flink
Issue Type: Impro
Hi Thomas,
Thanks for the further profiling information, and glad to see we have
pinpointed the cause of the regression.
Actually, I was also suspicious of #snapshotState in the previous
discussions, since it can indeed block normal operator processing for a long time.
Based on
Sorry for the delay.
I confirmed that the regression is due to the sink (unsurprising, since
another job with the same consumer, but not the producer, runs as expected).
As promised, I did CPU profiling on the problematic application, which gives
more insight into the regression [1].
The screensho
I only quickly skimmed the Hadoop docs and found this (although it is
not documented very well, I might add). If this does not do the trick,
I'd suggest reaching out to the Hadoop project, since we're using their
S3 filesystem.
On 16/07/2020 19:32, nikita Balakrishnan wrote:
Hey Chesnay,
Than
Hey Chesnay,
Thank you for getting back to me on that! I tried setting that too, and it
still gives me the same exception. Is there something else that I'm missing?
I also have fs.s3a.bucket..server-side-encryption-algorithm=SSE-KMS
and fs.s3a.bucket..server-side-encryption.key set.
Is there no need to s
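Putting the pieces of this thread together, a hypothetical configuration fragment for the flink-s3-fs-hadoop filesystem might look like the following. The key names come from Hadoop's S3A documentation; the KMS key ARN is a placeholder, and whether these belong in flink-conf.yaml or in a Hadoop core-site.xml depends on the deployment.

```yaml
# Hadoop S3A settings forwarded to the S3A filesystem by the
# flink-s3-fs-hadoop plugin. The KMS key ARN below is a placeholder.
fs.s3a.server-side-encryption-algorithm: SSE-KMS
fs.s3a.server-side-encryption.key: arn:aws:kms:us-east-1:123456789012:key/example
fs.s3a.etag.checksum.enabled: true
```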
Seth Wiesman created FLINK-18619:
Summary: Update training to use WatermarkStrategy
Key: FLINK-18619
URL: https://issues.apache.org/jira/browse/FLINK-18619
Project: Flink
Issue Type: Improvem
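FLINK-18619 above concerns the WatermarkStrategy API. As a reference point, the bounded-out-of-orderness idea behind WatermarkStrategy.forBoundedOutOfOrderness can be sketched in a few lines of plain Python; this shows only the concept, not Flink code, and the names are illustrative.

```python
# Plain-Python sketch of bounded-out-of-orderness watermarking: after each
# event, emit a watermark asserting that no event older than
# (max timestamp seen so far - max_out_of_orderness) is still expected.

def watermarks(event_times, max_out_of_orderness):
    max_seen = float("-inf")
    out = []
    for t in event_times:
        max_seen = max(max_seen, t)
        out.append(max_seen - max_out_of_orderness)
    return out

print(watermarks([1, 4, 3, 7], 2))  # [-1, 2, 2, 5]
```

The out-of-order event with timestamp 3 does not lower the watermark, which is why late-but-bounded events can still be assigned to open windows.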
Hi,
Thanks a lot for your discussions.
I think Aljoscha makes good suggestions here! Those problematic APIs should
not be added to the new Python DataStream API.
There is only one item I want to add, based on Shuiqiang's reply:
I would also tend to keep the readTextFile() method. Apart from print(),
Xintong Song,
- Which version of Flink is used?*1.10*
- Which deployment mode is used? *Standalone*
- Which cluster mode is used? *Job*
- Do you mean you have a 4core16gb node for each task manager, and each
task manager has 4 slots? *Yeah*. *There are totally 3 taskmanagers in
Thank you all for the discussion!
Here are my comments:
2) I agree we should support Expression as a computed column, but I'm in
favor of Leonard's point that we could also support a SQL string
expression as a computed column, because that keeps it aligned with the
DDL. The concern for Expression i
Chesnay Schepler created FLINK-18618:
Summary: Docker e2e tests are failing on CI
Key: FLINK-18618
URL: https://issues.apache.org/jira/browse/FLINK-18618
Project: Flink
Issue Type: Improv
dongjie.shi created FLINK-18617:
Summary: run flink with openjdk 11 get
java.lang.UnsupportedOperationException: sun.misc.Unsafe or
java.nio.DirectByteBuffer.(long, int) not available error
Key: FLINK-18617
URL: h
dongjie.shi created FLINK-18615:
Summary: run flink with openjdk11 get
java.lang.UnsupportedOperationException: sun.misc.Unsafe or
java.nio.DirectByteBuffer.(long, int) not available
Key: FLINK-18615
URL: https://
Jingsong Lee created FLINK-18616:
Summary: Add SHOW CURRENT DDLs
Key: FLINK-18616
URL: https://issues.apache.org/jira/browse/FLINK-18616
Project: Flink
Issue Type: New Feature
Compo
Hi Xuanna,
Thanks for the detailed design doc; it clearly describes how the API looks
and how it interacts with the Flink runtime.
However, the part that relates to the SQL optimizer is somewhat unclear. To
be more precise, I have the following questions:
1. How do you identify the CachedTable? I can imagine
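As an illustration of what question 1 is asking: one hypothetical approach (not necessarily what the design doc proposes) is to identify a CachedTable by a digest of its normalized logical plan, so that structurally identical queries map to the same cache entry. The names here are made up for the sketch.

```python
# Hypothetical sketch: identify a cached table by a digest of its logical
# plan, so two structurally identical plans share one cache identity.
import hashlib

def plan_digest(plan: str) -> str:
    # Normalize whitespace so formatting differences don't change identity.
    normalized = " ".join(plan.split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

a = plan_digest("Project(a, b)\n  Filter(a > 1)\n    Scan(t)")
b = plan_digest("Project(a, b) Filter(a > 1) Scan(t)")
assert a == b  # same logical plan, same cache identity
```

Whether identity should be purely structural (as above) or tied to a specific submission is exactly the kind of question the doc would need to settle.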
Roman Khachatryan created FLINK-18614:
Summary: Performance regression 2020.07.13 (most benchmarks)
Key: FLINK-18614
URL: https://issues.apache.org/jira/browse/FLINK-18614
Project: Flink
Hi Aljoscha,
Thank you for your valuable comments! I agree with you that there is room
for optimization in the existing API and that it can be applied to the
Python DataStream API implementation.
Based on your comments, I have summarized them into the following parts:
1. SingleOutputStreamOperator and
Thanks for the discussion.
The Descriptor lacks watermark support, and the computed-column definition is too verbose.
1) +1 for just `column(...)`
2) +1 for being consistent with Table API, the Java Table API should be
Expression DSL. We don't need pure string support, users should just use
DDL instead. I think this i
hehuiyuan created FLINK-18613:
Summary: How to support retract & upsert sink for a TableSink?
Key: FLINK-18613
URL: https://issues.apache.org/jira/browse/FLINK-18613
Project: Flink
Issue Type:
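FLINK-18613's question is about the difference between the retract and upsert changelog modes for a sink. The two encodings can be sketched in plain Python; this is conceptual only, not the Flink TableSink interface, and all names are illustrative.

```python
# Conceptual sketch of the two changelog encodings a sink can consume.

def apply_retract(changes):
    """Retract mode: each change is (is_add, row); an update is encoded as
    a retraction of the old row followed by an addition of the new row."""
    table = []
    for is_add, row in changes:
        if is_add:
            table.append(row)
        else:
            table.remove(row)  # retract a previously emitted row
    return table

def apply_upsert(changes):
    """Upsert mode: each change is (key, row); rows with the same key
    overwrite each other, and row=None deletes the key."""
    table = {}
    for key, row in changes:
        if row is None:
            table.pop(key, None)
        else:
            table[key] = row
    return list(table.values())

# Running word count where "a" is seen twice:
retract = [(True, ("a", 1)), (False, ("a", 1)), (True, ("a", 2))]
upsert = [("a", ("a", 1)), ("a", ("a", 2))]
print(apply_retract(retract))  # [('a', 2)]
print(apply_upsert(upsert))    # [('a', 2)]
```

Upsert mode needs a unique key but carries fewer messages per update, which is the usual trade-off between the two.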
Please try configuring:
fs.s3a.etag.checksum.enabled: true
On 16/07/2020 03:11, nikita Balakrishnan wrote:
Hello team,
I’m developing a system where we are trying to sink to an immutable s3
bucket. This bucket has server side encryption set as KMS. The DataStream
sink works perfectly fine wh