Hi All,
I'm hereby opening a vote for FLIP-272 Generalized delegation token support.
The related documents can be found here:
- FLIP on wiki: [1]
- Discussion thread: [2]
Voting will be open for at least 72 hours (since the weekend is involved,
EOB Monday is planned).
BR,
G
[1]
https://cwiki.apache.
Cross-posting the answer from SO:
|BroadcastState| is an operator state, not a |KeyedState|. The referenced
docs refer to |KeyedState|:
Compression works on the granularity of key-groups in keyed state,
Probably the docs could be more explicit about this behaviour.
Unfortunately, as far as I know
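For illustration, a minimal sketch of the distinction (assuming the DataStream API; the descriptor names are made up for this example): snapshot compression is set on the ExecutionConfig and, per the quoted docs, applies per key-group to keyed state, while broadcast state is declared via a MapStateDescriptor and lives in operator state.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateCompressionSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Snapshot compression is applied per key-group, i.e. to keyed state.
        env.getConfig().setUseSnapshotCompression(true);

        // Keyed state (e.g. used inside a KeyedProcessFunction): covered by
        // the key-group-granular compression the docs describe.
        ValueStateDescriptor<Long> keyedCounter =
                new ValueStateDescriptor<>("counter", Types.LONG);

        // Broadcast state is operator state, so the key-group note does not apply to it.
        MapStateDescriptor<String, String> broadcastRules =
                new MapStateDescriptor<>("rules", Types.STRING, Types.STRING);
    }
}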
Hi Ferenc,
I think you're good to go, since there were no comments. Do let us know if
you need any help :)
Thanks,
Martijn
On Mon, Oct 24, 2022 at 7:47 PM Ferenc Csaky
wrote:
> Hi,
>
> just pinging this thread in case someone missed it and has any opinion
> about the discussed actions.
>
> Be
Martijn Visser created FLINK-30050:
--
Summary: [Umbrella] Externalize Flink connectors
Key: FLINK-30050
URL: https://issues.apache.org/jira/browse/FLINK-30050
Project: Flink
Issue Type: Impro
Martijn Visser created FLINK-30051:
--
Summary: Create repository for Kafka connector
Key: FLINK-30051
URL: https://issues.apache.org/jira/browse/FLINK-30051
Project: Flink
Issue Type: Sub-tas
Martijn Visser created FLINK-30052:
--
Summary: Move existing Kafka connector code from Flink repo to
dedicated Kafka repo
Key: FLINK-30052
URL: https://issues.apache.org/jira/browse/FLINK-30052
Projec
Martijn Visser created FLINK-30054:
--
Summary: Move existing Pulsar connector code from Flink repo to
dedicated Pulsar repo
Key: FLINK-30054
URL: https://issues.apache.org/jira/browse/FLINK-30054
Proj
Martijn Visser created FLINK-30053:
--
Summary: Create and initialize repository for Pulsar connector
Key: FLINK-30053
URL: https://issues.apache.org/jira/browse/FLINK-30053
Project: Flink
Iss
Martijn Visser created FLINK-30055:
--
Summary: Move existing Cassandra connector code from Flink repo to
dedicated Cassandra repo
Key: FLINK-30055
URL: https://issues.apache.org/jira/browse/FLINK-30055
Sergey Nuyanzin created FLINK-30056:
---
Summary: Make polling for metadata no more than specified timeout
by using new Consumer#poll(Duration)
Key: FLINK-30056
URL: https://issues.apache.org/jira/browse/FLINK-300
Hi All,
+1 to go. Since we are refurbishing the HBase area, maybe we can move the
token provider into the HBase base project.
This would fit into the high-level effort to extract everything into
external connectors. If you do this and face any issues, just ping me :)
BR,
G
On Thu, Nov 17, 2022 at 9
Martijn Visser created FLINK-30057:
--
Summary: Create and initialize repository for Google Cloud PubSub
connector
Key: FLINK-30057
URL: https://issues.apache.org/jira/browse/FLINK-30057
Project: Flin
Martijn Visser created FLINK-30058:
--
Summary: Move existing Google PubSub connector code from Flink
repo to dedicated Google PubSub repo
Key: FLINK-30058
URL: https://issues.apache.org/jira/browse/FLINK-30058
Martijn Visser created FLINK-30059:
--
Summary: Create and initialize repository for JDBC connector
Key: FLINK-30059
URL: https://issues.apache.org/jira/browse/FLINK-30059
Project: Flink
Issue
Martijn Visser created FLINK-30061:
--
Summary: Create and initialize repository for HBase connector
Key: FLINK-30061
URL: https://issues.apache.org/jira/browse/FLINK-30061
Project: Flink
Issu
Martijn Visser created FLINK-30062:
--
Summary: Move existing HBase connector code from Flink repo to
dedicated HBase repo
Key: FLINK-30062
URL: https://issues.apache.org/jira/browse/FLINK-30062
Projec
Martijn Visser created FLINK-30060:
--
Summary: Move existing JDBC connector code from Flink repo to
dedicated JDBC repo
Key: FLINK-30060
URL: https://issues.apache.org/jira/browse/FLINK-30060
Project:
Martijn Visser created FLINK-30064:
--
Summary: Move existing Hive connector code from Flink repo to
dedicated Hive repo
Key: FLINK-30064
URL: https://issues.apache.org/jira/browse/FLINK-30064
Project:
Martijn Visser created FLINK-30063:
--
Summary: Create and initialize repository for Hive connector
Key: FLINK-30063
URL: https://issues.apache.org/jira/browse/FLINK-30063
Project: Flink
Issue
Martijn Visser created FLINK-30065:
--
Summary: Move Firehose connector code from Flink repo to dedicated
AWS repo
Key: FLINK-30065
URL: https://issues.apache.org/jira/browse/FLINK-30065
Project: Flink
Martijn Visser created FLINK-30066:
--
Summary: Move Kinesis connector code from Flink repo to dedicated
AWS repo
Key: FLINK-30066
URL: https://issues.apache.org/jira/browse/FLINK-30066
Project: Flink
Chesnay Schepler created FLINK-30067:
Summary: DelegatingConfiguration#set should return itself
Key: FLINK-30067
URL: https://issues.apache.org/jira/browse/FLINK-30067
Project: Flink
Issu
Piotr Nowojski created FLINK-30068:
--
Summary: Allow users to configure what to do with errors while
committing transactions during recovery in KafkaSink
Key: FLINK-30068
URL: https://issues.apache.org/jira/browse
Juntao Hu created FLINK-30069:
-
Summary: Expected prune behavior for matches with same priority
Key: FLINK-30069
URL: https://issues.apache.org/jira/browse/FLINK-30069
Project: Flink
Issue Type:
Piotr Nowojski created FLINK-30070:
--
Summary: Create savepoints without side effects
Key: FLINK-30070
URL: https://issues.apache.org/jira/browse/FLINK-30070
Project: Flink
Issue Type: New Fe
Hi,
I've just realized that KafkaSink seems to be missing a feature that was
present in FlinkKafkaProducer, which makes it hard to use with
savepoints [1].
Best, Piotrek
[1] https://issues.apache.org/jira/browse/FLINK-30068
On Fri, 11 Nov 2022 at 13:53, Jing Ge wrote:
> Hi all,
>
> Th
Jane Chan created FLINK-30071:
-
Summary: Throw exception at compile time when sequence field does
not exist
Key: FLINK-30071
URL: https://issues.apache.org/jira/browse/FLINK-30071
Project: Flink
Nico Kruber created FLINK-30072:
---
Summary: Cannot assign instance of SerializedLambda to field
KeyGroupStreamPartitioner.keySelector
Key: FLINK-30072
URL: https://issues.apache.org/jira/browse/FLINK-30072
Roman Khachatryan created FLINK-30073:
-
Summary: Managed memory can be wasted if rocksdb memory is
fixed-per-slot
Key: FLINK-30073
URL: https://issues.apache.org/jira/browse/FLINK-30073
Project: F
Hi all,
I was speaking at a meetup yesterday about Apache Flink.
There is a Google Presentation that was created earlier and has never
been updated. As I had to come up with a presentation anyhow, I used this
one and updated it. See the result in [1].
First, I'd like to get feedback on the updated p
Chesnay Schepler created FLINK-30074:
Summary: ES e2e tests are never run
Key: FLINK-30074
URL: https://issues.apache.org/jira/browse/FLINK-30074
Project: Flink
Issue Type: Technical Debt
I agree, the current calculation logic is already complicated.
I just think that not using managed memory complicates the memory model
even further.
But as I mentioned earlier, both approaches have their pros and cons, so
I'll update the proposal to use unmanaged memory.
Thanks!
Regards,
Roman
Hi Jing,
Thanks for opening the discussion. I am not sure we are ready to
remove FlinkKafkaConsumer.
The reason is that for existing users of FlinkKafkaConsumer who rely
on KafkaDeserializationSchema::isEndOfStream(),
there is currently no migration path for them to use KafkaSource.
This i
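For illustration, a minimal sketch (hypothetical schema and sentinel value) of the kind of isEndOfStream() usage being referred to: FlinkKafkaConsumer stops consuming once this returns true, and the thread's point is that KafkaSource has no direct per-record equivalent.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.nio.charset.StandardCharsets;

// Hypothetical schema: stops the consumer when a sentinel "END" record arrives.
public class StringWithEndMarkerSchema implements KafkaDeserializationSchema<String> {

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return new String(record.value(), StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // FlinkKafkaConsumer stops reading the partition once this returns true.
        return "END".equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}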
Hi Jing,
I realized that there is a missing feature with KafkaSource that might
prevent existing users of FlinkKafkaConsumer from migrating to
KafkaSource.
I have put more details in the discussion thread.
Thanks,
Dong
On Wed, Nov 16, 2022 at 12:37 AM Jing Ge wrote:
> Hi,
>
> As discus
Hi Gyula,
If I understand correctly, this autopilot proposal is an experimental
feature and its configs/metrics are not mature enough to provide backward
compatibility yet. And the proposal provides high-level ideas of the
algorithm, but it is probably too complicated to explain end-to-end.
On
Hi Dong!
This is not an experimental feature proposal. The implementation of the
prototype is still in an experimental phase, but by the time the FLIP,
initial prototype, and review are done, this should be in a good, stable
first version.
This proposal is pretty general as autoscalers/tuners get as f
Hi Fabian,
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PR
- Started a cluster and the Flink SQL client, ran multiple jobs/statements
On Wed, Nov 16, 2022 at 10:24 AM
Hey Mark,
Apache Flink doesn't support Golang, but you can look at Beam's Golang SDK:
https://beam.apache.org/documentation/sdks/go/. Beam jobs can use Apache
Flink as a runner: https://beam.apache.org/documentation/runners/flink/
On Wed, Nov 16, 2022 at 8:47 PM Mark Lee wrote:
> Hi,
>
> I fo
Hi Dawid,
Thanks for getting back to me.
And yes, I read "Compression works on the granularity of key-groups in keyed
state" as meaning "When compressing keyed state, it's done per key-group" and
not "Compression only works on keyed state" :)
I agree that "KeyedState should be preferred in maj
Thanks, Roman~
Best,
Xintong
On Thu, Nov 17, 2022 at 10:56 PM Khachatryan Roman <
khachatryan.ro...@gmail.com> wrote:
> I agree, the current calculation logic is already complicated.
> I just think that not using managed memory complicates the memory model
> even further.
>
> But as I mentio
zhanglu153 created FLINK-30075:
--
Summary: Failed to load data to the cache after the hive lookup
join task is restarted
Key: FLINK-30075
URL: https://issues.apache.org/jira/browse/FLINK-30075
Project: Fl
zck created FLINK-30076:
---
Summary: hive join mysql error
Key: FLINK-30076
URL: https://issues.apache.org/jira/browse/FLINK-30076
Project: Flink
Issue Type: Bug
Components: Table SQL / Runtime
hanjie created FLINK-30077:
--
Summary: k8s jobmanager pod repeated restart
Key: FLINK-30077
URL: https://issues.apache.org/jira/browse/FLINK-30077
Project: Flink
Issue Type: Improvement
Com
Xuannan Su created FLINK-30078:
--
Summary: Temporal join should finish when the left table is
bounded and finished
Key: FLINK-30078
URL: https://issues.apache.org/jira/browse/FLINK-30078
Project: Flink
Hi, dev.
I want to start a discussion about FLIP-273: Improve Catalog API to
Support ALTER TABLE syntax [1]. The motivation of the FLIP is that with the
current Catalog API it is difficult for some catalogs to alter a table. For
example, the Catalog only exposes
void alterTable(ObjectPath tablePath, C
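For readers skimming the thread, here is a rough sketch of the existing signature being quoted (cut off above) next to one possible change-list-based shape. TableChangeSketch is a made-up placeholder for illustration only, not necessarily what the FLIP proposes.

import java.util.List;

import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.TableNotExistException;

// Sketch only: the existing method hands the catalog the complete new table,
// so the catalog has to diff it against the old one to find what changed.
public interface AlterTableSketch {

    // Existing API shape (current Catalog interface): whole new table, no change list.
    void alterTable(ObjectPath tablePath, CatalogBaseTable newTable, boolean ignoreIfNotExists)
            throws TableNotExistException, CatalogException;

    // Hypothetical change-based variant: the planner passes the individual
    // changes (add/drop/rename column, set option, ...) explicitly.
    void alterTable(
            ObjectPath tablePath,
            CatalogBaseTable newTable,
            List<TableChangeSketch> tableChanges,
            boolean ignoreIfNotExists)
            throws TableNotExistException, CatalogException;

    // Placeholder for whatever change representation the FLIP settles on.
    interface TableChangeSketch {}
}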
Mingliang Liu created FLINK-30079:
-
Summary: Stop using deprecated TM options in doc
Key: FLINK-30079
URL: https://issues.apache.org/jira/browse/FLINK-30079
Project: Flink
Issue Type: Improve
Jingsong Lee created FLINK-30080:
Summary: Introduce public programming api and dependency jar for
table store
Key: FLINK-30080
URL: https://issues.apache.org/jira/browse/FLINK-30080
Project: Flink
Mingliang Liu created FLINK-30081:
-
Summary: Local executor can not accept different
jvm-overhead.min/max values
Key: FLINK-30081
URL: https://issues.apache.org/jira/browse/FLINK-30081
Project: Flink
Jingsong Lee created FLINK-30082:
Summary: Enable write-buffer-spillable by default only for object
storage
Key: FLINK-30082
URL: https://issues.apache.org/jira/browse/FLINK-30082
Project: Flink