Roman Khachatryan created FLINK-34417:
Summary: Add JobID to logging MDC
Key: FLINK-34417
URL: https://issues.apache.org/jira/browse/FLINK-34417
Project: Flink
Issue Type: Improvement
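To illustrate what "Add JobID to logging MDC" is about, here is a minimal, self-contained sketch of the MDC (Mapped Diagnostic Context) pattern. It uses a ThreadLocal map as a stand-in for SLF4J's real MDC class, and the key name `flink.jobId` and class name `MdcSketch` are invented for illustration, not Flink's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for SLF4J's MDC: a thread-local key/value context that a logging
// layout can read, so every log line emitted while handling a job carries
// the JobID without each call site passing it explicitly.
public class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CTX.get().put(key, value); }
    static String get(String key) { return CTX.get().get(key); }
    static void remove(String key) { CTX.get().remove(key); }

    // Simulates a log appender that prefixes messages with the job id.
    static void log(String message) {
        String jobId = get("flink.jobId");
        System.out.println("[jobId=" + (jobId == null ? "?" : jobId) + "] " + message);
    }

    public static void main(String[] args) {
        put("flink.jobId", "a1b2c3d4");
        log("Checkpoint 17 completed");   // prints [jobId=a1b2c3d4] Checkpoint 17 completed
        remove("flink.jobId");
        log("No job context here");       // prints [jobId=?] No job context here
    }
}
```

With the real SLF4J MDC, the log pattern (e.g. `%X{...}` in Log4j/Logback layouts) pulls the value in, so the improvement is mostly about setting and clearing the key at the right points in the runtime.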
Hello devs,
I would like to start a discussion regarding Apache Ozone FS support. The
jira [1] has been stale for quite a while, but supporting it with some
limitations could be done with minimal effort.
Ozone does not have a truncate() implementation, so it falls into the same
category as Hadoop < 2.7 [2], on Datast
Matthias Pohl created FLINK-34416:
Summary: "Local recovery and sticky scheduling end-to-end test"
still doesn't work with AdaptiveScheduler
Key: FLINK-34416
URL: https://issues.apache.org/jira/browse/FLINK-34416
Martijn Visser created FLINK-34415:
Summary: Move away from Kafka-Zookeeper based tests in favor of
Kafka-KRaft
Key: FLINK-34415
URL: https://issues.apache.org/jira/browse/FLINK-34415
Project: Flink
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs
On Wed, Jan 31, 2024 at 10:41 AM Danny Cranmer wrote:
>
> Thanks for driving this Leonard!
>
> +1 (binding)
>
> - Rele
Hey Yaroslav,
Thanks for your response! Got it, so the need for UPDATE_BEFOREs will
depend on your sinks. I just watched the talk and it makes sense when you
think of the UPDATE_BEFOREs as retractions.
In the talk, Timo discusses how removing the need for UPDATE_BEFORE is an
optimization of sorts
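The retraction view described above can be sketched in a few lines. This is a toy changelog-producing aggregation, not Flink's runtime: the class and method names are invented, but the row kinds follow Flink's `+I`/`-U`/`+U` notation, where an UPDATE_BEFORE (`-U`) retracts the previous result before the UPDATE_AFTER (`+U`) replaces it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy keyed sum that emits a changelog: the first result for a key is an
// insert (+I); every later change retracts the stale result (-U) and then
// emits the updated one (+U).
public class RetractionSketch {
    private final Map<String, Long> sums = new HashMap<>();

    List<String> apply(String key, long delta) {
        List<String> out = new ArrayList<>();
        Long old = sums.get(key);
        long next = (old == null ? 0 : old) + delta;
        sums.put(key, next);
        if (old == null) {
            out.add("+I " + key + "=" + next);   // first result: insert
        } else {
            out.add("-U " + key + "=" + old);    // retract the stale result
            out.add("+U " + key + "=" + next);   // emit the updated result
        }
        return out;
    }

    public static void main(String[] args) {
        RetractionSketch agg = new RetractionSketch();
        System.out.println(agg.apply("user1", 5)); // [+I user1=5]
        System.out.println(agg.apply("user1", 3)); // [-U user1=5, +U user1=8]
    }
}
```

A sink that can update by key (an upsert sink) does not need the `-U` rows at all, which is exactly why dropping UPDATE_BEFORE where the sink allows it is an optimization.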
We're only concerned with parallelism tuning here (with the same Flink
version). The plans will be compatible as long as the operator IDs stay the
same. Currently, this only holds if we do not break/create a chain, and we want
to make it hold when we break/create a chain as well. That's what th
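The chaining sensitivity described above can be illustrated with a small simulation. The hashing below is invented for illustration and is not Flink's actual StreamGraphHasher; the point is only that an auto-generated operator ID that depends on an operator's position in its chain changes when chaining changes, while an ID derived from an explicitly set uid (as with `DataStream#uid`) depends only on the uid string.

```java
import java.util.Objects;

// Illustration of why topology-dependent operator IDs break plan
// compatibility across chaining changes, while uid-derived IDs survive.
public class OperatorIdSketch {
    // Stand-in for auto-generated IDs: depends on where the operator
    // sits in the chained topology.
    static String autoId(String opName, int chainIndex, int posInChain) {
        return Integer.toHexString(Objects.hash(opName, chainIndex, posInChain));
    }

    // Stand-in for uid-based IDs: depends only on the user-set uid.
    static String explicitId(String uid) {
        return Integer.toHexString(uid.hashCode());
    }

    public static void main(String[] args) {
        // Same operator, once chained with its predecessor, once not.
        String chained   = autoId("map-1", 0, 1);
        String unchained = autoId("map-1", 1, 0);
        System.out.println(chained.equals(unchained));   // false: ID changed

        String before = explicitId("my-map");
        String after  = explicitId("my-map");
        System.out.println(before.equals(after));        // true: ID stable
    }
}
```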
Thanks Sergey,
It looks better now.
gpg --verify flink-connector-jdbc-3.1.2-1.18.jar.asc
gpg: assuming signed data in 'flink-connector-jdbc-3.1.2-1.18.jar'
gpg: Signature made Thu 1 Feb 10:54:45 2024 GMT
gpg: using RSA key F7529FAE24811A5C0DF3CA741596BBF0726835D8
gpg: Good sig
Rafał Trójczak created FLINK-34414:
Summary: EXACTLY_ONCE guarantee doesn't work properly for
Flink/Pulsar connector
Key: FLINK-34414
URL: https://issues.apache.org/jira/browse/FLINK-34414
Project:
Hi David,
it looks like in your case you don't specify the jar itself, and it is
probably not in the current dir,
so it should be something like this (assuming that both the asc and jar files
are downloaded and are in the current folder):
gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
flink-connector-jdbc-3.
Hi,
I was looking more at the asc files. I imported the keys and tried.
gpg --verify flink-connector-jdbc-3.1.2-1.16.jar.asc
gpg: no signed data
gpg: can't hash datafile: No data
This seems to be the same for all the asc files. It does not look right; am I
doing something incorrect?
Kind regard
Hi Sergey,
Yes that makes sense, thanks,
Kind regards, David.
From: Sergey Nuyanzin
Date: Wednesday, 7 February 2024 at 11:41
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: Flink jdbc connector rc3 for flink 1.18
Hi David,
Thanks for testing.
Yes the jars are built from the same sources and
+1 (non-binding)
I assume that https://github.com/apache/flink-web/pull/707 can be completed
after the release is out.
From: Martijn Visser
Date: Friday, 2 February 2024 at 08:38
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: [VOTE] Release flink-connector-jdbc, release candidate
#3
+1 (bi
How exactly are you tuning SQL jobs without compiled plans while
ensuring that the resulting compiled plans are compatible? That's
explicitly not supported by Flink, which is why CompiledPlans exist.
If you change _anything_ the planner is free to generate a completely
different plan, where you hav
Martijn Visser created FLINK-34413:
Summary: Drop support for HBase v1
Key: FLINK-34413
URL: https://issues.apache.org/jira/browse/FLINK-34413
Project: Flink
Issue Type: Technical Debt
Hi all,
I will open a ticket to drop support for HBase v1. If there are no objections
brought forward by next week, we'll move forward with dropping support for
HBase v1.
Best regards,
Martijn
On 2024/02/01 02:31:00 jialiang tan wrote:
> Hi Martijn, Ferenc
> Thanks all for driving this. As Feren
Hi,
> However, compiled plan is still too complicated for Flink newbies from my
> point of view.
I don't think that the compiled plan was ever positioned to be a
simple solution. If you want an easy approach, we have a
declarative solution in place with SQL and/or the Table API imho.
Be
Matthias Pohl created FLINK-34412:
Summary: ResultPartitionDeploymentDescriptorTest fails due to
fatal error (239 exit code)
Key: FLINK-34412
URL: https://issues.apache.org/jira/browse/FLINK-34412
Pr
Matthias Pohl created FLINK-34411:
Summary: "Wordcount on Docker test (custom fs plugin)" timed out
with some strange issue while setting the test up
Key: FLINK-34411
URL: https://issues.apache.org/jira/browse/FL
Hi Piotr,
Thanks for the comment. I agree that the compiled plan is the ultimate tool for
Flink SQL if one wants to make any changes to the
query later, and this FLIP indeed is not essential in this sense. However, the
compiled plan is still too complicated for Flink newbies from my point of view.
As I men