Yes, makes sense.
Ismael
On Mon, Nov 20, 2017 at 4:49 PM, Gwen Shapira wrote:
Agree. I don't know that it actually matters. They can keep using whatever
they are using now since we don't plan on breaking the protocol.
But since the issue does keep coming up, I figured we'll need a clear
message around what the removal means and what users need to do.
Gwen
It's worth emphasizing that the impact to such users is independent of
whether we remove the old high-level consumer in 2.0.0 or not. They are
unable to use the message format introduced in 0.11.0 or security features
today.
Ismael
On Mon, Nov 20, 2017 at 4:11 PM, Gwen Shapira wrote:
> Personally, I suspect that those who absolutely need a rolling migration
> and cannot handle a short period of downtime while doing a migration
> probably have in-house experts on Kafka who are familiar with the issues
> and willing to figure out a solution. The rest of the world can generally
Thanks for the update Onur. Are you and the other committers and
contributors from LinkedIn planning to push this over the line?
Ismael
On Fri, Nov 10, 2017 at 9:53 PM, Onur Karaman wrote:
Hey everyone. Regarding the status of KIP-125, just a heads up: I have an
implementation of KIP-125 (KAFKA-4513) here:
https://github.com/onurkaraman/kafka/commit/3b5448006ab70ba2b0b5e177853d191d0f777452
The code might need to be rebased. The steps described in the KIP are a bit
involved.
Re: migrating offsets for old Scala consumers.
I work in the python world, so haven't directly used the old high level
consumer, but from what I understand the underlying problem remains the
migration of zookeeper offsets to the __consumer_offsets topic.
We've used a slightly modified version of
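For background on what the migration involves: the old Scala consumer commits offsets to ZooKeeper under /consumers/[group]/offsets/[topic]/[partition], while the new consumer commits them to the __consumer_offsets topic. A minimal sketch, assuming the ZooKeeper nodes have already been read into (path, value) pairs; the helper names and sample data are illustrative, not KIP-125's actual code, and a real migration would feed the resulting map to the new consumer's commitSync:

```python
# Sketch: reshape offsets stored in the old Scala consumer's ZooKeeper
# layout into the (topic, partition) -> offset map that a commitSync-style
# call on the new consumer expects. The ZooKeeper layout is:
#   /consumers/<group>/offsets/<topic>/<partition>  ->  offset (as a string)

def parse_zk_offset_entry(path, value):
    """Turn one ZooKeeper offset node into ((topic, partition), offset)."""
    parts = path.strip("/").split("/")
    # Expected shape: consumers/<group>/offsets/<topic>/<partition>
    if len(parts) != 5 or parts[0] != "consumers" or parts[2] != "offsets":
        raise ValueError("not an offset node: %s" % path)
    topic, partition = parts[3], int(parts[4])
    return (topic, partition), int(value)

def build_commit_map(zk_entries):
    """zk_entries: iterable of (path, value) pairs read from ZooKeeper."""
    return dict(parse_zk_offset_entry(p, v) for p, v in zk_entries)

entries = [
    ("/consumers/my-group/offsets/clicks/0", "42"),
    ("/consumers/my-group/offsets/clicks/1", "17"),
]
print(build_commit_map(entries))
```

The hard part KIP-125 actually addresses is not this reshaping but doing it without stopping the group, and supporting rollback.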
Hi Gwen,
A KIP has been proposed, but it is stalled:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-125%3A+ZookeeperConsumerConnector+to+KafkaConsumer+Migration+and+Rollback
Unless the interested parties pick that up, we would drop support without a
rolling upgrade path.
Last time we tried deprecating the Scala consumer, there were concerns
about a lack of upgrade path. There is no rolling upgrade, and migrating
offsets is not trivial (and not documented).
Did anything change in that regard? Or are we planning on dropping support
without an upgrade path?
Thanks Ismael, the proposal looks good to me.
A side note regarding https://issues.apache.org/jira/browse/KAFKA-5637:
could we resolve this ticket sooner rather than later, to clarify the code
deprecation and support duration when moving from 1.0.x to 2.0.x?
Guozhang
Features for 2.0.0 will be known after 1.1.0 is released in February 2018.
We are still doing the usual time-based release process[1].
I am raising this well ahead of time because of the potential impact of
removing the old Scala clients (particularly the old high-level consumer)
and dropping support
Hi Ismael,
Are there any new features other than the language-specific changes being
planned for 2.0.0? Also, when 2.x gets released, will the 1.x series see
continued bug fixes and releases in the community, or is the plan to have
one single main version that gets continuous updates?
Hi Apurva,
I agree about KIP-185 (assuming the vote passes). To clarify, my list was
not meant to be exhaustive, just the items with highest compatibility
impact justifying the major bump. I expect we will have many other great
KIPs. :)
Ismael
On 10 Nov 2017 12:57 am, "Apurva Mehta" wrote:
I think this is a good idea and your proposed changes look good.
I also think that this might be a good time to adopt KIP-185 (
https://cwiki.apache.org/confluence/display/KAFKA/KIP-185%3A+Make+exactly+once+in+order+delivery+per+partition+the+default+producer+setting),
and make the idempotent producer the default.
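For reference, the behavior KIP-185 proposes to make the default can already be enabled explicitly today. The config keys below are the real producer settings; the dict wrapper and broker address are placeholders for illustration:

```python
# Producer settings that KIP-185 proposes to turn on by default.
# enable.idempotence=true requires acks=all and retries > 0; the client
# rejects the config otherwise.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # placeholder address
    "enable.idempotence": "true",
    "acks": "all",
    "retries": "5",
}
print(producer_config["enable.idempotence"])
```

Making these the default is a behavior change (throughput and ordering semantics), which is part of why it fits a major release.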
That's correct, Tom. We can only remove deprecated APIs in major releases
since it's a breaking change.
Ismael
On 9 Nov 2017 11:48 am, "Tom Bentley" wrote:
Hi Stephane,
I think the version number rules are based on semantic versioning, so Kafka
can't remove even deprecated APIs in a minor release (it is a breaking
change, after all). Therefore until Kafka 2.0 we will have to carry the
weight of the deprecated APIs, and Java 7.
Cheers,
Tom
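Tom's point is the semantic versioning rule: removing public API, even deprecated API, is a breaking change and is therefore only permitted on a major version bump. A toy illustration of the rule (the function names are made up for this sketch):

```python
# Toy check of the semver rule Kafka follows for API removal:
# removing public API (deprecated or not) is a breaking change,
# so it is only allowed when the major version number increases.

def parse_version(v):
    major, minor, patch = (int(x) for x in v.split("."))
    return major, minor, patch

def removal_allowed(old, new):
    """True if moving old -> new bumps the major version."""
    return parse_version(new)[0] > parse_version(old)[0]

print(removal_allowed("1.1.0", "2.0.0"))  # major bump: removal is allowed
print(removal_allowed("1.0.0", "1.1.0"))  # minor bump: removal is not
```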
I'm very happy with the milestones but worried about the version number.
It seems it will mostly remove deprecated code rather than actually bring
in breaking features. A 2.0 to me should bring something major to the
table, possibly breaking, which would justify a big number hop.
Hi all,
I'm starting this discussion early because of the potential impact.
Kafka 1.0.0 was just released and the focus was on achieving the original
project vision in terms of features provided while maintaining
compatibility for the most part (i.e. we did not remove deprecated
components).