Not yet. It will be when 0.8.2 is released.
Thanks,
Jun
On Wed, Dec 17, 2014 at 5:24 PM, Rajiv Kurian wrote:
>
> Has the mvn repo been updated too?
Has the mvn repo been updated too?
Thanks everyone for the feedback and the discussion. The proposed changes
have been checked into both 0.8.2 and trunk.
Jun
Jun,
Thanks for summarizing this - it helps confirm for me that I did not
misunderstand anything in this thread so far; and that I disagree with
the premise that the steps in using the current byte-oriented API are
cumbersome or inflexible. It involves instantiating the K-V
serializers in code (as
Joel,
With a byte array interface, of course there is nothing that one can't do.
However, the real question is whether we want to encourage people to
use it this way or not. Being able to flow just bytes is definitely easier
to get started. That's why many early adopters choose to do it that
Documentation is inevitable even if the serializer/deserializer is
part of the API - since the user has to set it up in the configs. So
again, you can only encourage people to use it through documentation.
The simpler byte-oriented API seems clearer to me because anyone who
needs to send (or receiv
Joel,
It's just that if the serializer/deserializer is not part of the API, you
can only encourage people to use it through documentation. However, not
everyone will read the documentation if it's not directly used in the API.
Thanks,
Jun
On Mon, Dec 15, 2014 at 2:11 AM, Joel Koshy wrote:
(sorry about the late follow-up - I'm traveling most of this
month)
I'm likely missing something obvious, but I find the following to be a
somewhat vague point that has been mentioned more than once in this
thread without a clear explanation. i.e., why is it hard to share a
serializer/deseria
Thank you Jay. I agree with the issue you point out w.r.t. paired
serializers. I also think having mixed serialization types is rare. To get
the current behavior, one can simply use a ByteArraySerializer. This is
best understood by talking with many customers and you seem to have done
that. I am conv
Ok, based on all the feedback that we have heard, I plan to do the
following.
1. Keep the generic api in KAFKA-1797.
2. Add a new constructor in Producer/Consumer that takes the key and the
value serializer instance.
3. Have KAFKA-1797 reviewed and checked into 0.8.2 and trunk.
This will make it
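As a rough sketch, the constructor in step 2 would let callers pass serializer instances directly. The interface and class below are illustrative assumptions; the actual API checked in under KAFKA-1797 may differ:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical generic serializer interface along the lines discussed;
// the real KAFKA-1797 signatures may differ.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

// A String serializer instance of the kind that could be handed to the
// proposed Producer constructor.
class StringSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        return data.getBytes(StandardCharsets.UTF_8);
    }
}
```

A producer could then be constructed along the lines of `new KafkaProducer<String, String>(configs, new StringSerializer(), new StringSerializer())` and accept `String` keys and values directly.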
I agree that having the new Producer(KeySerializer,
ValueSerializer) interface would be useful.
People suggested cases where you want to mix and match serialization types.
The ByteArraySerializer is a no-op that would give the current behavior so
any odd case where you need to mix and match serial
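The pass-through case mentioned above might look like the following sketch (names assumed for illustration; the class Kafka actually ships may differ):

```java
// Hypothetical serializer interface, repeated here to keep the sketch
// self-contained.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

// No-op serializer: hands the byte array through untouched, which
// recovers the raw-bytes behavior of the current API.
class ByteArraySerializer implements Serializer<byte[]> {
    @Override
    public byte[] serialize(String topic, byte[] data) {
        return data; // pass-through, no copy
    }
}
```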
I'm just thinking instead of binding serialization with producer, another
option is to bind serializer/deserializer with
ProducerRecord/ConsumerRecord (please see the detail proposal below.)
The arguments for this option are:
A. A single producer could send different message type
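For illustration, the record-level binding proposed above might look roughly like this (hypothetical names; this alternative was ultimately not what Kafka adopted):

```java
// Hypothetical: the serializer travels with each record rather than
// with the producer, so one producer can send mixed message types.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

class TypedProducerRecord<V> {
    final String topic;
    final V value;
    final Serializer<V> valueSerializer;

    TypedProducerRecord(String topic, V value, Serializer<V> valueSerializer) {
        this.topic = topic;
        this.value = value;
        this.valueSerializer = valueSerializer;
    }

    // The producer would call this internally before sending.
    byte[] serializedValue() {
        return valueSerializer.serialize(topic, value);
    }
}
```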
Yeah I am kind of sad about that :(. I just mentioned it to show that there
are material use cases for applications where you expose the underlying
ByteBuffer (I know we were talking about byte arrays) instead of
serializing/deserializing objects - performance is a big one.
On Tue, Dec 2, 2014 a
Rajiv,
That's probably a very special use case. Note that even in the new consumer
api w/o the generics, the client is only going to get the byte array back.
So, you won't be able to take advantage of reusing the ByteBuffer in the
underlying responses.
Thanks,
Jun
On Tue, Dec 2, 2014 at 5:26 PM
I for one use the consumer (Simple Consumer) without any deserialization. I
just take the ByteBuffer, wrap it in a preallocated flyweight, and use it
without creating any objects. I'd ideally not have to wrap this logic in a
deserializer interface. For every one who does do this, it seems like a
very sm
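The flyweight approach described above can be sketched as follows (the field layout here is invented purely for illustration):

```java
import java.nio.ByteBuffer;

// Hypothetical flyweight: reads fields straight out of a ByteBuffer at
// an offset instead of deserializing each message into a new object.
class MessageFlyweight {
    private ByteBuffer buffer;
    private int offset;

    // Re-point the same preallocated flyweight at the next message;
    // no per-message allocation.
    MessageFlyweight wrap(ByteBuffer buffer, int offset) {
        this.buffer = buffer;
        this.offset = offset;
        return this;
    }

    long id()    { return buffer.getLong(offset); }    // bytes 0-7
    int  count() { return buffer.getInt(offset + 8); } // bytes 8-11
}
```

Fitting this behind a deserializer interface is awkward precisely because no new object is produced per message.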
> For (1), yes, but it's easier to make a config change than a code change.
> If you are using a third party library, you may not be able to make any
> code change.
Doesn't that assume that all organizations have to already share the
same underlying specific data type definition (e.g.,
UniversalAv
For (1), yes, but it's easier to make a config change than a code change.
If you are using a third party library, you may not be able to make any
code change.
For (2), it's just that if most consumers always do deserialization after
getting the raw bytes, perhaps it would be better to have these t
"It also makes it possible to do validation on the server
side or make other tools that inspect or display messages (e.g. the various
command line tools) and do this in an easily pluggable way across tools."
I agree that it's valuable to have a standard way to plugin serialization
across many tool
> The issue with a separate ser/deser library is that if it's not part of the
> client API, (1) users may not use it or (2) different users may use it in
> different ways. For example, you can imagine that two Avro implementations
> have different ways of instantiation (since it's not enforced by t
Why can't the organization package the Avro implementation with a kafka
client and distribute that library though? The risk of different users
supplying the kafka client with different serializer/deserializer
implementations still exists.
On Tue, Dec 2, 2014 at 12:11 PM, Jun Rao wrote:
Yeah totally, far from preventing it, making it easy to specify/encourage a
custom serializer across your org is exactly the kind of thing I was hoping
to make work well. If there is a config that gives the serializer you can
just default this to what you want people to use as some kind of
environm
er in a consistent way, and *that* still needs to be documented and
understood.
Regards,
Thunder
-----Original Message-----
From: Jay Kreps [mailto:j...@confluent.io]
Sent: Tuesday, December 02, 2014 11:10 AM
To: dev@kafka.apache.org
Cc: us...@kafka.apache.org
Subject: Re: [DISCUSSION] adding
Joel, Rajiv, Thunder,
The issue with a separate ser/deser library is that if it's not part of the
client API, (1) users may not use it or (2) different users may use it in
different ways. For example, you can imagine that two Avro implementations
have different ways of instantiation (since it's no
Thanks for the follow-up Jay. I still don't quite see the issue here
but maybe I just need to process this a bit more. To me "packaging up
the best practice and plug it in" seems to be to expose a simple
low-level API and give people the option to plug in a (possibly
shared) standard serializer in
Hey Joel, you are right, we discussed this, but I think we didn't think
about it as deeply as we should have. I think our take was strongly shaped
by having a wrapper api at LinkedIn that DOES do the serialization
transparently so I think you are thinking of the producer as just an
implementation d
Re: pushing complexity of dealing with objects: we're talking about
just a call to a serialize method to convert the object to a byte
array right? Or is there more to it? (To me) that seems less
cumbersome than having to interact with parameterized types. Actually,
can you explain more clearly what
Joel,
Thanks for the feedback.
Yes, the raw bytes interface is simpler than the Generic api. However, it
just pushes the complexity of dealing with the objects to the application.
We also thought about the layered approach. However, this may confuse the
users since there is no single entry point
> makes it hard to reason about what type of data is being sent to Kafka and
> also makes it hard to share an implementation of the serializer. For
> example, to support Avro, the serialization logic could be quite involved
> since it might need to register the Avro schema in some remote registry a
The old consumer already takes a deserializer when creating streams. So you
plug in your decoder there.
Thanks,
Jun
On Tue, Nov 25, 2014 at 8:29 AM, Manikumar Reddy
wrote:
Shlomi,
Sorry, at that time, I didn't realize that we would be better off with an
api change. Yes, it sucks that we have to break the api. However, if we
have to change it, it's better to do it now rather than later.
Note that if you want to just produce byte[] to Kafka, you can still do
that wit
Hey Shlomi,
I agree that we just blew this one from a timing perspective. We ideally
should have thought this through in the original api discussion. But as we
really started to think about this area we realized that the existing api
made it really hard to provide a simple way to package serializa
How will a mixed bag work on the Consumer side? An entire site cannot be
rolled at once, so the Consumer will have to deal with both new and old
serialized bytes? This could be the app team's responsibility. Are you guys
targeting the 0.8.2 release, which may break customers who are already
using the new producer API (be
+1 for this change.
what about the de-serializer class in 0.8.2? Say I am using the new
producer with Avro and the old consumer combination.
Then I need to provide a custom Decoder implementation for Avro, right?
On Tue, Nov 25, 2014 at 9:19 PM, Joe Stein wrote:
The serializer is an expected use of the producer/consumer now and think we
should continue that support in the new client. As far as breaking the API
it is why we released the 0.8.2-beta to help get through just these type of
blocking issues in a way that the community at large could be involved i
+1 on this change — APIs are forever. As much as we’d love to see 0.8.2 release
ASAP, it is important to get this right.
-JW
> On Nov 24, 2014, at 5:58 PM, Jun Rao wrote:
Jun, while just a humble user, I would like to recall that it was just 6
days ago that you told me on the user list that the producer is stable when
I asked what producer to go with and if the new producer is production
stable (you can still see that email down the list).
maybe I miss something, bu
Looked at the patch. +1 from me.
On 11/24/14 8:29 PM, "Gwen Shapira" wrote:
As one of the people who spent too much time building Avro repositories, +1
on bringing serializer API back.
I think it will make the new producer easier to work with.
Gwen
On Mon, Nov 24, 2014 at 6:13 PM, Jay Kreps wrote:
This is admittedly late in the release cycle to make a change. To add to
Jun's description the motivation was that we felt it would be better to
change that interface now rather than after the release if it needed to
change.
The motivation for wanting to make a change was the ability to really be
Hi, Everyone,
I'd like to start a discussion on whether it makes sense to add the
serializer api back to the new java producer. Currently, the new java
producer takes a byte array for both the key and the value. While this api
is simple, it pushes the serialization logic into the application. This