Using explicit release of the message resources opens up many different ways
to optimize the consumption path.
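To make the pattern concrete, here is a rough sketch of what the application
side could look like, based on the PIP draft -- it assumes the
poolMessages(true) builder option and a release() method on Message, and
client, topicName, subscriptionName and process() are just placeholders, not
part of the proposal:

    Consumer<ByteBuffer> consumer = client.newConsumer(Schema.BYTEBUFFER)
            .topic(topicName)
            .subscriptionName(subscriptionName)
            .poolMessages(true)   // payload backed by pooled direct memory
            .subscribe();

    Message<ByteBuffer> msg = consumer.receive();
    try {
        ByteBuffer payload = msg.getValue();  // view over the pooled buffer
        process(payload);                     // application-defined processing
        consumer.acknowledge(msg);
    } finally {
        msg.release();  // explicitly return the buffer to the pool
    }

Once release() is an explicit part of the contract, the client knows exactly
when the underlying memory can be recycled, which is what makes the
optimizations below possible.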
Pooling the direct memory when consuming with the ByteBuffer schema is just
one of them, but definitely not the only one. For example, later on we could
also:

 * Pool instances of MessageImpl (and all the internals)
 * When using Schema<byte[]>, the byte array could also be pooled, given
   that we also expose the effective length of the array.
 * More complex schemas could deserialize directly from the Netty ByteBuf
   internally
 * Schemas could also be allowed to always reuse the deserialized objects

--
Matteo Merli
<matteo.me...@gmail.com>


On Tue, Mar 30, 2021 at 12:52 AM Rajan Dhabalia <dhabalia...@gmail.com> wrote:
>
> >> 1. Does the consumer return the buffer of the batch message?
> >> 2. If it returns a batch message buffer, how do we handle the flow
> >> permits? It's better to clarify in the proposal
>
> It will not change the semantics of the batch message. A batch message
> gets split into individual messages at the consumer side, and this
> feature will make sure that the payload of each individual message can
> be backed by a pooled buffer. So, it doesn't affect the batch-message
> processing path.
>
> >> users don't need to specify the schema when using the RawConsumer,
> >> since we might run into a schema-incompatibility issue when using
> >> Schema.BYTEBUFFER.
>
> ByteBuffer/ByteBuf provides an easy way to access data from the direct
> memory. So, ByteBuffer is a suitable return type, and RawConsumer doesn't
> fit that requirement. Also, it would be better if users can use this
> feature with the same consumer API and the same access pattern.
> Therefore, a consumer with the ByteBuffer schema will be the more
> preferable API for users.
>
> >> the doc does not tell much about how we have to change the Schema API
> >> in order to pass the Consumer configuration to the Schema
> >> implementation.
>
> ByteBuffer is already a supported schema type in the pulsar-client, and
> this feature doesn't require any changes in the Schema API.
> As shown in the application-usage
> <https://github.com/apache/pulsar/wiki/PIP-83-:-Pulsar-client:-Message-consumption-with-pooled-buffer#application-usage>
> section, users can enable this feature by passing the "poolMessages"
> flag together with the existing ByteBuffer schema, and the Consumer will
> create a message with a payload backed by a pooled buffer which can be
> accessed through the ByteBuffer:
>
> Consumer<ByteBuffer> consumer = client.newConsumer(Schema.BYTEBUFFER)
>         .poolMessages(true)
>         .subscriptionName(subscriptionName)
>         .topic(topicName)
>         .subscribe();
> Message<ByteBuffer> msg = consumer.receive();
>
> Thanks,
> Rajan
>
> On Mon, Mar 29, 2021 at 11:37 PM Enrico Olivelli <eolive...@gmail.com>
> wrote:
>
> > Rajan (and Matteo),
> > very interesting feature.
> >
> > How are we going to implement this feature?
> > It looks like it is bound to the Schema.BYTEBUFFER implementation; the
> > doc does not tell much about how we have to change the Schema API in
> > order to pass the Consumer configuration to the Schema implementation.
> >
> > Enrico
> >
> > On Tue, Mar 30, 2021 at 06:30 PengHui Li <codelipeng...@gmail.com>
> > wrote:
> > >
> > > Nice feature, I just have some questions I want to confirm.
> > >
> > > 1. Does the consumer return the buffer of the batch message?
> > > 2. If it returns a batch message buffer, how do we handle the flow
> > > permits? It's better to clarify in the proposal
> > >
> > > About the API, my feeling is that a RawConsumer is more suitable
> > > here; users don't need to specify the schema when using the
> > > RawConsumer, since we might run into a schema-incompatibility issue
> > > when using Schema.BYTEBUFFER. So a RawConsumer is clearer for this
> > > purpose?
> > >
> > > Thanks,
> > > Penghui
> > > On Mar 30, 2021, 10:48 AM +0800, Rajan Dhabalia <rdhaba...@apache.org>,
> > > wrote:
> > > > Hello,
> > > > We have created PIP-83 for the Pulsar client: Message consumption
> > > > with pooled buffer.
> > > >
> > > > https://github.com/apache/pulsar/wiki/PIP-83-:-Pulsar-client:-Message-consumption-with-pooled-buffer
> > > >
> > > > Thanks,
> > > > Rajan
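
Following up on the batch-message question in the quoted thread: since a
batch is still unpacked into individual Message instances on the consumer
side, the receive loop itself does not change -- each message's pooled
payload is released on its own. A sketch, reusing the consumer from the
snippet at the top of this mail (handle() is a placeholder):

    while (true) {
        Message<ByteBuffer> msg = consumer.receive();
        try {
            handle(msg.getValue());    // same code path whether or not the
                                       // message was unpacked from a batch
            consumer.acknowledge(msg);
        } finally {
            msg.release();             // return this message's pooled payload
        }
    }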