Warm Greetings on New Year 2022
Hi Pulsar enthusiasts,

With 2022 knocking on our doors, I would like to take this opportunity to express sincere gratitude to you for making Pulsar what it is today and shaping what Pulsar will be tomorrow!

This year, Pulsar continued to be sought out by organizations developing messaging and event-streaming applications, from Fortune 100 companies to forward-thinking start-ups, and the community is growing quickly. This community growth has contributed to several milestones, including major releases, the first-ever Pulsar Summits, 400+ contributors, and more. Regarding contributions, more people have shifted toward contributing documentation, which is awesome! As we know, documentation is the entry point into Pulsar for most people, and a lack of good documentation inherently limits its reach. Next year, we will continue to help users succeed with Pulsar, empower them to be self-sufficient, and encourage further feedback through quality documentation.

We still have various PIPs [1] and documentation issues [2] on our plates. If you're interested in getting involved with Pulsar, feel free to join us!

Looking forward to another fruitful year ahead for Pulsar. Wishing you a New Year loaded with laughter and blessed with smiles. Wishing you a New Year that you remember all your life.

Happy 2022!

Cheers,
Yu

[1] https://docs.google.com/spreadsheets/d/1U2aYjla-oGxAHkCpdBCBy_EWNwHKOvAJ-6-3IiVrJ1U/edit#gid=1283297735
[2] https://github.com/apache/pulsar/issues?q=is%3Aopen+is%3Aissue+label%3Adoc-required
[DISCUSSION] PIP-131: Include message header size when check maxMessageSize of non-batch message on the client side.
https://github.com/apache/pulsar/issues/13591

Pasted below for quoting convenience.

——

## Motivation

Currently, the Pulsar client (Java) only checks the payload size for max message size validation.

The client throws a TimeoutException if we produce a message with too many properties, see [1]. But the root cause is that a TooLongFrameException is triggered on the broker side.

In this PIP, I propose to include the message header size when checking maxMessageSize of non-batch messages, which brings the following benefits.
1. Clients can throw an InvalidMessageException immediately if the properties take up too much storage space.
2. It makes the behaviour consistent with the topic-level max message size check in the broker.
3. It strictly limits the entry size to less than maxMessageSize, avoiding failed sends to BookKeeper.

## Goal

Include the message header size when checking maxMessageSize for non-batch messages on the client side.

## Implementation

```java
// Add a size check in org.apache.pulsar.client.impl.ProducerImpl#processOpSendMsg
if (op.msg != null // for non-batch messages only
        && op.getMessageHeaderAndPayloadSize() > ClientCnx.getMaxMessageSize()) {
    // finish the send op with an InvalidMessageException
    releaseSemaphoreForSendOp(op);
    op.sendComplete(new PulsarClientException(
            new PulsarClientException.InvalidMessageException(
                    "Message size exceeds maxMessageSize"), op.sequenceId));
}

// org.apache.pulsar.client.impl.ProducerImpl.OpSendMsg#getMessageHeaderAndPayloadSize
public int getMessageHeaderAndPayloadSize() {
    // Frame layout: [totalSize:4][cmdSize:4][cmd][headersAndPayload],
    // so headersAndPayload = totalSize - cmdSize - 4 (the cmdSize field itself).
    ByteBuf cmdHeader = cmd.getFirst();
    cmdHeader.markReaderIndex();
    int totalSize = cmdHeader.readInt();
    int cmdSize = cmdHeader.readInt();
    int msgHeadersAndPayloadSize = totalSize - cmdSize - 4;
    cmdHeader.resetReaderIndex();
    return msgHeadersAndPayloadSize;
}
```

## Rejected Alternatives

Add a new property like "maxPropertiesSize" or "maxHeaderSize" in broker.conf and pass it to the client like maxMessageSize. But the implementation is much more complex, and it doesn't bring benefits 2 and 3 mentioned in Motivation.
## Compatibility Issue

As a matter of fact, this PIP narrows down the sendable range. Previously, when maxMessageSize is 1KB, it's OK to send a message with 1KB of properties and a 1KB payload. But with this PIP, the send will fail with an InvalidMessageException.

One conservative way is to add a boolean config "includeHeaderInSizeCheck" to enable this feature. But I think it's OK to enable this directly, as it's more reasonable, and I don't see a good migration plan if we add a config for this.

The compatibility issue is worth discussing, and any suggestions are appreciated.

[1] https://github.com/apache/pulsar/issues/13560

Thanks,
Haiting Jiang
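For intuition, the frame-size arithmetic in the proposal above can be sketched with plain `java.nio.ByteBuffer` in place of Netty's `ByteBuf`. The class name and the frame values below are hypothetical, assuming only the wire layout `[totalSize:4][cmdSize:4][cmd][headersAndPayload]` described in the implementation section:

```java
import java.nio.ByteBuffer;

public class FrameSizeSketch {
    // Hypothetical stand-in for OpSendMsg#getMessageHeaderAndPayloadSize.
    // totalSize covers everything after the totalSize field, so the
    // headers-and-payload size is totalSize - cmdSize - 4 (the cmdSize field).
    static int messageHeaderAndPayloadSize(ByteBuffer frame) {
        int pos = frame.position();     // mark reader index
        int totalSize = frame.getInt();
        int cmdSize = frame.getInt();
        frame.position(pos);            // reset, leave the buffer untouched
        return totalSize - cmdSize - 4;
    }

    public static void main(String[] args) {
        // Build a fake frame: a 10-byte command and 50 bytes of headers+payload.
        int cmdSize = 10, headersAndPayload = 50;
        int totalSize = 4 + cmdSize + headersAndPayload; // cmdSize field + cmd + rest
        ByteBuffer frame = ByteBuffer.allocate(8 + cmdSize + headersAndPayload);
        frame.putInt(totalSize).putInt(cmdSize);
        frame.position(0);
        System.out.println(messageHeaderAndPayloadSize(frame)); // prints 50
    }
}
```

The check in `processOpSendMsg` would then compare this value against `ClientCnx.getMaxMessageSize()` before the message leaves the client.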
Re: [DISCUSSION] PIP-132: Include message header size when check maxMessageSize of non-batch message on the client side.
Sorry, it should be PIP-132.

Thanks,
Haiting Jiang

On 2021/12/31 12:05:54 Haiting Jiang wrote:
> https://github.com/apache/pulsar/issues/13591
>
> Pasted below for quoting convenience.
Re: [DISCUSSION] PIP-130: Apply redelivery backoff policy for ack timeout
+1 for this PIP. Do we have plans for clients in other languages, like Go?

Thanks,
Haiting Jiang

On 2021/12/27 13:24:52 PengHui Li wrote:
> https://github.com/apache/pulsar/issues/13528
>
> Pasted below for quoting convenience.
>
> -
>
> PIP 130: Apply redelivery backoff policy for ack timeout
>
> ## Motivation
>
> PIP 106
> https://github.com/apache/pulsar/wiki/PIP-106%3A-Negative-acknowledgment-backoff
> introduced negative acknowledgment message redelivery backoff, which allows
> users to achieve more flexible control of the message redelivery delay time.
> But the redelivery backoff policy only applies to the negative acknowledgment
> API; users who use ack timeout to trigger message redelivery, not the
> negative acknowledgment API, can't use the new features introduced by PIP 106.
>
> So the proposal is to apply the message redelivery policy to the ack timeout
> mechanism. Users can specify an ack timeout redelivery backoff, for example,
> apply an exponential backoff with a 10-second ack timeout:
>
> ```java
> client.newConsumer()
>         .ackTimeout(10, TimeUnit.SECONDS)
>         .ackTimeoutRedeliveryBackoff(
>                 ExponentialRedeliveryBackoff.builder()
>                         .minDelayMs(1000)
>                         .maxDelayMs(60000)
>                         .build())
>         .subscribe();
> ```
>
> The message redelivery behavior should be:
>
> | Redelivery count | Redelivery delay |
> | ---------------- | ---------------- |
> | 1 | 10 + 1 seconds |
> | 2 | 10 + 2 seconds |
> | 3 | 10 + 4 seconds |
> | 4 | 10 + 8 seconds |
> | 5 | 10 + 16 seconds |
> | 6 | 10 + 32 seconds |
> | 7 | 10 + 60 seconds |
> | 8 | 10 + 60 seconds |
>
> ## Goal
>
> Add an API to the Java client to provide the ability to specify the ack
> timeout message redelivery backoff; the message redelivery behavior should
> abide by the redelivery backoff policy.
>
> ## API Changes
>
> 1. Change `NegativeAckRedeliveryBackoff` to `RedeliveryBackoff`, so that we
> can use the redelivery backoff for both the negative acknowledgment API and
> ack timeout message redelivery.
>
> 2. Change `NegativeAckRedeliveryExponentialBackoff` to
> `ExponentialRedeliveryBackoff`, and add `multiplier` for
> `ExponentialRedeliveryBackoff` with a default value of 2:
>
> ```java
> ExponentialRedeliveryBackoff.builder()
>         .minDelayMs(1000)
>         .maxDelayMs(60000)
>         .multiplier(5)
>         .build()
> ```
>
> 3. Add an `ackTimeoutRedeliveryBackoff` method to the `ConsumerBuilder`:
>
> ```java
> client.newConsumer()
>         .ackTimeout(10, TimeUnit.SECONDS)
>         .ackTimeoutRedeliveryBackoff(
>                 ExponentialRedeliveryBackoff.builder()
>                         .minDelayMs(1000)
>                         .maxDelayMs(60000)
>                         .build())
>         .subscribe();
> ```
>
> ## Compatibility and migration plan
>
> Since the negative acknowledgment message redelivery backoff has not been
> released yet, we can modify the API directly.
>
> ## Tests plan
>
> - Verify the introduced `multiplier` of ExponentialRedeliveryBackoff
> - Verify the ack timeout message redelivery works as expected
>
> Regards,
> Penghui
>
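As a rough model of the redelivery table in the quoted proposal (not the actual client implementation, whose internals the PIP does not spell out), the exponential backoff delay can be computed as min(maxDelayMs, minDelayMs * multiplier^n), where n is the number of previous redeliveries. The class name below is hypothetical:

```java
public class BackoffSketch {
    // Hypothetical model of an exponential redelivery backoff: start at
    // minDelayMs, multiply by `multiplier` on each redelivery, cap at maxDelayMs.
    static long delayMs(long minDelayMs, long maxDelayMs, double multiplier, int redeliveries) {
        double d = minDelayMs * Math.pow(multiplier, redeliveries);
        return (long) Math.min(d, maxDelayMs);
    }

    public static void main(String[] args) {
        // Reproduce the proposal's table: min 1s, max 60s, default multiplier 2.
        // The total delay observed by the consumer is ackTimeout + backoff.
        for (int n = 0; n < 8; n++) {
            System.out.println("redelivery " + (n + 1) + ": ackTimeout + "
                    + delayMs(1000, 60000, 2.0, n) / 1000 + "s");
        }
    }
}
```

Under these assumptions the sequence is 1, 2, 4, 8, 16, 32, 60, 60 seconds, matching the table in the proposal.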
Re: [Bug] Other brokers failed to acquire bundle ownership after unloading a namespace bundle
Hi,

Please provide more information about this issue, like the Pulsar version, steps to reliably reproduce it, and the impact of this error.

Thanks,
Haiting Jiang

On 2021/12/31 03:41:45 zhangao wrote:
> Hi,
> Currently, I found a problem with bundle ownership acquisition.
> After I unloaded a namespace bundle, I found these error logs on other brokers:
>
> ```
> 2021-12-29 14:37:37.641 [metadata-store-6-1] WARN org.apache.pulsar.broker.lookup.TopicLookupBase - Failed to lookup null for topic persistent://public/data-channel/tet-partition-30 with error org.apache.pulsar.broker.PulsarServerException: Failed to acquire ownership for namespace bundle public/data-channel/0xebf3b108_0xf000
> java.util.concurrent.CompletionException: org.apache.pulsar.broker.PulsarServerException: Failed to acquire ownership for namespace bundle public/data-channel/0xebf3b108_0xf000
>     at java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:326) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:338) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.uniRelay(CompletableFuture.java:911) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture$UniRelay.tryFire(CompletableFuture.java:899) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_102]
>     at org.apache.pulsar.broker.namespace.NamespaceService.lambda$searchForCandidateBroker$15(NamespaceService.java:577) ~[org.apache.pulsar-pulsar-broker-2.9.1.jar:2.9.1]
>     at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_102]
>     at org.apache.pulsar.metadata.coordination.impl.LockManagerImpl.lambda$acquireLock$2(LockManagerImpl.java:111) ~[org.apache.pulsar-pulsar-metadata-2.9.1.jar:2.9.1]
>     at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_102]
>     at org.apache.pulsar.metadata.coordination.impl.ResourceLockImpl.lambda$acquire$4(ResourceLockImpl.java:134) ~[org.apache.pulsar-pulsar-metadata-2.9.1.jar:2.9.1]
>     at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_102]
>     at org.apache.pulsar.metadata.impl.ZKMetadataStore.lambda$get$7(ZKMetadataStore.java:139) ~[org.apache.pulsar-pulsar-metadata-2.9.1.jar:2.9.1]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_102]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_102]
>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-common-4.1.72.Final.jar:4.1.72.Final]
>     at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
> Caused by: org.apache.pulsar.broker.PulsarServerException: Failed to acquire ownership for namespace bundle public/data-channel/0xebf3b108_0xf000
>     ... 20 more
> Caused by: java.util.concurrent.CompletionException: org.apache.pulsar.metadata.api.MetadataStoreException$LockBusyException: Resource at /namespace/public/data-channel/0xebf3b108_0xf000 is already locked
>     at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_102]
>     at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_102]
>     ...
> ```
[GitHub] [pulsar-client-node] zhaoyajun2009 opened a new issue #188: When installing the Pulsar Node.js client, the operation steps need to be optimized
zhaoyajun2009 opened a new issue #188:
URL: https://github.com/apache/pulsar-client-node/issues/188

**In the "Prerequisites" step, in addition to the C++ library, you need to install the gcc and g++ dependencies; otherwise, the following problems will occur later in the "Install pulsar-client in your project" step:**

[ec2-user@ip-172-31-0-126 ~]$ npm install pulsar-client
npm ERR! code 1
npm ERR! path /home/ec2-user/node_modules/pulsar-client
npm ERR! command failed
npm ERR! command sh -c node-pre-gyp install --fallback-to-build
npm ERR! make: Entering directory "/home/ec2-user/node_modules/pulsar-client/build"
npm ERR! CC(target) Release/obj.target/nothing/../node-addon-api/nothing.o
npm ERR! make: Leaving directory "/home/ec2-user/node_modules/pulsar-client/build"
npm ERR! Failed to execute '/home/ec2-user/node-v16.13.1-linux-x64/bin/node /home/ec2-user/node-v16.13.1-linux-x64/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/home/ec2-user/node_modules/pulsar-client/build/Release/libpulsar.node --module_name=libpulsar --module_path=/home/ec2-user/node_modules/pulsar-client/build/Release --napi_version=8 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v93' (1

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [pulsar-manager] rsrini7 commented on issue #429: Bug: default user pulsar/pulsar does not have permission to create a new environment
rsrini7 commented on issue #429:
URL: https://github.com/apache/pulsar-manager/issues/429#issuecomment-1003401420

Tried something like:

root@pulsar-server:/pulsar/bin# ./pulsar-admin namespaces grant-permission public/default --actions produce,consume --role '*'

but no success.
Re: [VOTE] PIP 131: Resolve produce chunk messages failed when topic level maxMessageSize is set
+1

Best,
Hang

Zike Yang wrote on Wed, Dec 29, 2021 at 15:04:
>
> +1
>
> Thanks,
> Zike
>
> On Wed, Dec 29, 2021 at 11:26 AM PengHui Li wrote:
> >
> > +1
> >
> > Thanks,
> > Penghui
> >
> > On Wed, Dec 29, 2021 at 10:29 AM Haiting Jiang wrote:
> > >
> > > This is the voting thread for PIP-131. It will stay open for at least 48h.
> > >
> > > https://github.com/apache/pulsar/issues/13544
> > >
> > > The discussion thread is
> > > https://lists.apache.org/thread/c63d9s73j9x1m3dkqr3r38gyp8s7cwzf
> > >
> > > ## Motivation
> > >
> > > Currently, producing chunked messages fails if a topic-level maxMessageSize
> > > is set [1]. The root cause of this issue is that chunked messages use the
> > > broker-level maxMessageSize as the chunk size, and the topic-level
> > > maxMessageSize is always <= the broker-level maxMessageSize. So once it is
> > > set, on-going chunked message producing fails.
> > >
> > > ## Goal
> > >
> > > Resolve the topic-level maxMessageSize compatibility issue with chunked
> > > messages.
> > >
> > > ## Implementation
> > >
> > > The current best solution would be to just skip the topic-level
> > > maxMessageSize check in
> > > org.apache.pulsar.broker.service.AbstractTopic#isExceedMaximumMessageSize.
> > > Topic-level maxMessageSize was introduced in [2] for the purpose of making
> > > it "easier to plan resource quotas for client allocation", and IMO this
> > > change will not bring further complexity into it.
> > >
> > > ## Rejected Alternatives
> > >
> > > Add a client-side topic-level maxMessageSize and keep it synced with the
> > > broker.
> > >
> > > Required changes:
> > > - [client] Add a new field
> > > org.apache.pulsar.client.impl.ProducerBase#maxMessageSize to store this
> > > client-side topic-level maxMessageSize.
> > > - [PulsarApi.proto] Add a new field maxMessageSize in
> > > CommandProducerSuccess for the initial value of ProducerBase#maxMessageSize.
> > > - [PulsarApi.proto] Add a new command like
> > > CommandUpdateClientPolicy{producerId, maxMessageSize} to update
> > > ProducerBase#maxMessageSize when the topic-level maxMessageSize is updated.
> > > Furthermore, some other data consistency issues need to be handled very
> > > carefully when maxMessageSize is updated.
> > > This alternative is complex but can also solve another topic-level
> > > maxMessageSize issue [3] when batching is enabled (the non-batching case is
> > > solved with PR [4]).
> > >
> > > [1] https://github.com/apache/pulsar/issues/13360
> > > [2] https://github.com/apache/pulsar/pull/8732
> > > [3] https://github.com/apache/pulsar/issues/12958
> > > [4] https://github.com/apache/pulsar/pull/13147
> > >
> > > Thanks,
> > > Haiting Jiang
> > >
>
> --
> Zike Yang
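To make the root cause in the quoted PIP concrete, here is a small self-contained sketch (hypothetical class and example numbers, not Pulsar code) of why a topic-level limit below the broker-level limit breaks chunking: the producer sizes chunks by the broker-level maxMessageSize, so a full chunk can still exceed the smaller topic-level limit and the broker rejects it.

```java
public class ChunkingSketch {
    // The producer splits a large payload into ceil(payload / brokerMax) chunks,
    // each up to brokerMax bytes, because only the broker-level limit is known
    // to the client.
    static int numChunks(int payloadSize, int brokerMaxMessageSize) {
        return (payloadSize + brokerMaxMessageSize - 1) / brokerMaxMessageSize;
    }

    public static void main(String[] args) {
        int brokerMax = 5 * 1024 * 1024; // broker-level maxMessageSize (5MB default)
        int topicMax = 1024 * 1024;      // hypothetical topic-level override: 1MB
        int payload = 12 * 1024 * 1024;  // a 12MB chunked message

        int chunks = numChunks(payload, brokerMax);      // 3 chunks of up to 5MB
        int chunkSize = Math.min(payload, brokerMax);    // first chunk: 5MB
        System.out.println(chunks + " chunks, first chunk " + chunkSize + " bytes");
        // Each full 5MB chunk fails the 1MB topic-level check, so the send fails.
        System.out.println("chunk exceeds topic limit: " + (chunkSize > topicMax));
    }
}
```

Skipping the topic-level check for chunked entries, as the PIP proposes, removes exactly this mismatch.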
Re: [VOTE] PIP 131: Resolve produce chunk messages failed when topic level maxMessageSize is set
+1

Best Regards,
Lan Liang

On 12/29/2021 10:29, Haiting Jiang wrote:
This is the voting thread for PIP-131. It will stay open for at least 48h.

https://github.com/apache/pulsar/issues/13544

The discussion thread is
https://lists.apache.org/thread/c63d9s73j9x1m3dkqr3r38gyp8s7cwzf
Re: [DISCUSSION] PIP-128: Add new command STOP_PRODUCER and STOP_CONSUMER
I believe that you can revoke permissions for some topic/consumer.

Best Regards,
Lan Liang

On 12/29/2021 20:25, 包子 wrote:
Thanks for the reply, `terminate the topic` is useful.

On December 29, 2021, at 11:31, PengHui Li wrote:

What is the background of the requirement? Usually, we will not force-close the producer and consumer at the server side, because we don't know if the client side can handle this case well. Or, if the topic will be retired and you don't want the clients to connect to it, you can just terminate the topic.

Thanks,
Penghui

On Wed, Dec 29, 2021 at 10:39 AM 包子 wrote:

Issue: https://github.com/apache/pulsar/issues/13488

On December 29, 2021, at 10:38, 包子 wrote:

## Motivation

The broker sends `CLOSE_PRODUCER`/`CLOSE_CONSUMER` to the client when a topic is deleted, but the client will reconnect. If `allowAutoTopicCreation=true` is configured, this will trigger topic creation again.
https://github.com/apache/pulsar/blob/9f599c9572e5d9b1f15efa6e895e7eb29b284e57/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConnectionHandler.java#L130-L133

## Goal

Add new commands `STOP_PRODUCER`/`STOP_CONSUMER`. When the client receives the command, it only closes, without reconnecting.

## API Changes

1. Add the commands to `BaseCommand`:

```proto
message BaseCommand {
    enum Type {
        // ...
        STOP_PRODUCER = 64;
        STOP_CONSUMER = 65;
    }
}
```
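The behavioral difference the proposal asks for can be sketched as a client-side dispatch decision (the enum and method below are hypothetical illustrations; the real reconnect logic lives in the client's ConnectionHandler referenced above):

```java
public class StopCommandSketch {
    // Hypothetical model of the client-side decision the proposal implies:
    // CLOSE_PRODUCER keeps today's behavior (close, then reconnect), while the
    // proposed STOP_PRODUCER closes the producer without reconnecting.
    enum Command { CLOSE_PRODUCER, STOP_PRODUCER }

    static boolean shouldReconnect(Command cmd) {
        switch (cmd) {
            case CLOSE_PRODUCER: return true;  // current behavior: reconnect
            case STOP_PRODUCER:  return false; // proposed: close and stay closed
            default: throw new IllegalArgumentException("unknown command");
        }
    }

    public static void main(String[] args) {
        System.out.println("CLOSE_PRODUCER reconnects: " + shouldReconnect(Command.CLOSE_PRODUCER));
        System.out.println("STOP_PRODUCER reconnects: " + shouldReconnect(Command.STOP_PRODUCER));
    }
}
```

With `allowAutoTopicCreation=true`, the reconnect path is what re-creates the deleted topic, which is why a non-reconnecting close command (or terminating the topic, as suggested above) avoids the problem.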