[GitHub] [pulsar-helm-chart] lhotari opened a new pull request #194: [CI] Improve change detection by reusing solution from apache/pulsar

2022-01-04 Thread GitBox


lhotari opened a new pull request #194:
URL: https://github.com/apache/pulsar-helm-chart/pull/194


   ### Motivation
   
   - The built-in GitHub "paths" solution will only consider the latest commit 
to the branch. This can cause checks to be skipped for some pull requests.
   
   ### Modifications
   
   - reuse parts of the GitHub Actions CI workflows used in apache/pulsar:
     - change detection
     - .github/actions/tune-runner-vm/action.yml runner VM tuning
     - checkout the merge commit instead of the PR head


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [pulsar-helm-chart] lhotari commented on a change in pull request #192: [CI] Upgrade k8s to 1.18 and also upgrade helm, kind & chart releaser versions

2022-01-04 Thread GitBox


lhotari commented on a change in pull request #192:
URL: https://github.com/apache/pulsar-helm-chart/pull/192#discussion_r777888049



##
File path: .gitignore
##
@@ -16,3 +16,6 @@ charts/**/*.lock
 
 PRIVATEKEY
 PUBLICKEY
+.vagrant/
+pulsarctl-amd64-linux.tar.gz
+pulsarctl-amd64-linux/

Review comment:
   @michaeljmarshall I think it's better to handle the possible removal of 
pulsarctl in another PR.








[DISCUSSION] PIP-121: Pulsar cluster level auto failover

2022-01-04 Thread Hang Chen
https://github.com/apache/pulsar/issues/13315

Pasted below for quoting convenience.


### Motivation
We have geo-replication to support Pulsar cluster-level failover. We
can set up Pulsar cluster A as the primary cluster in data center A, and
set up Pulsar cluster B as the backup cluster in data center B. Then we
configure geo-replication between cluster A and cluster B. All the
clients connect to the Pulsar cluster via DNS. If cluster A goes
down, we switch the DNS to point from cluster A to cluster B. After the
clients resolve to cluster B, they can produce and consume messages
normally. After cluster A recovers, the administrator switches the DNS
back to cluster A.

However, the current method has two shortcomings.
1. The administrator must monitor the status of all Pulsar clusters
and switch the DNS as soon as possible when cluster A goes down. The
switch and recovery are not automatic, and the recovery time is
controlled by the administrator, which puts the administrator under
heavy load.
2. The Pulsar client and the DNS system both have caches. When the
administrator switches the DNS from cluster A to cluster B, it takes
some time for the caches to expire, which delays client recovery and
leads to produce/consume failures.

### Goal
It's better to provide an automatic cluster-level failure recovery
mechanism to make Pulsar cluster failover more effective. We should
support Pulsar clients automatically switching from cluster A to
cluster B when they detect, according to the configured detection
policy, that cluster A is down, and switching back to cluster A when it
has recovered. The reason to switch back to cluster A is that most
applications may be deployed in data center A and have low network cost
when communicating with Pulsar cluster A. If they keep using Pulsar
cluster B, they incur higher network cost and higher produce/consume
latency.

To mitigate the DNS cache problem, we should provide an
administrator-controlled switch provider that administrators use to
update service URLs.

In the end, we should provide both an automatic service URL switch
provider and an administrator-controlled switch provider.

### Design
We have already provided the `ServiceUrlProvider` interface to support
different service URLs. In order to support automatic cluster-level
failure recovery, we can provide different `ServiceUrlProvider`
implementations. For the current requirements, we can provide
`AutoClusterFailover` and `ControlledClusterFailover`.

#### AutoClusterFailover
In order to support auto switching from the primary cluster to the
secondary, we can provide a probe task, which probes the activity of
the primary cluster and the secondary one. When it finds the primary
cluster has failed for more than `failoverDelayMs`, it switches to the
secondary cluster by calling `updateServiceUrl`. After switching to
the secondary cluster, the `AutoClusterFailover` continues to probe
the primary cluster. If the primary cluster comes back and remains
active for `switchBackDelayMs`, it switches back to the primary
cluster.
The APIs are listed as follows.

In order to support multiple secondary clusters, a `List<String>` is
used to store the secondary cluster URLs. When the primary cluster
probe fails for `failoverDelayMs`, the provider starts to probe the
secondary cluster list one by one; once it finds an active cluster, it
switches to that cluster. Notice: if you configure multiple clusters,
you should enable cluster-level geo-replication to ensure topic data is
synced between all primary and secondary clusters. Otherwise, topic
data may end up distributed across different clusters, and consumers
won't get the complete data of the topic.
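As a concrete illustration of the timing rules above, the switch/switch-back decision can be sketched as a small, self-contained helper. This is only a sketch under assumed names (`FailoverDecider`, `probe`), not the proposed implementation; in the proposal this logic would run inside the scheduled probe task:

```java
// Illustrative decision logic for the described failover behavior:
// switch away after the primary has failed for failoverDelayMs, and
// switch back after it has stayed healthy for switchBackDelayMs.
class FailoverDecider {
    private final long failoverDelayMs;
    private final long switchBackDelayMs;
    private long failedSinceMs = -1;     // -1: primary not currently failing
    private long recoveredSinceMs = -1;  // -1: primary not currently recovering
    private boolean onPrimary = true;

    FailoverDecider(long failoverDelayMs, long switchBackDelayMs) {
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
    }

    /** Feed one probe result; returns true when the service URL should be switched. */
    boolean probe(boolean primaryAlive, long nowMs) {
        if (onPrimary) {
            if (primaryAlive) {
                failedSinceMs = -1;                  // healthy again, reset timer
                return false;
            }
            if (failedSinceMs < 0) failedSinceMs = nowMs;
            if (nowMs - failedSinceMs >= failoverDelayMs) {
                onPrimary = false;                   // switch to a secondary
                recoveredSinceMs = -1;
                return true;
            }
            return false;
        }
        if (!primaryAlive) {
            recoveredSinceMs = -1;                   // still down, reset timer
            return false;
        }
        if (recoveredSinceMs < 0) recoveredSinceMs = nowMs;
        if (nowMs - recoveredSinceMs >= switchBackDelayMs) {
            onPrimary = true;                        // switch back to primary
            failedSinceMs = -1;
            return true;
        }
        return false;
    }

    boolean isOnPrimary() { return onPrimary; }
}
```

A real provider would react to a `true` return value by calling `updateServiceUrl` with the chosen cluster's URL.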

In order to support different authentication configurations between
clusters, we provide authentication-related configurations that are
updated along with the target cluster.

```Java
public class AutoClusterFailover implements ServiceUrlProvider {

    private AutoClusterFailover(String primary, List<String> secondary,
                                long failoverDelayNs, long switchBackDelayNs,
                                long intervalMs,
                                Authentication primaryAuthentication,
                                List<Authentication> secondaryAuthentications,
                                String primaryTlsTrustCertsFilePath,
                                List<String> secondaryTlsTrustCertsFilePaths,
                                String primaryTlsTrustStorePath,
                                List<String> secondaryTlsTrustStorePaths,
                                String primaryTlsTrustStorePassword,
                                List<String> secondaryTlsTrustStorePasswords) {
        //
    }

    @Override
    public void initialize(PulsarClient client) {
        this.pulsarClient = client;

        // start to probe whether the primary cluster is active or not
        executor.scheduleAtFixedRate(catchingAndLoggingThrowables(() -> {
            // probe and switch
        }), intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    @Override
    public String getServiceUrl() {
        return this.currentPulsarServiceUrl;
    }

    @Overrid

[GitHub] [pulsar-helm-chart] lhotari merged pull request #194: [CI] Improve change detection by reusing solution from apache/pulsar

2022-01-04 Thread GitBox


lhotari merged pull request #194:
URL: https://github.com/apache/pulsar-helm-chart/pull/194


   






Lifting Kubernetes minimum version requirement for Apache Pulsar Helm Charts from k8s 1.14 to 1.18

2022-01-04 Thread Lari Hotari
Hi all,

Currently, k8s 1.14 is used in CI to verify the Helm chart changes.

k8s 1.14 became end-of-life on 2019-12-11, over two years ago [1].
The oldest maintained Kubernetes version is 1.20, and it will become
end-of-life on 2022-02-28, less than two months from now [2].

There's a PR to lift the minimum requirement for Apache Pulsar Helm Charts
to 1.18 so that we don't fall too far behind.
https://github.com/apache/pulsar-helm-chart/pull/192

Please review. If the PR gets approved and merged, the Kubernetes minimum
version requirement will be lifted to Kubernetes 1.18.


Best regards,

Lari


[1] https://kubernetes.io/releases/patch-releases/#non-active-branch-history
[2] https://kubernetes.io/releases/patch-releases/#1-20


[GitHub] [pulsar-helm-chart] lhotari commented on pull request #192: [CI] Upgrade k8s to 1.18 and also upgrade helm, kind & chart releaser versions

2022-01-04 Thread GitBox


lhotari commented on pull request #192:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/192#issuecomment-1004658284


   Mailing list discussion: 
https://lists.apache.org/thread/jwhb980svfm8rfbd7grswzb1dzf964ny






[GitHub] [pulsar-helm-chart] mkoertgen commented on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen commented on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004706854


   Kind of a similar problem for me. Reinstalling a Pulsar cluster (chart=2.7.7) 
with initialize=true results in brokers and proxies getting stuck waiting for 
ZooKeeper.
   
   I could track it down to incorrect arguments of **pulsar-init-job**. 
   Note that the init-job exited with success.
   
   
![image](https://user-images.githubusercontent.com/7235760/148048119-44b3e3e0-96b4-47e4-a589-d4e98e846e82.png)
   
   Here are the logs:
   ```
   Exception in thread "main" com.beust.jcommander.ParameterException: The 
following options are required: [-cs | --configuration-store], [-uw | 
--web-service-url], [-zk | --zookeeper], [-c | --cluster]
at com.beust.jcommander.JCommander.validateOptions(JCommander.java:388)
at com.beust.jcommander.JCommander.parse(JCommander.java:357)
at com.beust.jcommander.JCommander.parse(JCommander.java:335)
at 
org.apache.pulsar.PulsarClusterMetadataSetup.main(PulsarClusterMetadataSetup.java:146)
   Usage:  [options]
 Options:
   -ub, --broker-service-url
 Broker-service URL for new cluster
   -tb, --broker-service-url-tls
 Broker-service URL for new cluster with TLS encryption
 * -c, --cluster
 Cluster name
 * -cs, --configuration-store
 Configuration Store connection string
   --existing-bk-metadata-service-uri
 The metadata service URI of the existing BookKeeper cluster that you 
 want to use
   -h, --help
 Show this help message
 Default: false
   --initial-num-stream-storage-containers
 Num storage containers of BookKeeper stream storage
 Default: 16
   --initial-num-transaction-coordinators
 Num transaction coordinators will assigned in cluster
 Default: 16
 * -uw, --web-service-url
 Web-service URL for new cluster
   -tw, --web-service-url-tls
 Web-service URL for new cluster with TLS encryption
 * -zk, --zookeeper
 Local ZooKeeper quorum connection string
   --zookeeper-session-timeout-ms
 Local zookeeper session timeout ms
 Default: 3
   
   sh: 4: --cluster: not found
   ```






[GitHub] [pulsar-helm-chart] mkoertgen edited a comment on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen edited a comment on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004706854


   Kind of a similar problem for me. Reinstalling a Pulsar cluster (chart=2.7.7) 
with initialize=true results in brokers and proxies getting stuck waiting for 
ZooKeeper.
   
   
![image](https://user-images.githubusercontent.com/7235760/148048119-44b3e3e0-96b4-47e4-a589-d4e98e846e82.png)
   
   I could track it down to incorrect arguments of **pulsar-init-job**. Here 
are some logs:
   ```
   Exception in thread "main" com.beust.jcommander.ParameterException: The 
following options are required: [-cs | --configuration-store], [-uw | 
--web-service-url], [-zk | --zookeeper], [-c | --cluster]
at com.beust.jcommander.JCommander.validateOptions(JCommander.java:388)
at com.beust.jcommander.JCommander.parse(JCommander.java:357)
at com.beust.jcommander.JCommander.parse(JCommander.java:335)
at 
org.apache.pulsar.PulsarClusterMetadataSetup.main(PulsarClusterMetadataSetup.java:146)
   Usage:  [options]
 Options:
   -ub, --broker-service-url
 Broker-service URL for new cluster
   -tb, --broker-service-url-tls
 Broker-service URL for new cluster with TLS encryption
 * -c, --cluster
 Cluster name
 * -cs, --configuration-store
 Configuration Store connection string
   --existing-bk-metadata-service-uri
 The metadata service URI of the existing BookKeeper cluster that you 
 want to use
   -h, --help
 Show this help message
 Default: false
   --initial-num-stream-storage-containers
 Num storage containers of BookKeeper stream storage
 Default: 16
   --initial-num-transaction-coordinators
 Num transaction coordinators will assigned in cluster
 Default: 16
 * -uw, --web-service-url
 Web-service URL for new cluster
   -tw, --web-service-url-tls
 Web-service URL for new cluster with TLS encryption
 * -zk, --zookeeper
 Local ZooKeeper quorum connection string
   --zookeeper-session-timeout-ms
 Local zookeeper session timeout ms
 Default: 3
   
   sh: 4: --cluster: not found
   ```
   
   Note that the init-job exited successfully because there is an `|| true` that 
swallows all errors.
   
![image](https://user-images.githubusercontent.com/7235760/148048472-9abfbb5f-6db8-4ace-b174-5a6ca7c0ce13.png)
   
   I guess that should be considered a problem, should it not?






[GitHub] [pulsar-helm-chart] mkoertgen commented on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen commented on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004718311


   To me it looks like the problem has to do with Helm templates expanding 
empty lines without line continuations (`\`), causing the shell call to fail. 
   Here is what Helm renders:
   
   ```
   bin/pulsar initialize-cluster-metadata \
   
   --cluster pulsar \
   --zookeeper pulsar-zookeeper:2181 \
   --configuration-store pulsar-zookeeper:2181 \
   --web-service-url 
http://pulsar-broker.pulsar.svc.cluster.local:8080/ \
   --web-service-url-tls 
https://pulsar-broker.pulsar.svc.cluster.local:8443/ \
   --broker-service-url 
pulsar://pulsar-broker.pulsar.svc.cluster.local:6650/ \
   --broker-service-url-tls 
pulsar+ssl://pulsar-broker.pulsar.svc.cluster.local:6651/ || true;
   ```
   
   I exec'd into the shell of one of the running bookies to reproduce the problem, 
and yes, with the empty line it gives the error message noted above.
   
   Manually running the cluster initialization with the empty line removed got 
the cluster working again in my case.
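The failure mode can be reproduced in any POSIX shell (illustrative snippet, not taken from the chart): a trailing backslash only joins the immediately following line, so a blank line terminates the command early, and the remaining options are then executed as a command of their own.

```shell
# The first line ends with "\", but the next line is BLANK, so the command
# ends there; "--cluster pulsar" then runs as a separate (failing) command,
# matching the "sh: 4: --cluster: not found" error in the logs above.
out=$(echo bin/pulsar initialize-cluster-metadata \

--cluster pulsar 2>/dev/null || true)
echo "$out"
```

Only the `echo` line's output survives; the `--cluster pulsar` line fails with "not found", and the trailing `|| true` hides that failure.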






Re: Pulsar Summit Asia 2021 will be online only

2022-01-04 Thread Pulsar Summit Team
Hi Apache Pulsar Community,

Hope that you have all enjoyed a wonderful start of 2022!

We are sending this email to update you on the status of Pulsar Summit Asia 2021.
Considering the current situation, Pulsar Summit Asia 2021 will be *online
only* instead of hybrid (both in-person and online). The event will still
be held on *Jan 15-16, 2022*. We look forward to meeting you online.


Best regards,
Pulsar Summit Team

On Wed, Dec 8, 2021 at 5:50 PM Pulsar Summit Team <
organiz...@pulsar-summit.org> wrote:

> Hi Piper,
>
> Currently we have no plan for an in-person conference in Tokyo. You can
> register for Pulsar Summit Asia 2021 at Hopin[1] and join us online.
>
> For the further plan of Pulsar Summit 2022, we are looking forward to a
> discussion on this proposal if any community member can help organize the
> summit in Tokyo.
>
> [1] Hopin: https://hopin.com/events/pulsar-summit-asia-2021
>
>
> Best,
> Pulsar Summit Team
>
> On Fri, Nov 19, 2021 at 12:54 AM Aaron Williams <
> aaron.willi...@datastax.com> wrote:
>
>> Not that I know of, although there is a meetup group in Japan:
>> https://japan-pulsar-user-group.connpass.com/
>>
>> Thanks,
>> Aaron
>>
>> On Thu, Nov 18, 2021 at 2:37 AM Piper H  wrote:
>>
>>> Is there a coming offline Summit in Tokyo, JP?
>>>
>>> Thanks
>>>
>>> On Thu, Nov 18, 2021 at 4:57 PM Pulsar Summit Team <
>>> organiz...@pulsar-summit.org> wrote:
>>>
 Hi Apache Pulsar Community,

 Greetings from the Pulsar Summit team! Thank you to everyone who has been
 supporting Pulsar Summit Asia 2021. Unfortunately, because of the
 pandemic, the date of Pulsar Summit Asia 2021 has to be postponed
 from *Nov 20-21, 2021* to *Jan 15-16, 2022*. Ensuring a safe
 conference environment for our community members is always the priority.
 Pulsar Summit Asia 2021 will still be a hybrid event, as we originally
 designed.

 In the meantime, more session submissions are welcome! We believe that
 more Pulsar successes will emerge during the extension; submit your story
 here before *December 15th* if you have reached further achievements
 in running Pulsar!

 Please stay tuned and stay healthy. We will post updates if the
 situation changes.

 [1] *Submit your session here*:
 https://sessionize.com/pulsar-summit-asia-2021/
 
 [2] *Register for Pulsar Summit Asia 2021*:
 https://sessionize.com/pulsar-summit-asia-2021/
 


 Best Regards,
 Pulsar Summit Team

>>>


[GitHub] [pulsar-helm-chart] mkoertgen commented on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen commented on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004721707


   Strange, because the Helm template looks fine to me. I have no idea where 
this extra line is coming from:
   - 
https://github.com/apache/pulsar-helm-chart/blob/master/charts/pulsar/templates/pulsar-cluster-initialize.yaml#L92






[GitHub] [pulsar-helm-chart] mkoertgen edited a comment on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen edited a comment on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004721707


   Strange, because the Helm template looks fine to me. I have no idea where 
this extra line is coming from:
   - 
https://github.com/apache/pulsar-helm-chart/blob/master/charts/pulsar/templates/pulsar-cluster-initialize.yaml#L92
   Git blame shows no changes there in about the last 16 months.






[GitHub] [pulsar-helm-chart] mkoertgen commented on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


mkoertgen commented on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004724440


   A suggestion for the future: would it not be possible to default the 
`initialize=true` value from [Helm's 
builtin-objects](https://helm.sh/docs/chart_template_guide/builtin_objects/), 
e.g. `{{- if .Release.IsInstall }}`?
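A hypothetical sketch of that suggestion applied to the cluster-initialize job template (the guard condition and value names below are assumptions, not the chart's actual code; the real template lives in `charts/pulsar/templates/pulsar-cluster-initialize.yaml`):

```yaml
# Hypothetical: render the init job when explicitly requested, or by
# default on a first "helm install", using Helm's built-in Release object.
{{- if or .Values.initialize .Release.IsInstall }}
apiVersion: batch/v1
kind: Job
# ... rest of the cluster-initialize job ...
{{- end }}
```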






[GitHub] [pulsar-helm-chart] lhotari opened a new pull request #195: [CI] Improve logging in CI scripts and add timeouts

2022-01-04 Thread GitBox


lhotari opened a new pull request #195:
URL: https://github.com/apache/pulsar-helm-chart/pull/195


   ### Motivation
   
   Currently it's hard to investigate CI failures since there isn't sufficient 
logging.
   
   ### Modifications
   
   - show Kubernetes events every 15 seconds
   - dump logs for all pods in the pulsar namespace every 5 minutes
   - dump logs for all pods in the pulsar namespace when timeout occurs
   - since CI scripts might be run locally, add support for `timeout` on macOS 
by instructing the user to install the coreutils brew package
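The macOS `timeout` support can be sketched as a small lookup helper (an assumed sketch of the approach, not the PR's actual script; Homebrew's coreutils package installs GNU `timeout` as `gtimeout` on macOS):

```shell
# Pick a usable "timeout" binary: native on Linux, gtimeout from the
# coreutils brew package on macOS; fail with a hint otherwise.
find_timeout() {
  if command -v timeout >/dev/null 2>&1; then
    echo timeout
  elif command -v gtimeout >/dev/null 2>&1; then
    echo gtimeout
  else
    echo "timeout not found; install it with 'brew install coreutils'" >&2
    return 1
  fi
}
TIMEOUT_CMD=$(find_timeout) || exit 1
"$TIMEOUT_CMD" 5 sleep 1 && echo "finished within the limit"
```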






[GitHub] [pulsar-helm-chart] lhotari commented on pull request #190: Bump to Pulsar 2.8.2

2022-01-04 Thread GitBox


lhotari commented on pull request #190:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/190#issuecomment-1004765915


   I created PR #195, which adds better logging to CI and would help in 
investigating CI failures. I have observed the "ZK TLS Only" CI job failing, 
presumably due to the known problem. However, it would be nice to see the logs 
too once #195 is reviewed and merged.






[DISCUSSION] PIP-122: Change loadBalancer default loadSheddingStrategy to ThresholdShedder

2022-01-04 Thread Hang Chen
https://github.com/apache/pulsar/issues/13340

Pasted below for quoting convenience.


### Motivation
The `ThresholdShedder` load balance policy has been available since Pulsar
2.6.0 (introduced by https://github.com/apache/pulsar/pull/6772). It can
resolve many load balance issues of `OverloadShedder` and works well in many
Pulsar production clusters.

In Pulsar 2.6.0, 2.7.0, 2.8.0 and 2.9.0, Pulsar's default load balance
policy is `OverloadShedder`.

I think 2.10 is a good time to change the default load balance
policy to `ThresholdShedder`; it will make throughput more balanced
between brokers.

### Proposed Changes
In the 2.10 release, in `broker.conf`, change
`loadBalancerLoadSheddingStrategy` from
`org.apache.pulsar.broker.loadbalance.impl.OverloadShedder` to
`org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`.


RE: [VOTE] PIP-130: Apply redelivery backoff policy for ack timeout

2022-01-04 Thread 刘德志


+1

Thanks, 
Dezhi Liu 

[GitHub] [pulsar-helm-chart] lhotari merged pull request #195: [CI] Improve logging in CI scripts and add timeouts

2022-01-04 Thread GitBox


lhotari merged pull request #195:
URL: https://github.com/apache/pulsar-helm-chart/pull/195


   






[GitHub] [pulsar-helm-chart] lhotari closed pull request #178: [Performance] Remove -XX:-ResizePLAB JVM option which degrades performance on JDK11

2022-01-04 Thread GitBox


lhotari closed pull request #178:
URL: https://github.com/apache/pulsar-helm-chart/pull/178


   






[GitHub] [pulsar-helm-chart] lhotari commented on pull request #178: [Performance] Remove -XX:-ResizePLAB JVM option which degrades performance on JDK11

2022-01-04 Thread GitBox


lhotari commented on pull request #178:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/178#issuecomment-1004907742


   this change is already included in #190 






Re: [DISCUSSION] PIP-122: Change loadBalancer default loadSheddingStrategy to ThresholdShedder

2022-01-04 Thread Michael Marshall
Hi Hang Chen,

I support changing the default for 2.10.

However, as far as I can tell, there are unit tests for the
`OverloadShedder` class but not for the `ThresholdShedder` class. I
think we should add unit tests before we change the default.

Regarding integration tests, I assume that we implicitly test the
default LoadShedder, but I haven't checked myself.

Thanks,
Michael

On Tue, Jan 4, 2022 at 8:24 AM Hang Chen  wrote:
>
> https://github.com/apache/pulsar/issues/13340
>
> Pasted below for quoting convenience.
>
> 
> ### Motivation
> The `ThresholdShedder` load balance policy has been available since Pulsar
> 2.6.0 (introduced by https://github.com/apache/pulsar/pull/6772). It can
> resolve many load balance issues of `OverloadShedder` and works well in many
> Pulsar production clusters.
>
> In Pulsar 2.6.0, 2.7.0, 2.8.0 and 2.9.0, Pulsar's default load balance
> policy is `OverloadShedder`.
>
> I think 2.10 is a good time to change the default load balance
> policy to `ThresholdShedder`; it will make throughput more balanced
> between brokers.
>
> ### Proposed Changes
> In the 2.10 release, in `broker.conf`, change
> `loadBalancerLoadSheddingStrategy` from
> `org.apache.pulsar.broker.loadbalance.impl.OverloadShedder` to
> `org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`.


[GitHub] [pulsar-helm-chart] michaeljmarshall merged pull request #166: Workaround kustomize bug in pulsar cluster init

2022-01-04 Thread GitBox


michaeljmarshall merged pull request #166:
URL: https://github.com/apache/pulsar-helm-chart/pull/166


   






[GitHub] [pulsar-helm-chart] michaeljmarshall commented on pull request #194: [CI] Improve change detection by reusing solution from apache/pulsar

2022-01-04 Thread GitBox


michaeljmarshall commented on pull request #194:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/194#issuecomment-1005035759


   +1






[GitHub] [pulsar-helm-chart] lhotari commented on issue #134: Change to PULSAR_GC in values.yaml -XX args for Pulsar 2.8.0 needed

2022-01-04 Thread GitBox


lhotari commented on issue #134:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/134#issuecomment-1005048226


   This is handled as part of #190






[GitHub] [pulsar-helm-chart] lhotari commented on issue #124: pulsar waiting to start: PodInitializing

2022-01-04 Thread GitBox


lhotari commented on issue #124:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1005049161


   > To me it looks like the problem has to do with Helm templates expanding 
empty lines without line continuations (`\`), causing the shell call to fail. 
   > Here is what Helm renders:
   > 
   > ```
   > bin/pulsar initialize-cluster-metadata \
   > 
   > --cluster pulsar \
   > --zookeeper pulsar-zookeeper:2181 \
   > --configuration-store pulsar-zookeeper:2181 \
   > --web-service-url 
http://pulsar-broker.pulsar.svc.cluster.local:8080/ \
   > --web-service-url-tls 
https://pulsar-broker.pulsar.svc.cluster.local:8443/ \
   > --broker-service-url 
pulsar://pulsar-broker.pulsar.svc.cluster.local:6650/ \
   > --broker-service-url-tls 
pulsar+ssl://pulsar-broker.pulsar.svc.cluster.local:6651/ || true;
   > ```
   > 
   > I exec'd into the shell of one of the running bookies to reproduce the 
problem, and yes, with the empty line it gives the error message noted above.
   > 
   > Manually running the cluster initialization with the empty line removed 
got the cluster working again in my case.
   
   This issue seems to be resolved by #166






[GitHub] [pulsar-helm-chart] lhotari commented on issue #114: Use hooks instead of initialize=true

2022-01-04 Thread GitBox


lhotari commented on issue #114:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/114#issuecomment-1005052105


   Another solution proposed by @mkoertgen  
https://github.com/apache/pulsar-helm-chart/issues/124#issuecomment-1004724440






[GitHub] [pulsar-helm-chart] frankjkelly commented on pull request #181: [Istio] Provide ability to kill istio proxy in sidecar when Init Job container has completed

2022-01-04 Thread GitBox


frankjkelly commented on pull request #181:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/181#issuecomment-1005168159


   Helm chart 2.7.7 was already released. See
   https://github.com/apache/pulsar-helm-chart/releases/tag/pulsar-2.7.7
   and
   https://pulsar.apache.org/charts/index.yaml






[GitHub] [pulsar-helm-chart] ckdarby commented on pull request #190: Bump to Pulsar 2.8.2

2022-01-04 Thread GitBox


ckdarby commented on pull request #190:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/190#issuecomment-1005227662


   > > Also, why should we ship 2.8 as default instead of 2.9.1?
   > 
   > I guess it could be about doing one step at a time (2.7.x -> 2.8.x, then 
2.8.x->2.9.x) and also about stability.
   
   It would also be community friendly to do 2.8.X as a helm release version 
and *then* 2.9.X as it allows the community to run 2.8 from a helm chart and 
not need to override tags.






[OUTREACH] Apache Pulsar is #5 in commits for all ASF Projects!

2022-01-04 Thread Aaron Williams
Hello Apache Pulsar Neighbors,

In our email this morning we had some exciting news: the ASF has released
their "Apache by the digits" blog post and Apache Pulsar is the fifth most
popular project by commits!

Pretty amazing, eh?  Check out our blog post, the original article, and the
YouTube video (cued to the moment where they show Pulsar =) ), and the tweet.
Please help spread the word of this amazing achievement!

Thanks,
Aaron Williams
Resident of the Apache Pulsar Neighborhood


Re: [VOTE] PIP-130: Apply redelivery backoff policy for ack timeout

2022-01-04 Thread Enrico Olivelli
+1 (binding)

Enrico

On Tue, Jan 4, 2022, 16:19 刘德志 wrote:

>
>
> +1
>
> Thanks,
> Dezhi Liu


Re: [DISCUSSION] PIP-122: Change loadBalancer default loadSheddingStrategy to ThresholdShedder

2022-01-04 Thread Hang Chen
Hi Michael,
Thanks for your review, I will push a PR to add a test for
`ThresholdShedder`.

Best,
Hang

Michael Marshall wrote on Wed, Jan 5, 2022 at 00:55:
>
> Hi Hang Chen,
>
> I support changing the default for 2.10.
>
> However, as far as I can tell, there are unit tests for the
> `OverloadShedder` class but not for the `ThresholdShedder` class. I
> think we should add unit tests before we change the default.
>
> Regarding integration tests, I assume that we implicitly test the
> default LoadShedder, but I haven't checked myself.
>
> Thanks,
> Michael
>
> On Tue, Jan 4, 2022 at 8:24 AM Hang Chen  wrote:
> >
> > https://github.com/apache/pulsar/issues/13340
> >
> > Pasted below for quoting convenience.
> >
> > 
> > ### Motivation
> > The ThresholdShedder load balance policy has been available since Pulsar
> > 2.6.0, introduced by https://github.com/apache/pulsar/pull/6772. It can
> > resolve many load balance issues of `OverloadShedder` and works well in
> > many Pulsar production clusters.
> >
> > In Pulsar 2.6.0, 2.7.0, 2.8.0 and 2.9.0, Pulsar's default load balance
> > policy is `OverloadShedder`.
> >
> > I think it's a good time for 2.10 to change the default load balance
> > policy to `ThresholdShedder`; it will make throughput more balanced
> > between brokers.
> >
> > ### Proposed Changes
> > In the 2.10 release, in `broker.conf`, change
> > `loadBalancerLoadSheddingStrategy` from
> > `org.apache.pulsar.broker.loadbalance.impl.OverloadShedder` to
> > `org.apache.pulsar.broker.loadbalance.impl.ThresholdShedder`


Re: [VOTE] PIP-130: Apply redelivery backoff policy for ack timeout

2022-01-04 Thread Hang Chen
+1 (binding)

Best,
Hang

Enrico Olivelli wrote on Wed, Jan 5, 2022 at 06:52:
>
> +1 (binding)
>
> Enrico
>
> On Tue, Jan 4, 2022, 16:19 刘德志 wrote:
>
> >
> >
> > +1
> >
> > Thanks,
> > Dezhi Liu


Re: [DISCUSSION] PIP-121: Pulsar cluster level auto failover

2022-01-04 Thread PengHui Li
+1

Penghui

On Tue, Jan 4, 2022 at 4:51 PM Hang Chen  wrote:

> https://github.com/apache/pulsar/issues/13315
>
> Pasted below for quoting convenience.
>
> 
> ### Motivation
> We have geo-replication to support Pulsar cluster level failover. We
> can setup Pulsar cluster A as a primary cluster in data center A, and
> setup Pulsar cluster B as backup cluster in data center B. Then we
> configure geo-replication between cluster A and cluster B. All the
> clients are connected to the Pulsar cluster by DNS. If cluster A is
> down, we should switch the DNS to point the target Pulsar cluster from
> cluster A to cluster B. After the clients are resolved to cluster B,
> they can produce and consume messages normally. After cluster A
> recovers, the administrator should switch the DNS back to cluster A.
>
> However, the current method has two shortcomings.
> 1. The administrator should monitor the status of all Pulsar clusters,
> and switch the DNS as soon as possible when cluster A is down. The
> switch and recovery is not automatic and recovery time is controlled
> by the administrator, which will put the administrator under heavy
> load.
> 2. The Pulsar client and DNS system have caches. When the
> administrator switches the DNS from cluster A to cluster B, it will
> take some time for the caches to time out, which delays client
> recovery and leads to produce/consume message failures.
>
> ### Goal
> It's better to provide an automatic cluster level failure recovery
> mechanism to make pulsar cluster failover more effective. We should
> support pulsar clients auto switching from cluster A to cluster B when
> it detects cluster A has been down according to the configured
> detecting policy and switch back to cluster A when it has recovered.
> The reason why we should switch back to cluster A is that most
> applications may be deployed in data center A and they have low
> network cost for communicating with pulsar cluster A. If they keep
> visiting pulsar cluster B, they have high network cost, and cause high
> produce/consume latency.
>
> In order to improve the DNS cache problem, we should provide an
> administrator controlled switch provider for administrators to update
> service URLs.
>
> In the end, we should provide an auto service URL switch provider and
> administrator controlled switch provider.
>
> ### Design
> We have already provided the `ServiceUrlProvider` interface to support
> different service URLs. In order to support automatic cluster level
> failure auto recovery, we can provide different ServiceUrlProvider
> implementations. For current requirements, we can provide
> `AutoClusterFailover` and `ControlledClusterFailover`.
>
>  AutoClusterFailover
> In order to support auto switching from the primary cluster to the
> secondary, we can provide a probe task, which will probe the activity
> of the primary cluster and the secondary one. When it finds the
> primary cluster has been failing for more than `failoverDelayMs`, it will switch to
> the secondary cluster by calling `updateServiceUrl`. After switching
> to the secondary cluster, the `AutoClusterFailover` will continue to
> probe the primary cluster. If the primary cluster comes back and
> remains active for `switchBackDelayMs`, it will switch back to the
> primary cluster.
> The APIs are listed as follows.
>
> In order to support multiple secondary clusters, use List to store
> secondary cluster urls. When the primary cluster probe fails for
> failoverDelayMs, it will start to probe the secondary cluster list one
> by one, once it finds the active cluster, it will switch to the target
> cluster. Notice: If you configured multiple clusters, you should turn
> on cluster level geo-replication to ensure the topic data sync between
> all primary and secondary clusters. Otherwise, it may distribute the
> topic data into different clusters. And the consumers won’t get the
> whole data of the topic.
>
> In order to support different authentication configurations between
> clusters, we provide the authentication relation configurations
> updated with the target cluster.
>
> ```Java
> public class AutoClusterFailover implements ServiceUrlProvider {
>
>     private AutoClusterFailover(String primary, List<String> secondary,
>             long failoverDelayNs, long switchBackDelayNs,
>             long intervalMs, Authentication primaryAuthentication,
>             List<Authentication> secondaryAuthentications,
>             String primaryTlsTrustCertsFilePath,
>             List<String> secondaryTlsTrustCertsFilePaths,
>             String primaryTlsTrustStorePath,
>             List<String> secondaryTlsTrustStorePaths,
>             String primaryTlsTrustStorePassword,
>             List<String> secondaryTlsTrustStorePasswords) {
>         //
>     }
>
>     @Override
>     public void initialize(PulsarClient client) {
>         this.pulsarClient = client;
>
>         // start to probe primary cluster active or not
>         executor.scheduleAtFixedRate(/* probe task; the rest of the
>         quoted message was truncated in the digest */);
>     }
> }
> ```
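
The probe/switch behaviour described in the quoted proposal can be sketched as a small deterministic state machine. Everything below is illustrative, not the actual `AutoClusterFailover` implementation: the `Probe` interface and all names are invented for the example, and the current time is injected so the logic can be exercised without a scheduler or a real Pulsar client.

```java
import java.util.List;

// Sketch of the PIP-121 failover policy: stay on the primary while it is
// healthy, switch to the first healthy secondary once the primary has been
// failing longer than failoverDelayMs, and switch back once the primary has
// been healthy again for switchBackDelayMs.
class FailoverSketch {
    interface Probe { boolean isHealthy(String serviceUrl); }

    private final String primary;
    private final List<String> secondaries;
    private final long failoverDelayMs;
    private final long switchBackDelayMs;
    private final Probe probe;

    private String current;
    private long primaryFailedSinceMs = -1;   // -1 = not currently failing
    private long primaryHealthySinceMs = -1;  // -1 = not currently healthy

    FailoverSketch(String primary, List<String> secondaries,
                   long failoverDelayMs, long switchBackDelayMs, Probe probe) {
        this.primary = primary;
        this.secondaries = secondaries;
        this.failoverDelayMs = failoverDelayMs;
        this.switchBackDelayMs = switchBackDelayMs;
        this.probe = probe;
        this.current = primary;
    }

    String currentServiceUrl() { return current; }

    // One probe tick; nowMs is passed in to keep the sketch deterministic.
    void tick(long nowMs) {
        boolean primaryUp = probe.isHealthy(primary);
        if (primaryUp) {
            primaryFailedSinceMs = -1;
            if (primaryHealthySinceMs < 0) primaryHealthySinceMs = nowMs;
        } else {
            primaryHealthySinceMs = -1;
            if (primaryFailedSinceMs < 0) primaryFailedSinceMs = nowMs;
        }
        if (current.equals(primary)) {
            if (!primaryUp && nowMs - primaryFailedSinceMs >= failoverDelayMs) {
                // probe the secondary list one by one, take the first active one
                for (String s : secondaries) {
                    if (probe.isHealthy(s)) { current = s; break; }
                }
            }
        } else if (primaryUp && nowMs - primaryHealthySinceMs >= switchBackDelayMs) {
            current = primary; // primary stayed healthy long enough: switch back
        }
    }
}
```

In the real client this `tick` would run inside the scheduled probe task, and a switch would call `PulsarClient.updateServiceUrl` instead of only updating a field.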

Unsubscribe

2022-01-04 Thread liber xue
Unsubscribe


Re: [DISCUSSION] PIP-124: Create init subscription before sending message to DLQ

2022-01-04 Thread PengHui Li
Thanks for the great comments, Michael.

Let me try to clarify some context about the issue that users encountered
and the improvement that the proposal wants to Introduce.

> Before we get further into the implementation, I'd like to discuss
whether the current behavior is the expected behavior, as this is
the key motivation for this feature.

The DLQ can be created dynamically, and users might have short
data retention for a namespace by time or by size. But the messages
in the DLQ are usually processed later for compensation, so we should allow
users to keep the data in the DLQ until they want to delete it manually.

The DLQ always belongs to a subscriber, so a subscriber can use an initial
subscription name to keep messages from being cleaned up from the DLQ.

So the key point of this proposal is to keep data in the lazily created DLQ
topic until users want to delete it manually.

> I think the DLQ's current behavior is the expected behavior because
the DLQ is only a topic and topics lose messages unless they have a
subscription or a retention policy.

Yes, essentially the DLQ is only a topic, with no other specific behaviors.
But the proposal does not aim to introduce special behavior for the DLQ
topic; it just makes it convenient, from the perspective of the DLQ use
case, for users to keep data in the DLQ.

Without this option, it is not easy to set a subscription or a data
retention policy for a lazily created DLQ topic.

> I admit that it is not necessarily a nice default behavior to
potentially lose messages, but this is the design for all topics.
Based on the current design, an admin can create a retention policy
for the topic or namespace. Then, consumers of the
topic have the duration of the retention policy to discover the topic
and create a subscription before messages are lost. Is there a reason
this solution doesn't work for the DLQ topic?

The difference here is when the subscriber subscribes to the topic.
For a normal topic, the expected behavior is that the subscriber is able to
read all messages of the topic; it can start consuming from the earliest,
the latest, or any other valid position. But the DLQ contains part of the
original data for a subscription, and users don't expect to miss the head
messages in the DLQ. Otherwise, you might get 1, 2, 3 first, with 4, 5
going to the DLQ, and then continue to receive 6, 7, while 4, 5 might have
been removed automatically by Pulsar.

The current solution does not work well for the DLQ topic because it is not
easy for users to set a different data retention policy or create a new
subscription for a lazily created DLQ topic.

> As an aside, I wonder if topic discoverability is part of the problem
here. It would be extremely valuable to get notifications any
time a topic is created. That would allow users to move away from
polling for current topic names towards a more reactive design.

Notifications are a good idea, but for this case they have some drawbacks:

   1. A delayed notification might not achieve the purpose.
   2. The complexity increases: auth for the notifications, and users need
   to handle the events.

But notifications can help in lots of areas, such as improving
observability, etc.

Regards,
Penghui

On Tue, Jan 4, 2022 at 2:41 PM Michael Marshall 
wrote:

> Before we get further into the implementation, I'd like to discuss
> whether the current behavior is the expected behavior, as this is
> the key motivation for this feature.
>
> I think the DLQ's current behavior is the expected behavior because
> the DLQ is only a topic and topics lose messages unless they have a
> subscription or a retention policy.
>
> I admit that it is not necessarily a nice default behavior to
> potentially lose messages, but this is the design for all topics.
> Based on the current design, an admin can create a retention policy
> for the topic or namespace. Then, consumers of the
> topic have the duration of the retention policy to discover the topic
> and create a subscription before messages are lost. Is there a reason
> this solution doesn't work for the DLQ topic?
>
> Perhaps the disconnect here is that users of the DLQ feature do not
> view the DLQ as only a Pulsar topic. I look forward to your thoughts.
>
> As an aside, I wonder if topic discoverability is part of the problem
> here. It would be extremely valuable to get notifications any
> time a topic is created. That would allow users to move away from
> polling for current topic names towards a more reactive design.
>
> Thanks,
> Michael
>
>
> On Tue, Dec 28, 2021 at 7:59 PM Zike Yang
>  wrote:
> >
> > > Oh, that's a very interesting point. I think it'd be easy to add that
> > > as "internal" feature, though I'm a bit puzzled on how to add that to
> > > the producer API
> >
> > I think we can add a field `String initialSubscriptionName` to the
> > Producer Configuration. And add a new field `optional string
> > initial_subscription_name` to the `CommandProducer`.
> > When the Broker handles the C
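
The ordering the proposal relies on can be sketched as follows; all class, field, and method names here are invented for illustration and are not the actual broker code. The point is only the sequence: when a producer carrying an `initialSubscriptionName` triggers lazy creation of the DLQ topic, the subscription is created before the first message is stored, so retention cannot drop messages before a consumer attaches.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy in-memory broker: topics are created on first produce, and the
// producer's initial subscription (if any) is registered before the
// message is appended.
class InitSubscriptionSketch {
    static class Topic {
        final Set<String> subscriptions = new HashSet<>();
        final List<String> messages = new ArrayList<>();
    }

    final Map<String, Topic> topics = new HashMap<>();

    void produce(String topicName, String initialSubscriptionName, String msg) {
        // create the topic lazily, mirroring DLQ topic auto-creation
        Topic t = topics.computeIfAbsent(topicName, n -> new Topic());
        if (initialSubscriptionName != null) {
            // subscription exists before any message is durably stored
            t.subscriptions.add(initialSubscriptionName);
        }
        t.messages.add(msg);
    }

    boolean hasSubscription(String topicName, String sub) {
        Topic t = topics.get(topicName);
        return t != null && t.subscriptions.contains(sub);
    }
}
```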

[GitHub] [pulsar-helm-chart] lhotari closed pull request #161: Bump Pulsar version to 2.8.1

2022-01-04 Thread GitBox


lhotari closed pull request #161:
URL: https://github.com/apache/pulsar-helm-chart/pull/161


   






[GitHub] [pulsar-helm-chart] lhotari commented on pull request #161: Bump Pulsar version to 2.8.1

2022-01-04 Thread GitBox


lhotari commented on pull request #161:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/161#issuecomment-1005446823


   superseded by #190






Re: [DISCUSSION] PIP-132: Include message header size when check maxMessageSize of non-batch message on the client side.

2022-01-04 Thread Haiting Jiang
Hi mattison,

Yes, this is an alternative way to solve the case where the properties are
too large.

But I think this approach is more complex in coding and introduces a new
concept and new configs, and I don't see the benefit of limiting the header
size and payload size separately.

Thanks, 
Haiting Jiang

On 2022/01/04 04:14:33 mattison chao wrote:
> Hi, @Jason918 
> I think we can deprecate maxMessageSize in favor of maxPayloadSize, and add 
> maxHeaderSize to limit the header size.
> After that, we can use maxPayloadSize + maxHeaderSize to get maxMessageSize 
> internally.
> 
> What do you think about it?
> 
> > On Dec 31, 2021, at 8:05 PM, Haiting Jiang  wrote:
> > 
> > https://github.com/apache/pulsar/issues/13591
> > 
> > Pasted below for quoting convenience.
> > 
> > ——
> > 
> > ## Motivation
> > 
> > Currently, Pulsar client (Java) only checks payload size for max message 
> > size validation.
> > 
> > The client throws a TimeoutException if we produce a message with too many 
> > properties, see [1].
> > But the root cause is a TooLongFrameException triggered in the broker 
> > server.
> > 
> > In this PIP, I propose to include the message header size when checking 
> > maxMessageSize of non-batch
> > messages; this brings the following benefits.
> > 1. Clients can throw InvalidMessageException immediately if the properties 
> > take too much storage space.
> > 2. This makes the behaviour consistent with the topic-level max message 
> > size check in the broker.
> > 3. Strictly limiting the entry size to less than maxMessageSize avoids 
> > failures when sending the message to BookKeeper.
> > 
> > ## Goal
> > 
> > Include the message header size when checking maxMessageSize for non-batch messages 
> > on the client side.
> > 
> > ## Implementation
> > 
> > ```
> > // Add a size check in 
> > org.apache.pulsar.client.impl.ProducerImpl#processOpSendMsg
> > if (op.msg != null // for non-batch messages only
> >&& op.getMessageHeaderAndPayloadSize() > ClientCnx.getMaxMessageSize()) {
> >// finish send op with InvalidMessageException
> >releaseSemaphoreForSendOp(op);
> >op.sendComplete(new PulsarClientException(new InvalidMessageException(), 
> > op.sequenceId));
> > }
> > 
> > 
> > // 
> > org.apache.pulsar.client.impl.ProducerImpl.OpSendMsg#getMessageHeaderAndPayloadSize
> > 
> > public int getMessageHeaderAndPayloadSize() {
> >ByteBuf cmdHeader = cmd.getFirst();
> >cmdHeader.markReaderIndex();
> >int totalSize = cmdHeader.readInt();
> >int cmdSize = cmdHeader.readInt();
> >int msgHeadersAndPayloadSize = totalSize - cmdSize - 4;
> >cmdHeader.resetReaderIndex();
> >return msgHeadersAndPayloadSize;
> > }
> > ```
> > 
> > ## Reject Alternatives
> > Add a new property like "maxPropertiesSize" or "maxHeaderSize" in 
> > broker.conf and pass it to 
> > client like maxMessageSize. But the implementation is much more complex, 
> > and don't have the 
> > benefit 2 and 3 mentioned in Motivation.
> > 
> > ## Compatibility Issue
> > As a matter of fact, this PIP narrows down the sendable range. Previously, 
> > when maxMessageSize
> > is 1KB, it's OK to send a message with 1KB of properties and a 1KB payload. But 
> > with this PIP, the 
> > sending will fail with InvalidMessageException.
> > 
> > One conservative way is to add a boolean config "includeHeaderInSizeCheck" 
> > to enable this 
> > feature. But I think it's OK to enable this directly as it's more 
> > reasonable, and I don't see a good 
> > migration plan if we add a config for this.
> > 
> > The compatibility issue is worth discussing. And any suggestions are 
> > appreciated.
> > 
> > [1] https://github.com/apache/pulsar/issues/13560
> > 
> > Thanks,
> > Haiting Jiang
> 
> 
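
As a sanity check of the arithmetic in `getMessageHeaderAndPayloadSize()` (`totalSize - cmdSize - 4`), here is a self-contained sketch using `java.nio.ByteBuffer` in place of Netty's `ByteBuf`. The frame layout assumed is `[totalSize:int][cmdSize:int][command][headers+payload]`, with `totalSize` counting everything after its own 4 bytes; the names below are illustrative, not the actual client code.

```java
import java.nio.ByteBuffer;

// Builds a dummy wire frame and recovers the headers+payload size from it,
// mirroring the mark/read/reset pattern of the proposed client-side check.
class FrameSizeSketch {
    static ByteBuffer frame(int cmdSize, int headersAndPayloadSize) {
        int totalSize = 4 + cmdSize + headersAndPayloadSize; // 4 = cmdSize field
        ByteBuffer buf = ByteBuffer.allocate(4 + totalSize);
        buf.putInt(totalSize);
        buf.putInt(cmdSize);
        buf.put(new byte[cmdSize + headersAndPayloadSize]); // zero-filled body
        buf.flip();
        return buf;
    }

    static int messageHeaderAndPayloadSize(ByteBuffer cmdHeader) {
        cmdHeader.mark();                 // like ByteBuf.markReaderIndex()
        int totalSize = cmdHeader.getInt();
        int cmdSize = cmdHeader.getInt();
        cmdHeader.reset();                // like ByteBuf.resetReaderIndex()
        return totalSize - cmdSize - 4;
    }
}
```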


[GitHub] [pulsar-helm-chart] lhotari commented on a change in pull request #138: automate initialize

2022-01-04 Thread GitBox


lhotari commented on a change in pull request #138:
URL: https://github.com/apache/pulsar-helm-chart/pull/138#discussion_r778603221



##
File path: charts/pulsar/templates/bookkeeper-cluster-initialize.yaml
##
@@ -16,7 +16,7 @@
 # specific language governing permissions and limitations
 # under the License.
 #
-{{- if .Values.initialize }}
+{{- if .Release.IsInstall }}

Review comment:
   I'd suggest keeping backwards compatibility. 
   ```suggestion
   {{- if or .Release.IsInstall .Values.initialize }}
   ```
   The benefit of this is that the metadata initialization job can be created 
on demand when needed. I have been using this in test environments. It's 
possible to clear the persistent state of the cluster by scaling stateful sets 
to 0, deleting PVCs and doing a deployment with `--set initialize=true`. 

##
File path: charts/pulsar/templates/pulsar-cluster-initialize.yaml
##
@@ -17,7 +17,7 @@
 # under the License.
 #
 
-{{- if .Values.initialize }}
+{{- if .Release.IsInstall }}

Review comment:
   ```suggestion
   {{- if or .Release.IsInstall .Values.initialize }}
   ```
   

##
File path: charts/pulsar/values.yaml
##
@@ -34,9 +34,6 @@ clusterDomain: cluster.local
 ### Global Settings
 ###
 
-## Set to true on install

Review comment:
   please revert this change to keep backwards compatibility.








[GitHub] [pulsar-helm-chart] lhotari commented on issue #114: Use hooks instead of initialize=true

2022-01-04 Thread GitBox


lhotari commented on issue #114:
URL: 
https://github.com/apache/pulsar-helm-chart/issues/114#issuecomment-1005452447


   There's already a PR #138 for resolving this.






[GitHub] [pulsar-helm-chart] lhotari commented on pull request #130: Bump pulsar 2.8.0

2022-01-04 Thread GitBox


lhotari commented on pull request #130:
URL: 
https://github.com/apache/pulsar-helm-chart/pull/130#issuecomment-1005454379


   superseded by #190






[GitHub] [pulsar-helm-chart] lhotari closed pull request #130: Bump pulsar 2.8.0

2022-01-04 Thread GitBox


lhotari closed pull request #130:
URL: https://github.com/apache/pulsar-helm-chart/pull/130


   






[OUTREACH] Jan '22 Edition of 'Happenings in the Neighborhood' is out now

2022-01-04 Thread Aaron Williams
Hello Apache Pulsar Neighbors,

Did you know that the ASF just released its "Apache by the digits" blog
post and Apache Pulsar is the fifth most popular project by commits?  In
this issue of Happenings, we talk about our ranking, some more
end-of-the-year stats, Log4j updates, a new PMC member, and lots of talks.
Plus our normal features of a Stack Overflow question and some monthly
community stats.

If you have anything that you think your neighbors would find interesting,
we have created #blogs-articles and #event-decks channels on the Apache
Pulsar slack Workspace to capture them.

Thank you,
Aaron Williams
Resident of the Apache Pulsar Neighborhood


[GitHub] [pulsar-helm-chart] lhotari commented on a change in pull request #130: Bump pulsar 2.8.0

2022-01-04 Thread GitBox


lhotari commented on a change in pull request #130:
URL: https://github.com/apache/pulsar-helm-chart/pull/130#discussion_r778610093



##
File path: charts/pulsar/values.yaml
##
@@ -278,7 +278,7 @@ zookeeper:
   replicaCount: 3
   updateStrategy:
 type: RollingUpdate
-  podManagementPolicy: OrderedReady
+  podManagementPolicy: Parallel

Review comment:
   @codelipenghui Do we need this change for #190 ? Can you remember the 
reason for this change? /cc @michaeljmarshall @315157973 



