[GitHub] [pulsar-client-node] Matt-Esch commented on issue #191: Segfault with pulsar 2.9.1 and node 16.13.2

2022-02-24 Thread GitBox


Matt-Esch commented on issue #191:
URL: 
https://github.com/apache/pulsar-client-node/issues/191#issuecomment-1049731870


   It's quite trivial to reproduce if you queue up some operations on a 
producer and close the client immediately afterwards. 






Re: [VOTE] Pulsar Release 2.10.0 Candidate 1

2022-02-24 Thread PengHui Li
Hi all,

I have cherry-picked the fixes to branch-2.10 and will roll out the new RC
tomorrow.
If you find any potential breaking changes, please share the information
there.

Best,
Penghui

On Wed, Feb 23, 2022 at 1:56 PM PengHui Li  wrote:

> Thanks Dave,
>
> Yes, this is also what I want to ask, I have checked all the current
> opened PRs and the current merged PRs
> Looks no other related breaking change fixes. I think we'd better keep the
> VOTE open for 2 days?
>
> Thanks,
> Penghui
>
> On Wed, Feb 23, 2022 at 1:03 PM Dave Fisher  wrote:
>
>> Hi -
>>
>> Has anyone else found another breaking issue?
>>
>> Just asking to save Peng Hui time if there is another fix for RC2,
>>
>> All the best,
>> Dave
>>
>> Sent from my iPhone
>>
>> > On Feb 22, 2022, at 8:31 PM, PengHui Li  wrote:
>> >
>> > Hi all,
>> >
>> > This PR https://github.com/apache/pulsar/pull/14410 fixes a breaking
>> change
>> > in 2.10.0,
>> > without this fix, if users enabled the debug level log and only upgrade
>> the
>> > broker but not the clients,
>> > we will get exception.
>> >
>> > I will cherry-pick this one to branch-2.10 and rollout a new RC for
>> 2.10.0.
>> >
>> > And https://github.com/apache/pulsar/pull/14409 also fixes a breaking
>> > change which discussed
>> > under the 2.9.3 release thread, it also affects the 2.10.0.
>> >
>> > Thanks,
>> > Penghui
>> >
>> >> On Tue, Feb 22, 2022 at 5:14 PM Andras Beni
>> >>  wrote:
>> >>
>> >> +1 (non binding)
>> >>
>> >> Checks done:
>> >>
>> >> - Validated checksums and signatures
>> >>
>> >> - Compiled from source w/ JDK11
>> >>
>> >> - Ran Pulsar standalone and produced-consumed from CLI and validated
>> Java
>> >> functions
>> >>
>> >> Andras
>> >>
>> >>> On Fri, Feb 18, 2022 at 4:08 PM PengHui Li 
>> wrote:
>> >>>
>> >>> This is the first release candidate for Apache Pulsar, version 2.10.0.
>> >>>
>> >>> It fixes the following issues:
>> >>>
>> >>>
>> >>
>> https://github.com/apache/pulsar/pulls?q=is%3Apr+milestone%3A2.10.0+is%3Amerged+-label%3Arelease%2F2.9.1+-label%3Arelease%2F2.9.2
>> >>>
>> >>> *** Please download, test and vote on this release. This vote will
>> stay
>> >>> open
>> >>> for at least 72 hours ***
>> >>>
>> >>> Note that we are voting upon the source (tag), binaries are provided
>> for
>> >>> convenience.
>> >>>
>> >>> Source and binary files:
>> >>>
>> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.0-candidate-1/
>> >>>
>> >>> SHA-512 checksums:
>> >>>
>> >>>
>> >>>
>> >>
>> 3bd99e0d48c0e7df247b558aa0e0761f4652264948944b4cacbbacf6ee4052a58a73c0869b660359972f91979fd1bef46b0a980f02a8b8cfb193feb076217606
>> >>> apache-pulsar-2.10.0-bin.tar.gz
>> >>>
>> >>>
>> >>
>> cf3a1c5aa25bc0c8264e8dc4ef53106503c7448aafa9a779e143b07a9426cf58579289930f750748f85910312194b6dc5bd65cded5b0df7b40ae0aca174023ef
>> >>> apache-pulsar-2.10.0-src.tar.gz
>> >>>
>> >>> Maven staging repo:
>> >>>
>> https://repository.apache.org/content/repositories/orgapachepulsar-1142/
>> >>>
>> >>> The tag to be voted upon:
>> >>> v2.10.0-candidate-1 (c58e6e8b33a487b6d9dd10410b7620d33ecb994f)
>> >>> https://github.com/apache/pulsar/releases/tag/v2.10.0-candidate-1
>> >>>
>> >>> Pulsar's KEYS file containing PGP keys we use to sign the release:
>> >>> https://dist.apache.org/repos/dist/dev/pulsar/KEYS
>> >>>
>> >>> Docker images:
>> >>>
>> >>> pulsar:2.10.0 [1]
>> >>> pulsar-all:2.10.0 [2]
>> >>>
>> >>> Please download the source package, and follow the Release Candidate
>> >>> Validation[3]
>> >>> to validate the release
>> >>>
>> >>> [1]
>> >>>
>> >>>
>> >>
>> https://hub.docker.com/layers/193225180/lph890127/pulsar/2.10.0/images/sha256-c264084cd34e3952ec4c1f4177c5122251ef39a3af60b7a229851972675706d8?context=repo
>> >>> [2]
>> >>>
>> >>>
>> >>
>> https://hub.docker.com/layers/193227776/lph890127/pulsar-all/2.10.0/images/sha256-7e2297dfabbd1d433198c3bc906c09802394da0d41393f2069e515d7131e4be2?context=repo
>> >>> [3]
>> https://github.com/apache/pulsar/wiki/Release-Candidate-Validation
>> >>>
>> >>
>>
>>


Re: [DISCUSS] PIP-139 : Support Broker send command to real close producer/consumer.

2022-02-24 Thread PengHui Li
> If we want to solve this problem, we need to add some sync resources like
lock/state, I think it’s a harm for us, we don’t need to do that.

I think we can move the namespace/tenant to an inactive state first, so
that we can prevent any new producer/consumer from connecting to topics
under the namespace/tenant.

The old producer/consumer should be closed after applying the changes from
this proposal.

Thanks,
Penghui

On Tue, Feb 8, 2022 at 5:47 PM mattison chao  wrote:

> > This is supposed to mean that the namespace should be able to be
> > deleted, correct?
>
> Yes, the main background is the user doesn’t have an active topic. so,
> they want to delete the namespace.
>
> > However, I think
> > we might still have a race condition that could make tenant or
> > namespace deletion fail. Specifically, if a new producer or consumer
> > creates a topic after the namespace deletion has started but
> > before it is complete. Do you agree that the underlying race still
> exists?
>
> Yes, this condition exists. I think it’s not a big problem because the
> user doesn’t want to use this namespace anymore.
> If this scenario appears, they will get an error and need to delete it
> again.
>
> > What if we expand our usage of the "terminated" feature to apply to
> > namespaces (and tenants)? Then, a terminated namespace can have
> > bundles and topics can be deleted but not created (just as a terminated
> > topic cannot have any new messages published to it). This would take
> > care of all topic creation race conditions. We'd probably need to add
> > new protobuf exceptions for this feature.
>
>
> If we want to solve this problem, we need to add some sync resources like
> lock/state, I think it’s a harm for us, we don’t need to do that.
>
> Thanks for your suggestions, let me know what you think.
>
> Best,
> Mattison
>
> > On Feb 1, 2022, at 2:26 PM, Michael Marshall 
> wrote:
> >
> > This proposal identifies an important issue that we should definitely
> > solve. I have some questions.
> >
> >> When there are no user-created topics under a namespace,
> >> Namespace should be deleted.
> >
> > This is supposed to mean that the namespace should be able to be
> > deleted, correct?
> >
> >> For this reason, we need to close the system topic reader/producer
> >> first, then remove the system topic. finally, remove the namespace.
> >
> > I agree that expanding the protobuf CloseProducer and CloseConsumer
> > commands could be valuable here. The expansion would ensure that
> > producers and consumers don't attempt to reconnect. However, I think
> > we might still have a race condition that could make tenant or
> > namespace deletion fail. Specifically, if a new producer or consumer
> > creates a topic after the namespace deletion has started but
> > before it is complete. Do you agree that the underlying race still
> exists?
> >
> > In my view, the fundamental problem here is that deleting certain Pulsar
> > resources takes time and, in a distributed system, that means race
> > conditions.
> >
> > What if we expand our usage of the "terminated" feature to apply to
> > namespaces (and tenants)? Then, a terminated namespace can have
> > bundles and topics can be deleted but not created (just as a terminated
> > topic cannot have any new messages published to it). This would take
> > care of all topic creation race conditions. We'd probably need to add
> > new protobuf exceptions for this feature.
> >
> > Thanks,
> > Michael
> >
> >
> > On Sat, Jan 29, 2022 at 7:25 PM Zike Yang
> >  wrote:
> >>
> >> +1
> >>
> >>
> >> Thanks,
> >> Zike
> >>
> >> On Sat, Jan 29, 2022 at 12:30 PM guo jiwei 
> wrote:
> >>>
> >>> Hi
> >>> The PIP link : https://github.com/apache/pulsar/issues/13989
> >>>
> >>> Regards
> >>> Jiwei Guo (Tboy)
> >>>
> >>>
> >>> On Sat, Jan 29, 2022 at 11:46 AM mattison chao  >
> >>> wrote:
> >>>
>  Hello everyone,
> 
>  I want to start a discussion about PIP-139 : Support Broker send
> command
>  to real close producer/consumer.
> 
>  This is the PIP document
> 
>  https://github.com/apache/pulsar/issues/13989 <
>  https://github.com/apache/pulsar/issues/13979>
> 
>  Please check it out and feel free to share your thoughts.
> 
>  Best,
>  Mattison
> 
> 
>   Pasted below for quoting convenience.
> 
> 
> 
>  Relation pull request:  #13337
>  Authors: @Technoboy-  @mattisonchao
> 
>  ## Motivation
> 
>  Before we discuss this pip, I'd like to supplement some context to
> help
>  contributors who don't want to read the original pull request.
> 
> > When there are no user-created topics under a namespace, Namespace
>  should be deleted. But currently, the system topic existed and the
>  reader/producer could auto-create the system which may cause the
> namespace
>  deletion to fail.
> 
>  For this reason, we need to close the system topic reader/producer
> first,
>  then remove the 

[GitHub] [pulsar-manager] hsluoyz commented on pull request #446: Add support for casdoor

2022-02-24 Thread GitBox


hsluoyz commented on pull request #446:
URL: https://github.com/apache/pulsar-manager/pull/446#issuecomment-1049904925


   @tuteng please review






Re: [DISCUSS] PIP-139 : Support Broker send command to real close producer/consumer.

2022-02-24 Thread Michael Marshall
> The old producer/consumer should be closed after applying the changes from
> this proposal.

Penghui, are you suggesting that we implement the namespace/tenant
terminated logic after completing this PIP?

For the sake of discussion, if we implement the namespace terminated
logic first, we could fulfill the underlying requirements for this PIP
by returning a new non-retriable error response when a client tries to
connect a producer or a consumer to a topic in a namespace that is
"terminated". If we do add the "namespace terminated" feature, we'll
need to add a non-retriable exception for this case, anyway. The main
advantage here is that we'd only need one expansion of the protobuf
instead of two. The downside is that the protocol for connected
clients has a couple more roundtrips. The broker would disconnect
connected clients and then fail their reconnection attempt with a
non-retriable error.

Thanks,
Michael

On Thu, Feb 24, 2022 at 7:11 AM PengHui Li  wrote:
>
> > If we want to solve this problem, we need to add some sync resources like
> lock/state, I think it’s a harm for us, we don’t need to do that.
>
> I think we can make the namespace/tenants to the inactive state first so
> that we can avoid any new
> producer/consumer connect to the topic under the namespace/tenant.
>
> The old producer/consumer should be closed after applying the changes from
> this proposal.
>
> Thanks,
> Penghui
>
> On Tue, Feb 8, 2022 at 5:47 PM mattison chao  wrote:
>
> > > This is supposed to mean that the namespace should be able to be
> > > deleted, correct?
> >
> > Yes, the main background is the user doesn’t have an active topic. so,
> > they want to delete the namespace.
> >
> > > However, I think
> > > we might still have a race condition that could make tenant or
> > > namespace deletion fail. Specifically, if a new producer or consumer
> > > creates a topic after the namespace deletion has started but
> > > before it is complete. Do you agree that the underlying race still
> > exists?
> >
> > Yes, this condition exists. I think it’s not a big problem because the
> > user doesn’t want to use this namespace anymore.
> > If this scenario appears, they will get an error and need to delete it
> > again.
> >
> > > What if we expand our usage of the "terminated" feature to apply to
> > > namespaces (and tenants)? Then, a terminated namespace can have
> > > bundles and topics can be deleted but not created (just as a terminated
> > > topic cannot have any new messages published to it). This would take
> > > care of all topic creation race conditions. We'd probably need to add
> > > new protobuf exceptions for this feature.
> >
> >
> > If we want to solve this problem, we need to add some sync resources like
> > lock/state, I think it’s a harm for us, we don’t need to do that.
> >
> > Thanks for your suggestions, let me know what you think.
> >
> > Best,
> > Mattison
> >
> > > On Feb 1, 2022, at 2:26 PM, Michael Marshall 
> > wrote:
> > >
> > > This proposal identifies an important issue that we should definitely
> > > solve. I have some questions.
> > >
> > >> When there are no user-created topics under a namespace,
> > >> Namespace should be deleted.
> > >
> > > This is supposed to mean that the namespace should be able to be
> > > deleted, correct?
> > >
> > >> For this reason, we need to close the system topic reader/producer
> > >> first, then remove the system topic. finally, remove the namespace.
> > >
> > > I agree that expanding the protobuf CloseProducer and CloseConsumer
> > > commands could be valuable here. The expansion would ensure that
> > > producers and consumers don't attempt to reconnect. However, I think
> > > we might still have a race condition that could make tenant or
> > > namespace deletion fail. Specifically, if a new producer or consumer
> > > creates a topic after the namespace deletion has started but
> > > before it is complete. Do you agree that the underlying race still
> > exists?
> > >
> > > In my view, the fundamental problem here is that deleting certain Pulsar
> > > resources takes time and, in a distributed system, that means race
> > > conditions.
> > >
> > > What if we expand our usage of the "terminated" feature to apply to
> > > namespaces (and tenants)? Then, a terminated namespace can have
> > > bundles and topics can be deleted but not created (just as a terminated
> > > topic cannot have any new messages published to it). This would take
> > > care of all topic creation race conditions. We'd probably need to add
> > > new protobuf exceptions for this feature.
> > >
> > > Thanks,
> > > Michael
> > >
> > >
> > > On Sat, Jan 29, 2022 at 7:25 PM Zike Yang
> > >  wrote:
> > >>
> > >> +1
> > >>
> > >>
> > >> Thanks,
> > >> Zike
> > >>
> > >> On Sat, Jan 29, 2022 at 12:30 PM guo jiwei 
> > wrote:
> > >>>
> > >>> Hi
> > >>> The PIP link : https://github.com/apache/pulsar/issues/13989
> > >>>
> > >>> Regards
> > >>> Jiwei Guo (Tboy)
> > >>>
> > >>>
> > >>> On Sat, Jan 29, 2022 at 11:

Re: [DISCUSS] PIP-139 : Support Broker send command to real close producer/consumer.

2022-02-24 Thread Dave Fisher
Hi -

I hope I’m understanding what’s being discussed.

If we are going to automatically delete tenants and namespaces for not 
containing topics then we need to make both of these automatic actions 
configurable with a default to NOT do so. Otherwise we break existing use cases.

Automatic deletion of namespaces should be configurable at both the cluster and 
tenant level.

Regards,
Dave

> On Feb 24, 2022, at 2:25 PM, Michael Marshall  wrote:
> 
>> The old producer/consumer should be closed after applying the changes from
>> this proposal.
> 
> Penghui, are you suggesting that we implement the namespace/tenant
> terminated logic after completing this PIP?
> 
> For the sake of discussion, if we implement the namespace terminated
> logic first, we could fulfill the underlying requirements for this PIP
> by returning a new non-retriable error response when a client tries to
> connect a producer or a consumer to a topic in a namespace that is
> "terminated". If we do add the "namespace terminated" feature, we'll
> need to add a non-retriable exception for this case, anyway. The main
> advantage here is that we'd only need one expansion of the protobuf
> instead of two. The downside is that the protocol for connected
> clients has a couple more roundtrips. The broker would disconnect
> connected clients and then fail their reconnection attempt with a
> non-retriable error.
> 
> Thanks,
> Michael
> 
> On Thu, Feb 24, 2022 at 7:11 AM PengHui Li  wrote:
>> 
>>> If we want to solve this problem, we need to add some sync resources like
>> lock/state, I think it’s a harm for us, we don’t need to do that.
>> 
>> I think we can make the namespace/tenants to the inactive state first so
>> that we can avoid any new
>> producer/consumer connect to the topic under the namespace/tenant.
>> 
>> The old producer/consumer should be closed after applying the changes from
>> this proposal.
>> 
>> Thanks,
>> Penghui
>> 
>> On Tue, Feb 8, 2022 at 5:47 PM mattison chao  wrote:
>> 
 This is supposed to mean that the namespace should be able to be
 deleted, correct?
>>> 
>>> Yes, the main background is the user doesn’t have an active topic. so,
>>> they want to delete the namespace.
>>> 
 However, I think
 we might still have a race condition that could make tenant or
 namespace deletion fail. Specifically, if a new producer or consumer
 creates a topic after the namespace deletion has started but
 before it is complete. Do you agree that the underlying race still
>>> exists?
>>> 
>>> Yes, this condition exists. I think it’s not a big problem because the
>>> user doesn’t want to use this namespace anymore.
>>> If this scenario appears, they will get an error and need to delete it
>>> again.
>>> 
 What if we expand our usage of the "terminated" feature to apply to
 namespaces (and tenants)? Then, a terminated namespace can have
 bundles and topics can be deleted but not created (just as a terminated
 topic cannot have any new messages published to it). This would take
 care of all topic creation race conditions. We'd probably need to add
 new protobuf exceptions for this feature.
>>> 
>>> 
>>> If we want to solve this problem, we need to add some sync resources like
>>> lock/state, I think it’s a harm for us, we don’t need to do that.
>>> 
>>> Thanks for your suggestions, let me know what you think.
>>> 
>>> Best,
>>> Mattison
>>> 
 On Feb 1, 2022, at 2:26 PM, Michael Marshall 
>>> wrote:
 
 This proposal identifies an important issue that we should definitely
 solve. I have some questions.
 
> When there are no user-created topics under a namespace,
> Namespace should be deleted.
 
 This is supposed to mean that the namespace should be able to be
 deleted, correct?
 
> For this reason, we need to close the system topic reader/producer
> first, then remove the system topic. finally, remove the namespace.
 
 I agree that expanding the protobuf CloseProducer and CloseConsumer
 commands could be valuable here. The expansion would ensure that
 producers and consumers don't attempt to reconnect. However, I think
 we might still have a race condition that could make tenant or
 namespace deletion fail. Specifically, if a new producer or consumer
 creates a topic after the namespace deletion has started but
 before it is complete. Do you agree that the underlying race still
>>> exists?
 
 In my view, the fundamental problem here is that deleting certain Pulsar
 resources takes time and, in a distributed system, that means race
 conditions.
 
 What if we expand our usage of the "terminated" feature to apply to
 namespaces (and tenants)? Then, a terminated namespace can have
 bundles and topics can be deleted but not created (just as a terminated
 topic cannot have any new messages published to it). This would take
 care of all topic creation race conditions. We'd proba

[GitHub] [pulsar-site] urfreespace commented on a change in pull request #6: config and style updates

2022-02-24 Thread GitBox


urfreespace commented on a change in pull request #6:
URL: https://github.com/apache/pulsar-site/pull/6#discussion_r814412578



##
File path: site2/website-next/src/pages/index.js
##
@@ -129,7 +130,13 @@ export default function Home() {
   ];
   useEffect((d) => {
 startWaves();
-
+var winW = window.outerWidth;

Review comment:
   We can't use `window` directly because the build is based on SSR; 
you can test this with `npm run build`. Maybe 
`https://docusaurus.io/docs/docusaurus-core#useIsBrowser` can help @Paul-TT 








Re: [VOTE] Pulsar Node.js Client Release 1.6.0 Candidate 1

2022-02-24 Thread Hiroyuki Sakai
Hi, Guangning

I can't unpack pulsar-client-node-1.6.0.tar.gz.
Would you please check this file?


$ wget 
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-client-node/pulsar-client-node-1.6.0-candidate-1/pulsar-client-node-1.6.0.tar.gz

$ tar xvzf ./pulsar-client-node-1.6.0.tar.gz
tar: Error opening archive: Unrecognized archive format

Regards,
Hiroyuki


From: Guangning E 
Sent: Tuesday, February 22, 2022 22:52
To: Dev 
Subject: [VOTE] Pulsar Node.js Client Release 1.6.0 Candidate 1

Hi everyone,
Please review and vote on the release candidate #1 for the version 1.6.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

This is the first release candidate for Apache Pulsar Node.js client,
version 1.6.0.

It fixes the following issues:
https://github.com/apache/pulsar-client-node/milestone/1?closed=1

Please download the source files and review this release candidate:
- Review release notes
- Download the source package (verify shasum and asc) and follow the
README.md to build and run the Pulsar Node.js client.

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Source files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-client-node/pulsar-client-node-1.6.0-candidate-1/

Pulsar's KEYS file containing PGP keys we use to sign the release:
https://dist.apache.org/repos/dist/dev/pulsar/KEYS

SHA-512 checksum:
64662be31053f76260a6f677bce87a5448f7377f1dae17bfbf69a2947084844ab8e78aee8716ed8dc70a586c1ca160f1d57a5b7e1637a538d2b318bfd62622be
 pulsar-client-node-1.6.0.tar.gz

The tag to be voted upon:
v1.6.0-rc.1
https://github.com/apache/pulsar-client-node/releases/tag/v1.6.0-rc.1


[GitHub] [pulsar-site] urfreespace commented on a change in pull request #6: config and style updates

2022-02-24 Thread GitBox


urfreespace commented on a change in pull request #6:
URL: https://github.com/apache/pulsar-site/pull/6#discussion_r814436861



##
File path: site2/website-next/src/pages/index.js
##
@@ -1,4 +1,5 @@
-import React, { useEffect, componentDidMount } from "react";
+import React, { useEffect } from "react";
+import SineWaves from "sine-waves";

Review comment:
   @Paul-TT It looks like you haven't merged the latest changes into your 
`PR`; please merge the `main` branch into your branch and then submit your `PR`








[VOTE] Pulsar Release 2.8.3 Candidate 3

2022-02-24 Thread Michael Marshall
This is the third release candidate for Apache Pulsar, version 2.8.3.

It fixes the following issues:
https://github.com/apache/pulsar/compare/v2.8.2...v2.8.3-candidate-3

*** Please download, test and vote on this release. This vote will stay open
for at least 72 hours ***

Note that we are voting upon the source (tag), binaries are provided for
convenience.

Source and binary files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.8.3-candidate-3/

There are many checksums and signatures to validate, including
apache-pulsar-2.8.3-bin.tar.gz, apache-pulsar-2.8.3-src.tar.gz,
apache-pulsar-offloaders-2.8.3-bin.tar.gz, and all of the connectors.
All are located here:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.8.3-candidate-3/.

Unofficial Docker images:
michaelmarshall/pulsar:2.8.3-rc3
michaelmarshall/pulsar-all:2.8.3-rc3
michaelmarshall/pulsar-standalone:2.8.3-rc3
michaelmarshall/pulsar-grafana:2.8.3-rc3

Maven staging repo:
https://repository.apache.org/content/repositories/orgapachepulsar-1143/

The tag to be voted upon:
v2.8.3-candidate-3 (eba2671080341728f80435a82d2966726168e9da)
https://github.com/apache/pulsar/releases/tag/v2.8.3-candidate-3

Pulsar's KEYS file containing PGP keys we use to sign the release:
https://dist.apache.org/repos/dist/dev/pulsar/KEYS

Please download the source package, and follow the README to build
and run the Pulsar standalone service.


Re: [VOTE] Pulsar Release 2.8.3 Candidate 3

2022-02-24 Thread Jiuming Tao
+1


> 2022年2月25日 上午11:44,Michael Marshall  写道:
> 
> This is the third release candidate for Apache Pulsar, version 2.8.3.
> 
> It fixes the following issues:
> https://github.com/apache/pulsar/compare/v2.8.2...v2.8.3-candidate-3
> 
> *** Please download, test and vote on this release. This vote will stay open
> for at least 72 hours ***
> 
> Note that we are voting upon the source (tag), binaries are provided for
> convenience.
> 
> Source and binary files:
> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.8.3-candidate-3/
> 
> There are many checksums and signatures to validate, including
> apache-pulsar-2.8.3-bin.tar.gz, apache-pulsar-2.8.3-src.tar.gz,
> apache-pulsar-offloaders-2.8.3-bin.tar.gz, and all of the connectors.
> All are located here:
> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.8.3-candidate-3/.
> 
> Unofficial Docker images:
> michaelmarshall/pulsar:2.8.3-rc3
> michaelmarshall/pulsar-all:2.8.3-rc3
> michaelmarshall/pulsar-standalone:2.8.3-rc3
> michaelmarshall/pulsar-grafana:2.8.3-rc3
> 
> Maven staging repo:
> https://repository.apache.org/content/repositories/orgapachepulsar-1143/
> 
> The tag to be voted upon:
> v2.8.3-candidate-3 (eba2671080341728f80435a82d2966726168e9da)
> https://github.com/apache/pulsar/releases/tag/v2.8.3-candidate-3
> 
> Pulsar's KEYS file containing PGP keys we use to sign the release:
> https://dist.apache.org/repos/dist/dev/pulsar/KEYS
> 
> Please download the source package, and follow the README to build
> and run the Pulsar standalone service.



[GitHub] [pulsar-manager] JackrayWang commented on issue #447: Deploy from bin package error

2022-02-24 Thread GitBox


JackrayWang commented on issue #447:
URL: https://github.com/apache/pulsar-manager/issues/447#issuecomment-1050495540


   I can run it and use it in my env. But use this shell:
   `cd pulsar-manager
   cp -r ../dist ui
   ./bin/pulsar-manager --redirect.host=http://localhost --redirect.port=9527 
insert.stats.interval=60 --backend.jwt.token=token 
--jwt.broker.token.mode=PRIVATE 
--jwt.broker.private.key=file:///path/broker-private.key 
--jwt.broker.public.key=file:///path/broker-public.key`
   
   [Official website 
link](https://pulsar.apache.org/docs/en/administration-pulsar-manager/)






Re: [DISCUSS] PIP-139 : Support Broker send command to real close producer/consumer.

2022-02-24 Thread Michael Marshall
Hi Dave,

> automatically delete tenants and namespaces for not containing topics

I don't think that is what we are discussing. I agree that the initial
email says just that, though, which is why I asked above:

>> When there are no user-created topics under a namespace,
>> Namespace should be deleted.
> This is supposed to mean that the namespace should be able to be
> deleted, correct?

Perhaps Mattison can clarify, too.

My current understanding of the context for the PIP is that a user
call to delete a namespace without force can fail when a producer
reconnects to a deleted topic. The goal is to remove the race
condition to ensure namespace deletion can succeed.

Thanks,
Michael

On Thu, Feb 24, 2022 at 5:21 PM Dave Fisher  wrote:
>
> Hi -
>
> I hope I’m understanding what’s being discussed.
>
> If we are going to automatically delete tenants and namespaces for not 
> containing topics then we need to make both of these automatic actions 
> configurable with a default to NOT do so. Otherwise we break existing use 
> cases.
>
> Automatic deletion of namespaces should be configurable at both the cluster 
> and tenant level.
>
> Regards,
> Dave
>
> > On Feb 24, 2022, at 2:25 PM, Michael Marshall  wrote:
> >
> >> The old producer/consumer should be closed after applying the changes from
> >> this proposal.
> >
> > Penghui, are you suggesting that we implement the namespace/tenant
> > terminated logic after completing this PIP?
> >
> > For the sake of discussion, if we implement the namespace terminated
> > logic first, we could fulfill the underlying requirements for this PIP
> > by returning a new non-retriable error response when a client tries to
> > connect a producer or a consumer to a topic in a namespace that is
> > "terminated". If we do add the "namespace terminated" feature, we'll
> > need to add a non-retriable exception for this case, anyway. The main
> > advantage here is that we'd only need one expansion of the protobuf
> > instead of two. The downside is that the protocol for connected
> > clients has a couple more roundtrips. The broker would disconnect
> > connected clients and then fail their reconnection attempt with a
> > non-retriable error.
> >
> > Thanks,
> > Michael
> >
> > On Thu, Feb 24, 2022 at 7:11 AM PengHui Li  wrote:
> >>
> >>> If we want to solve this problem, we need to add some sync resources like
> >> lock/state, I think it’s a harm for us, we don’t need to do that.
> >>
> >> I think we can make the namespace/tenants to the inactive state first so
> >> that we can avoid any new
> >> producer/consumer connect to the topic under the namespace/tenant.
> >>
> >> The old producer/consumer should be closed after applying the changes from
> >> this proposal.
> >>
> >> Thanks,
> >> Penghui
> >>
> >> On Tue, Feb 8, 2022 at 5:47 PM mattison chao  
> >> wrote:
> >>
>  This is supposed to mean that the namespace should be able to be
>  deleted, correct?
> >>>
> >>> Yes, the main background is the user doesn’t have an active topic. so,
> >>> they want to delete the namespace.
> >>>
>  However, I think
>  we might still have a race condition that could make tenant or
>  namespace deletion fail. Specifically, if a new producer or consumer
>  creates a topic after the namespace deletion has started but
>  before it is complete. Do you agree that the underlying race still
> >>> exists?
> >>>
> >>> Yes, this condition exists. I think it’s not a big problem because the
> >>> user doesn’t want to use this namespace anymore.
> >>> If this scenario appears, they will get an error and need to delete it
> >>> again.
> >>>
>  What if we expand our usage of the "terminated" feature to apply to
>  namespaces (and tenants)? Then, a terminated namespace can have
>  bundles and topics can be deleted but not created (just as a terminated
>  topic cannot have any new messages published to it). This would take
>  care of all topic creation race conditions. We'd probably need to add
>  new protobuf exceptions for this feature.
> >>>
> >>>
> >>> If we want to solve this problem, we need to add some sync resources like
> >>> lock/state, I think it’s a harm for us, we don’t need to do that.
> >>>
> >>> Thanks for your suggestions, let me know what you think.
> >>>
> >>> Best,
> >>> Mattison
> >>>
>  On Feb 1, 2022, at 2:26 PM, Michael Marshall 
> >>> wrote:
> 
>  This proposal identifies an important issue that we should definitely
>  solve. I have some questions.
> 
> > When there are no user-created topics under a namespace,
> > Namespace should be deleted.
> 
>  This is supposed to mean that the namespace should be able to be
>  deleted, correct?
> 
> > For this reason, we need to close the system topic reader/producer
> > first, then remove the system topic. finally, remove the namespace.
> 
>  I agree that expanding the protobuf CloseProducer and CloseConsumer
>  commands could be valuabl

Re: [DISCUSS] PIP-139 : Support Broker send command to real close producer/consumer.

2022-02-24 Thread Michael Marshall
Regarding the namespace "terminated" concept, I just noticed that we
already have a "deleted" field in a namespace's policies [0]. There is
even a comment that says:

> // set the policies to deleted so that somebody else cannot acquire this 
> namespace

I am not familiar with this feature, but it seems like this policy
field could be checked before creating a topic in a namespace. That
would remove certain races described above.
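
For illustration, a minimal sketch of that kind of guard, assuming the
existing `deleted` flag on the namespace policies; the surrounding class and
method names below are hypothetical, not the actual broker code path:

```
import java.util.Optional;

// Hypothetical stand-ins for the broker's metadata access; only the `deleted`
// flag itself corresponds to the existing Policies field referenced above.
class Policies {
    boolean deleted;
}

class NamespaceGuardSketch {
    Optional<Policies> getPolicies(String namespace) {
        // placeholder: would read the namespace policies from the metadata store
        return Optional.empty();
    }

    // Refuse to create a topic once the namespace policies are marked deleted,
    // closing the create-topic vs. delete-namespace race described above.
    void checkNamespaceNotDeleted(String namespace, String topic) {
        Policies policies = getPolicies(namespace)
                .orElseThrow(() -> new IllegalStateException("Namespace not found: " + namespace));
        if (policies.deleted) {
            throw new IllegalStateException("Namespace " + namespace
                    + " is being deleted; refusing to create topic " + topic);
        }
    }
}
```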

As I think about it more, I no longer think "terminated" is the right
term for what I proposed above. Our goal is to briefly prevent any
topic creation to ensure we can delete all sub resources for a
namespace. On the other hand, a terminated topic isn't necessarily
short lived. If we want to apply the "terminated" term unequivocally
to both topics and namespaces, I think a terminated namespace would
need to be a namespace where all topics are in terminated state and no
additional topics could be created. That's not the feature we're
discussing here, though. Deleted seems like the right term to me,
especially since we're already using it to prevent a race condition
[0].

Thanks,
Michael

[0] 
https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/NamespacesBase.java#L252-L262

On Thu, Feb 24, 2022 at 9:58 PM Michael Marshall  wrote:
>
> Hi Dave,
>
> > automatically delete tenants and namespaces for not containing topics
>
> I don't think that is what we are discussing. I agree that the initial
> email says just that, though, which is why I asked above:
>
> >> When there are no user-created topics under a namespace,
> >> Namespace should be deleted.
> > This is supposed to mean that the namespace should be able to be
> > deleted, correct?
>
> Perhaps Mattison can clarify, too.
>
> My current understanding of the context for the PIP is that a user
> call to delete a namespace without force can fail when a producer
> reconnects to a deleted topic. The goal is to remove the race
> condition to ensure namespace deletion can succeed.
>
> Thanks,
> Michael
>
> On Thu, Feb 24, 2022 at 5:21 PM Dave Fisher  wrote:
> >
> > Hi -
> >
> > I hope I’m understanding what’s being discussed.
> >
> > If we are going to automatically delete tenants and namespaces for not 
> > containing topics then we need to make both of these automatic actions 
> > configurable with a default to NOT do so. Otherwise we break existing use 
> > cases.
> >
> > Automatic deletion of namespaces should be configurable at both the cluster 
> > and tenant level.
> >
> > Regards,
> > Dave
> >
> > > On Feb 24, 2022, at 2:25 PM, Michael Marshall  
> > > wrote:
> > >
> > >> The old producer/consumer should be closed after applying the changes 
> > >> from
> > >> this proposal.
> > >
> > > Penghui, are you suggesting that we implement the namespace/tenant
> > > terminated logic after completing this PIP?
> > >
> > > For the sake of discussion, if we implement the namespace terminated
> > > logic first, we could fulfill the underlying requirements for this PIP
> > > by returning a new non-retriable error response when a client tries to
> > > connect a producer or a consumer to a topic in a namespace that is
> > > "terminated". If we do add the "namespace terminated" feature, we'll
> > > need to add a non-retriable exception for this case, anyway. The main
> > > advantage here is that we'd only need one expansion of the protobuf
> > > instead of two. The downside is that the protocol for connected
> > > clients has a couple more roundtrips. The broker would disconnect
> > > connected clients and then fail their reconnection attempt with a
> > > non-retriable error.
> > >
> > > Thanks,
> > > Michael
> > >
> > > On Thu, Feb 24, 2022 at 7:11 AM PengHui Li  wrote:
> > >>
> > >>> If we want to solve this problem, we need to add some sync resources 
> > >>> like
> > >> lock/state, I think it’s a harm for us, we don’t need to do that.
> > >>
> > >> I think we can make the namespace/tenants to the inactive state first so
> > >> that we can avoid any new
> > >> producer/consumer connect to the topic under the namespace/tenant.
> > >>
> > >> The old producer/consumer should be closed after applying the changes 
> > >> from
> > >> this proposal.
> > >>
> > >> Thanks,
> > >> Penghui
> > >>
> > >> On Tue, Feb 8, 2022 at 5:47 PM mattison chao  
> > >> wrote:
> > >>
> >  This is supposed to mean that the namespace should be able to be
> >  deleted, correct?
> > >>>
> > >>> Yes, the main background is the user doesn’t have an active topic. so,
> > >>> they want to delete the namespace.
> > >>>
> >  However, I think
> >  we might still have a race condition that could make tenant or
> >  namespace deletion fail. Specifically, if a new producer or consumer
> >  creates a topic after the namespace deletion has started but
> >  before it is complete. Do you agree that the underlying race still
> > >>> exists?
> > >>>
> > >>> Yes, this condition exists. I think it’s no

Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-24 Thread Michael Marshall
I have a historical question. Why do we write and maintain our own
code to generate the metrics response instead of using the prometheus
client library?

> I have learned that the /metrics endpoint will be requested by more than
> one metrics collect system.

In practice, when does this happen?

> PrometheusMetricsGenerator#generate will be invoked once in a period(such
> as 1 minute), the result will be cached and returned for every metrics
> collect request in the period directly.

Since there are tradeoffs to the cache duration, we should make the
period configurable.
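
A minimal sketch of that caching behaviour, with the period configurable; the
class below and the way it would be wired in are assumptions, not the existing
PrometheusMetricsGenerator API:

```
import java.util.function.Supplier;

// Regenerate the metrics payload at most once per configurable period and
// serve the cached copy to every other scrape inside that window.
class CachedMetricsProvider {
    private final Supplier<String> generator;  // e.g. a call into the metrics generator
    private final long periodNanos;            // configurable cache duration
    private String cached;
    private long generatedAt = Long.MIN_VALUE;

    CachedMetricsProvider(Supplier<String> generator, long periodNanos) {
        this.generator = generator;
        this.periodNanos = periodNanos;
    }

    synchronized String get() {
        long now = System.nanoTime();
        if (cached == null || now - generatedAt >= periodNanos) {
            cached = generator.get();          // one expensive generation per period
            generatedAt = now;
        }
        return cached;                         // everyone else gets the cached payload
    }
}
```

Every scrape that lands inside the window would then receive the same cached
payload instead of triggering a full stats walk.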

Thanks,
Michael

On Wed, Feb 23, 2022 at 11:06 AM Jiuming Tao
 wrote:
>
> Hi all,
> >
> > 2. When there are hundreds MB metrics data collected, it causes high heap 
> > memory usage, high CPU usage and GC pressure. In the 
> > `PrometheusMetricsGenerator#generate` method, it uses 
> > `ByteBufAllocator.DEFAULT.heapBuffer()` to allocate memory for writing 
> > metrics data. The default size of `ByteBufAllocator.DEFAULT.heapBuffer()` 
> > is 256 bytes, when the buffer resizes, the new buffer capacity is 512 
> > bytes(power of 2) and with `mem_copy` operation.
> > If I want to write 100 MB data to the buffer, the current buffer size is 
> > 128 MB, and the total memory usage is close to 256 MB (256bytes + 512 bytes 
> > + 1k +  + 64MB + 128MB). When the buffer size is greater than netty 
> > buffer chunkSize(16 MB), it will be allocated as UnpooledHeapByteBuf in the 
> > heap. After writing metrics data into the buffer, return it to the client 
> > by jetty, jetty will copy it into jetty's buffer with memory allocation in 
> > the heap, again!
> > In this condition, for the purpose of saving memory, avoid high CPU 
> > usage(too much memory allocations and `mem_copy` operations) and reducing 
> > GC pressure, I want to change `ByteBufAllocator.DEFAULT.heapBuffer()` to 
> > `ByteBufAllocator.DEFAULT.compositeDirectBuffer()`, it wouldn't cause 
> > `mem_copy` operations and huge memory allocations(CompositeDirectByteBuf is 
> > a bit slowly in read/write, but it's worth). After writing data, I will 
> > call the `HttpOutput#write(ByteBuffer)` method and write it to the client, 
> > the method won't cause `mem_copy` (I have to wrap ByteBuf to ByteBuffer, if 
> > ByteBuf wrapped, there will be zero-copy).
>
> The jdk in my local is jdk15, I just noticed that in jdk8, ByteBuffer cannot 
> be extended and implemented. So, if allowed, I will write metrics data to 
> temp files and send it to client by jetty’s send_file. It will be turned out 
> a better performance than `CompositeByteBuf`, and takes lower CPU usage due 
> to I/O blocking.(The /metrics endpoint will be a bit slowly, I believe it’s 
> worth).
> If not allowed, it’s no matter and it also has a better performance than 
> `ByteBufAllocator.DEFAULT.heapBuffer()`(see the first image in original mail).
>
> Thanks,
> Tao Jiuming
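
A rough sketch of the buffer strategy quoted above, assuming Netty's
`CompositeByteBuf` and Jetty's `HttpOutput#write(ByteBuffer)`; the chunking
helpers are illustrative, not the actual Pulsar servlet code:

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import org.eclipse.jetty.server.HttpOutput;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class MetricsBufferSketch {
    static CompositeByteBuf newBuffer() {
        // 1024 = max number of components before Netty consolidates them
        return ByteBufAllocator.DEFAULT.compositeDirectBuffer(1024);
    }

    // Append one rendered chunk of metrics text as its own direct-buffer
    // component, so growing the output never resizes or copies an existing buffer.
    static void appendChunk(CompositeByteBuf out, String chunk) {
        ByteBuf component = ByteBufAllocator.DEFAULT.directBuffer(chunk.length());
        component.writeCharSequence(chunk, StandardCharsets.UTF_8);
        out.addComponent(true, component); // true = advance the composite's writer index
    }

    // Hand the underlying NIO buffers to Jetty one by one, avoiding an extra
    // copy into Jetty's own heap buffer.
    static void writeAndRelease(CompositeByteBuf out, HttpOutput httpOutput) throws IOException {
        try {
            for (ByteBuffer nio : out.nioBuffers()) {
                httpOutput.write(nio);
            }
        } finally {
            out.release();
        }
    }
}
```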


Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-24 Thread Jiuming Tao
> I have a historical question. Why do we write and maintain our own
> code to generate the metrics response instead of using the prometheus
> client library?

Old code; I think at that time, Prometheus was not popular yet.


>> I have learned that the /metrics endpoint will be requested by more than
>> one metrics collect system.
> 
> In practice, when does this happen?
In the cloud, cloud service providers may monitor the cluster, and users also 
monitor it.

>> PrometheusMetricsGenerator#generate will be invoked once in a period(such
>> as 1 minute), the result will be cached and returned for every metrics
>> collect request in the period directly.
> 
> Since there are tradeoffs to the cache duration, we should make the
> period configurable.

Yes, of course

> 2022年2月25日 下午12:41,Michael Marshall  写道:
> 
> I have a historical question. Why do we write and maintain our own
> code to generate the metrics response instead of using the prometheus
> client library?
> 
>> I have learned that the /metrics endpoint will be requested by more than
>> one metrics collect system.
> 
> In practice, when does this happen?
> 
>> PrometheusMetricsGenerator#generate will be invoked once in a period(such
>> as 1 minute), the result will be cached and returned for every metrics
>> collect request in the period directly.
> 
> Since there are tradeoffs to the cache duration, we should make the
> period configurable.
> 
> Thanks,
> Michael
> 
> On Wed, Feb 23, 2022 at 11:06 AM Jiuming Tao
>  wrote:
>> 
>> Hi all,
>>> 
>>> 2. When there are hundreds MB metrics data collected, it causes high heap 
>>> memory usage, high CPU usage and GC pressure. In the 
>>> `PrometheusMetricsGenerator#generate` method, it uses 
>>> `ByteBufAllocator.DEFAULT.heapBuffer()` to allocate memory for writing 
>>> metrics data. The default size of `ByteBufAllocator.DEFAULT.heapBuffer()` 
>>> is 256 bytes, when the buffer resizes, the new buffer capacity is 512 
>>> bytes(power of 2) and with `mem_copy` operation.
>>> If I want to write 100 MB data to the buffer, the current buffer size is 
>>> 128 MB, and the total memory usage is close to 256 MB (256bytes + 512 bytes 
>>> + 1k +  + 64MB + 128MB). When the buffer size is greater than netty 
>>> buffer chunkSize(16 MB), it will be allocated as UnpooledHeapByteBuf in the 
>>> heap. After writing metrics data into the buffer, return it to the client 
>>> by jetty, jetty will copy it into jetty's buffer with memory allocation in 
>>> the heap, again!
>>> In this condition, for the purpose of saving memory, avoid high CPU 
>>> usage(too much memory allocations and `mem_copy` operations) and reducing 
>>> GC pressure, I want to change `ByteBufAllocator.DEFAULT.heapBuffer()` to 
>>> `ByteBufAllocator.DEFAULT.compositeDirectBuffer()`, it wouldn't cause 
>>> `mem_copy` operations and huge memory allocations(CompositeDirectByteBuf is 
>>> a bit slowly in read/write, but it's worth). After writing data, I will 
>>> call the `HttpOutput#write(ByteBuffer)` method and write it to the client, 
>>> the method won't cause `mem_copy` (I have to wrap ByteBuf to ByteBuffer, if 
>>> ByteBuf wrapped, there will be zero-copy).
>> 
>> The jdk in my local is jdk15, I just noticed that in jdk8, ByteBuffer cannot 
>> be extended and implemented. So, if allowed, I will write metrics data to 
>> temp files and send it to client by jetty’s send_file. It will be turned out 
>> a better performance than `CompositeByteBuf`, and takes lower CPU usage due 
>> to I/O blocking.(The /metrics endpoint will be a bit slowly, I believe it’s 
>> worth).
>> If not allowed, it’s no matter and it also has a better performance than 
>> `ByteBufAllocator.DEFAULT.heapBuffer()`(see the first image in original 
>> mail).
>> 
>> Thanks,
>> Tao Jiuming



Re: [discuss] prometheus metrics doesn't satisfy with OpenMetrics format

2022-02-24 Thread Michael Marshall
> I am working on bumping Prometheus client to 0.12.0

What is the motivation for the update? If it is security related, that
might help us make a decision here.

> If we want to be compatible with Open Metrics, I suggest adding metrics
> named `_total` in a release version like 2.10.0, and removing the origin
> metric in the next release like 2.11.0.

My main concern with this solution is that it would increase the size
of the metrics payload by double (for the counters). Pulsar already
produces a lot of metrics. (Tangentially, I think we should make the
number of buckets configurable to cut down on the metrics payload
size, as proposed here [0].)

I think this is the kind of change that we can make during a minor
version bump as long as we give users proper notice. The main drawback
is that users will lose historical context for prometheus data from
one version of Pulsar to the next, but since Prometheus is pushing us
to make this change, it's already a change we'll have to make.

Thanks,
Michael

[0] https://github.com/apache/pulsar/issues/12069
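
For reference, a small sketch of the renaming described in the quoted thread
below, using the Prometheus Java simpleclient; the registry, help text, and
printing are illustrative, and the metric name is one of the examples from the
thread:

```
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;

class OpenMetricsSuffixSketch {
    public static void main(String[] args) {
        CollectorRegistry registry = new CollectorRegistry();

        // Registered under the pre-existing Pulsar name from the thread...
        Counter closed = Counter.build()
                .name("pulsar_connection_closed_total_count")
                .help("Connections closed")
                .register(registry);
        closed.inc();

        // ...but newer simpleclient versions expose counter samples with a
        // mandatory "_total" suffix, i.e. pulsar_connection_closed_total_count_total,
        // which is the breaking rename discussed in this thread.
        registry.metricFamilySamples().asIterator()
                .forEachRemaining(mfs -> mfs.samples
                        .forEach(s -> System.out.println(s.name + " = " + s.value)));
    }
}
```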

On Wed, Feb 23, 2022 at 9:41 AM ZhangJian He  wrote:
>
> ping @enrico @matteo
> Please take a look when you have time.
>
> Thanks
> ZhangJian He
>
> ZhangJian He  于2022年2月13日周日 09:47写道:
>
> > ping @enrico @matteo
> > Please take a look when you have time.
> >
> > Thanks
> > ZhangJian He
> >
> > ZhangJian He  于2022年2月11日周五 14:09写道:
> >
> >> ping @enrico @matteo
> >>
> >> ZhangJian He  于2022年2月8日周二 16:07写道:
> >>
> >>> Sorry for missing the information.
> >>> Before I upgrade the prom client, pulsar metrics is
> >>> ```
> >>>
> >>> - pulsar_connection_closed_total_count
> >>>
> >>> - pulsar_connection_created_total_count
> >>>
> >>> - pulsar_source_received_total_1min
> >>>
> >>> - system_exceptions_total_1min
> >>>
> >>> ```
> >>>
> >>> After
> >>>
> >>> ```
> >>>
> >>> - pulsar_connection_closed_total_count_total
> >>>
> >>> - pulsar_connection_created_total_count_total
> >>>
> >>> - pulsar_source_received_total_1min_total
> >>>
> >>> - system_exceptions_total_1min_total
> >>>
> >>> ```
> >>>
> >>> Prometheus client adds a `_total` suffix in pulsar metrics, because they
> >>> require all counters to have `_total` suffix, if your metric name is
> >>> not ended with `_total`, they will add it.
> >>>
> >>> I believe that the right name which satisfies `OpenMetrics` should be
> >>> ```
> >>>
> >>> - pulsar_connection_closed_total
> >>>
> >>> - pulsar_connection_created_total
> >>>
> >>> - pulsar_source_received_1min_total
> >>>
> >>> - system_exceptions_1min_total
> >>>
> >>> ```
> >>>
> >>> Summary, upgrade prometheus client introduces breaking change for these
> >>> metrics names which did not end with `_total`.
> >>>
> >>>
> >>> PS: If you let the prometheus client add `_total` in the previous
> >>> version, these metrics are not impacted.
> >>>
> >>> Enrico Olivelli  于2022年2月8日周二 15:54写道:
> >>>
>  What happens when you upgrade the Prometheus client ?
> 
>  Can you share some examples of "before" and "after" ?
>  My understanding is that you posted how it looks like "after" the
>  upgrade
> 
>  Thanks for working on this
> 
>  Enrico
> 
>  Il giorno mar 8 feb 2022 alle ore 08:21 ZhangJian He
>   ha scritto:
>  >
>  > Before, I am working on bumping Prometheus client to 0.12.0, but they
>  > introduce a breaking change,
>  > https://github.com/prometheus/client_java/pull/615, adopt the
>  `OpenMetrics
>  > format`, which acquired all counters have `_total` suffix,
>  >
>  > but our metrics now have these metrics, there are not satisfied with
>  the
>  > OpenMetrics format, for example:
>  >
>  > - pulsar_connection_closed_total_count
>  >
>  > - pulsar_connection_created_total_count
>  >
>  > - pulsar_source_received_total_1min
>  >
>  > - system_exceptions_total_1min
>  >
>  >
>  > I want to discuss, Should we adapt the `OpenMetrics format`?
>  >
>  > If we want to be compatible with Open Metrics, I suggest adding
>  metrics
>  > named `_total` in a release version like 2.10.0, and removing the
>  origin
>  > metric in the next release like 2.11.0.
> 
> >>>


Re: [DISCUSS] Dismiss Stale Code Reviews

2022-02-24 Thread Michael Marshall
Closing the loop, we merged the PR to set `dismiss_stale_reviews` to
`false` two days ago.

Thanks,
Michael

On Wed, Feb 23, 2022 at 2:52 AM Li Li  wrote:
>
> +1
>
> > On Feb 23, 2022, at 4:23 PM, Guangning E  wrote:
> >
> > +1
> >
> >
> > Thanks,
> > Guangning
> >
> > Enrico Olivelli  于2022年2月23日周三 16:01写道:
> >
> >> +1
> >>
> >> Enrico
> >>
> >> Il Mer 23 Feb 2022, 07:31 PengHui Li  ha scritto:
> >>
> >>> +1
> >>>
> >>> Before I always thought it was Github added this new feature :)
> >>> Thanks for sharing the great knowledge.
> >>>
> >>> Penghui
> >>>
> >>> On Wed, Feb 23, 2022 at 2:24 PM Michael Marshall 
> >>> wrote:
> >>>
>  Hi All,
> 
>  In my recent PR to update the `.asf.yaml` to protect release branches,
>  I set the `dismiss_stale_reviews` to `true` for PRs targeting master
>  branch [0]. I mistakenly thought this setting would only dismiss PRs
>  updated by force. Instead, all approvals are dismissed when additional
>  commits are added to the PR. The GitHub feature is documented here
>  [1].
> 
>  Since the PR changed the old setting, I want to bring awareness to the
>  change and determine our preferred behavior before changing the
>  setting again.
> 
>  I think we should return to our old setting [2]. The GitHub PR history
>  clearly shows when a contributor/committer approved a PR. I feel that
>  it is up to the "merging" committer to give the final review of the
>  PR's approval history before merging. Further, when dismiss stale code
>  reviews is true, GitHub modifies previous approval "history" in the PR
>  making it look like a reviewer never approved the PR, which I find a
>  bit confusing.
> 
>  Here is a sample PR where approvals were dismissed: [3].
> 
>  Let me know how you think we should proceed.
> 
>  Thanks,
>  Michael
> 
>  [0] https://github.com/apache/pulsar/blob/master/.asf.yaml#L76
>  [1]
> 
> >>>
> >> https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule#creating-a-branch-protection-rule
>  [2] https://github.com/apache/pulsar/pull/14425
>  [3] https://github.com/apache/pulsar/pull/14409
> 
> >>>
> >>
>


Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-24 Thread Michael Marshall
> Old codes, I think at that time, prometheus is not popular yet

I think there is likely more explanation here, since we could have
switched any time in the past few years when prometheus was already
popular.

Before we add caching to our metrics generation, I think we should
consider migrating to the prometheus client. I can't tell from the
prometheus client documentation whether the client has this caching
feature. If it does, then that is an easy win. If it does not, I wonder if
that implies that prometheus endpoints are not meant to be queried too
frequently.

> In cloud, maybe cloud services providers monitors the cluster, and users also 
> monitors it.

Are you able to provide more detail about which cloud service
providers? Is this just a prometheus server scraping metrics?
Regarding users, I would recommend they view prometheus metrics via
prometheus/grafana precisely because it will decrease load on the broker.
I don't mean to be too pedantic, but this whole feature relies on the
premise that brokers are handling frequent calls to the /metrics
endpoint, so I would like to understand the motivation.

Thanks,
Michael



On Thu, Feb 24, 2022 at 10:48 PM Jiuming Tao
 wrote:
>
> > I have a historical question. Why do we write and maintain our own
> > code to generate the metrics response instead of using the prometheus
> > client library?
>
> Old codes, I think at that time, prometheus is not popular yet
>
>
> >> I have learned that the /metrics endpoint will be requested by more than
> >> one metrics collect system.
> >
> > In practice, when does this happen?
> In cloud, maybe cloud services providers monitors the cluster, and users also 
> monitors it.
>
> >> PrometheusMetricsGenerator#generate will be invoked once in a period(such
> >> as 1 minute), the result will be cached and returned for every metrics
> >> collect request in the period directly.
> >
> > Since there are tradeoffs to the cache duration, we should make the
> > period configurable.
>
> Yes, of course
>
> > 2022年2月25日 下午12:41,Michael Marshall  写道:
> >
> > I have a historical question. Why do we write and maintain our own
> > code to generate the metrics response instead of using the prometheus
> > client library?
> >
> >> I have learned that the /metrics endpoint will be requested by more than
> >> one metrics collect system.
> >
> > In practice, when does this happen?
> >
> >> PrometheusMetricsGenerator#generate will be invoked once in a period(such
> >> as 1 minute), the result will be cached and returned for every metrics
> >> collect request in the period directly.
> >
> > Since there are tradeoffs to the cache duration, we should make the
> > period configurable.
> >
> > Thanks,
> > Michael
> >
> > On Wed, Feb 23, 2022 at 11:06 AM Jiuming Tao
> >  wrote:
> >>
> >> Hi all,
> >>>
> >>> 2. When there are hundreds MB metrics data collected, it causes high heap 
> >>> memory usage, high CPU usage and GC pressure. In the 
> >>> `PrometheusMetricsGenerator#generate` method, it uses 
> >>> `ByteBufAllocator.DEFAULT.heapBuffer()` to allocate memory for writing 
> >>> metrics data. The default size of `ByteBufAllocator.DEFAULT.heapBuffer()` 
> >>> is 256 bytes, when the buffer resizes, the new buffer capacity is 512 
> >>> bytes(power of 2) and with `mem_copy` operation.
> >>> If I want to write 100 MB data to the buffer, the current buffer size is 
> >>> 128 MB, and the total memory usage is close to 256 MB (256bytes + 512 
> >>> bytes + 1k +  + 64MB + 128MB). When the buffer size is greater than 
> >>> netty buffer chunkSize(16 MB), it will be allocated as 
> >>> UnpooledHeapByteBuf in the heap. After writing metrics data into the 
> >>> buffer, return it to the client by jetty, jetty will copy it into jetty's 
> >>> buffer with memory allocation in the heap, again!
> >>> In this condition, for the purpose of saving memory, avoid high CPU 
> >>> usage(too much memory allocations and `mem_copy` operations) and reducing 
> >>> GC pressure, I want to change `ByteBufAllocator.DEFAULT.heapBuffer()` to 
> >>> `ByteBufAllocator.DEFAULT.compositeDirectBuffer()`, it wouldn't cause 
> >>> `mem_copy` operations and huge memory allocations(CompositeDirectByteBuf 
> >>> is a bit slowly in read/write, but it's worth). After writing data, I 
> >>> will call the `HttpOutput#write(ByteBuffer)` method and write it to the 
> >>> client, the method won't cause `mem_copy` (I have to wrap ByteBuf to 
> >>> ByteBuffer, if ByteBuf wrapped, there will be zero-copy).
> >>
> >> The jdk in my local is jdk15, I just noticed that in jdk8, ByteBuffer 
> >> cannot be extended and implemented. So, if allowed, I will write metrics 
> >> data to temp files and send it to client by jetty’s send_file. It will be 
> >> turned out a better performance than `CompositeByteBuf`, and takes lower 
> >> CPU usage due to I/O blocking.(The /metrics endpoint will be a bit slowly, 
> >> I believe it’s worth).
> >> If not allowed, it’s no matter and it also has a better performance th

Re: [VOTE] Pulsar Node.js Client Release 1.6.0 Candidate 1

2022-02-24 Thread Guangning E
Thanks, I also noticed another issue with the installation on mac, I'll fix
them and then start a vote for candidate 2

Thanks,
Guangning

Hiroyuki Sakai  于2022年2月25日周五 10:32写道:

> Hi, Guangning
>
> I can't unpack pulsar-client-node-1.6.0.tar.gz.
> Would you please check this file?
>
>
> $ wget
> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-client-node/pulsar-client-node-1.6.0-candidate-1/pulsar-client-node-1.6.0.tar.gz
>
> $ tar xvzf ./pulsar-client-node-1.6.0.tar.gz
> tar: Error opening archive: Unrecognized archive format
>
> Regards,
> Hiroyuki
>
> --
> *From:* Guangning E 
> *Sent:* Tuesday, February 22, 2022 22:52
> *To:* Dev 
> *Subject:* [VOTE] Pulsar Node.js Client Release 1.6.0 Candidate 1
>
> Hi everyone,
> Please review and vote on the release candidate #1 for the version 1.6.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This is the first release candidate for Apache Pulsar Node.js client,
> version 1.6.0.
>
> It fixes the following issues:
> https://github.com/apache/pulsar-client-node/milestone/1?closed=1
>
> Please download the source files and review this release candidate:
> - Review release notes
> - Download the source package (verify shasum and asc) and follow the
> README.md to build and run the Pulsar Node.js client.
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Source files:
>
> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-client-node/pulsar-client-node-1.6.0-candidate-1/
>
> Pulsar's KEYS file containing PGP keys we use to sign the release:
>
> https://dist.apache.org/repos/dist/dev/pulsar/KEYS
>
> SHA-512 checksum:
>
> 64662be31053f76260a6f677bce87a5448f7377f1dae17bfbf69a2947084844ab8e78aee8716ed8dc70a586c1ca160f1d57a5b7e1637a538d2b318bfd62622be
>  pulsar-client-node-1.6.0.tar.gz
>
> The tag to be voted upon:
> v1.6.0-rc.1
> https://github.com/apache/pulsar-client-node/releases/tag/v1.6.0-rc.1
>


[DISCUSS] Likely Deadlock Scenario In 2.9, Possibly 2.10

2022-02-24 Thread Michael Marshall
Hello Pulsar Community,

I am in the process of investigating what I believe to be a broker
deadlock scenario that affects branch-2.9 and likely branch-2.10. (I
haven't had a chance to check branch-2.8 yet.)

I quickly described some of the details in this issue [0]. The
deadlock is related to waiting, on the `metadata-store` thread, for a
pending task to complete on that same `metadata-store` thread.
It is late for me, so I won't be able to investigate more until
tomorrow.
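
Reduced to a plain single-thread executor, the pattern looks roughly like the
sketch below; the real broker code, thread names, and call sites differ, this
only illustrates the self-wait:

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MetadataThreadSelfWaitSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the single "metadata-store" thread.
        ExecutorService metadataStore = Executors.newSingleThreadExecutor();

        Future<?> outer = metadataStore.submit(() -> {
            // Schedule more work on the same executor...
            Future<String> inner = metadataStore.submit(() -> "metadata value");
            try {
                // ...and block the only thread waiting for it: the inner task
                // can never start, so the outer task can never finish.
                return inner.get();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        try {
            outer.get(2, TimeUnit.SECONDS);
        } catch (TimeoutException expected) {
            System.out.println("Deadlocked: the thread is waiting on work only it can run");
        } finally {
            metadataStore.shutdownNow();
        }
    }
}
```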

Since this behavior is in multiple versions of 2.9, we will need to
decide how to handle in-flight release candidates. Given that
2.10.0-candidate-2 hasn't been cut yet, it might be worth delaying
that tag just a little longer until we can get a solution.

I'm sharing before I have all of the details in the interest of
preventing unnecessary release candidates. Hopefully this is helpful.

Thanks,
Michael

[0] https://github.com/apache/pulsar/issues/14438


[GitHub] [pulsar-client-node] tuteng opened a new pull request #196: Fixed pulsar client node mac install script

2022-02-24 Thread GitBox


tuteng opened a new pull request #196:
URL: https://github.com/apache/pulsar-client-node/pull/196


   
   * Fixed the pulsar-client-node install script on macOS
   * Updated the README doc

