Re: [DISCUSS] Dismiss Stale Code Reviews

2022-02-23 Thread Enrico Olivelli
+1

Enrico

On Wed, Feb 23, 2022 at 07:31, PengHui Li wrote:

> +1
>
> Before this, I always thought GitHub had added this as a new feature :)
> Thanks for sharing this knowledge.
>
> Penghui
>
> On Wed, Feb 23, 2022 at 2:24 PM Michael Marshall wrote:
>
> > Hi All,
> >
> > In my recent PR to update the `.asf.yaml` to protect release branches,
> > I set `dismiss_stale_reviews` to `true` for PRs targeting the master
> > branch [0]. I mistakenly thought this setting would only dismiss
> > reviews on force-pushed PRs. Instead, all approvals are dismissed when
> > additional commits are added to the PR. The GitHub feature is
> > documented here [1].
> >
> > Since the PR changed the old setting, I want to bring awareness to the
> > change and determine our preferred behavior before changing the
> > setting again.
> >
> > I think we should return to our old setting [2]. The GitHub PR history
> > clearly shows when a contributor/committer approved a PR. I feel that
> > it is up to the "merging" committer to give the final review of the
> > PR's approval history before merging. Further, when dismiss stale code
> > reviews is enabled, GitHub modifies the previous approval "history" in
> > the PR, making it look like a reviewer never approved the PR, which I
> > find a bit confusing.
> >
> > Here is a sample PR where approvals were dismissed: [3].
> >
> > Let me know how you think we should proceed.
> >
> > Thanks,
> > Michael
> >
> > [0] https://github.com/apache/pulsar/blob/master/.asf.yaml#L76
> > [1]
> >
> https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule#creating-a-branch-protection-rule
> > [2] https://github.com/apache/pulsar/pull/14425
> > [3] https://github.com/apache/pulsar/pull/14409
> >
>
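
For reference, a minimal sketch of the `.asf.yaml` branch-protection stanza under discussion, assuming the ASF-documented `github.protected_branches` schema; the actual values for the Pulsar repo are in [0] above:

```yaml
github:
  protected_branches:
    master:
      required_pull_request_reviews:
        # true  -> GitHub dismisses existing approvals whenever new commits are pushed
        # false -> approvals stay, and the merging committer checks the review history
        dismiss_stale_reviews: false
        required_approving_review_count: 1
```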


Re: [DISCUSS] Dismiss Stale Code Reviews

2022-02-23 Thread Guangning E
+1


Thanks,
Guangning



Re: [DISCUSS] Dismiss Stale Code Reviews

2022-02-23 Thread Li Li
+1




[GitHub] [pulsar-manager] JackrayWang opened a new issue #447: Deploy from bin package error

2022-02-23 Thread GitBox


JackrayWang opened a new issue #447:
URL: https://github.com/apache/pulsar-manager/issues/447


   The pulsar-manager server starts successfully.
   
   When I open http://ip:7750/ I can see the manager logo, but the Chrome window is blank.
   
   http://ip:7750/ui/index.html returns a 404.






[DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-23 Thread Jiuming Tao
Hi all,

1. I have learned that the /metrics endpoint may be requested by more than
one metrics collection system. In that case, I want to reimplement
`PrometheusMetricsServlet` with a sliding window:
PrometheusMetricsGenerator#generate would be invoked at most once per
period (such as 1 minute), and the cached result would be returned directly
for every metrics collection request within that period. This would save
memory and avoid high CPU usage.
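
A minimal sketch of the caching idea, with illustrative names only (not the actual Pulsar classes); concurrent callers may occasionally regenerate twice in the same window, which is acceptable for a sketch:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative cache around an expensive generate() call: at most one fresh
// generation per period, all other requests in the window get the cached bytes.
final class CachedMetricsProvider {
    private static final long PERIOD_MILLIS = TimeUnit.MINUTES.toMillis(1);

    private static final class Snapshot {
        final long generatedAtMillis;
        final byte[] payload;
        Snapshot(long generatedAtMillis, byte[] payload) {
            this.generatedAtMillis = generatedAtMillis;
            this.payload = payload;
        }
    }

    private final AtomicReference<Snapshot> cache = new AtomicReference<>();

    byte[] metrics() {
        long now = System.currentTimeMillis();
        Snapshot current = cache.get();
        if (current != null && now - current.generatedAtMillis < PERIOD_MILLIS) {
            return current.payload;          // serve every collector from the same snapshot
        }
        byte[] fresh = generate();           // expensive: walks broker/topic stats
        cache.set(new Snapshot(now, fresh));
        return fresh;
    }

    private byte[] generate() {
        return new byte[0];                  // stand-in for PrometheusMetricsGenerator#generate
    }
}
```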

2. When hundreds of MB of metrics data are collected, it causes high heap
memory usage, high CPU usage, and GC pressure. The
`PrometheusMetricsGenerator#generate` method uses
`ByteBufAllocator.DEFAULT.heapBuffer()` to allocate memory for writing the
metrics data. The default size of `ByteBufAllocator.DEFAULT.heapBuffer()`
is 256 bytes; every time the buffer resizes, the capacity doubles to the
next power of 2 (512 bytes, 1 KB, ...) and the existing contents are copied
(`mem_copy`).
If I want to write 100 MB of data to the buffer, the final buffer size is
128 MB, and the total memory allocated is close to 256 MB (256 bytes + 512
bytes + 1 KB + ... + 64 MB + 128 MB). When the buffer size is greater than
the Netty buffer chunkSize (16 MB), it is allocated as an UnpooledHeapByteBuf
on the heap. After the metrics data is written into the buffer and returned
to the client through Jetty, Jetty copies it into its own buffer, allocating
heap memory yet again!
To save memory, avoid high CPU usage (from excessive allocations and
`mem_copy` operations), and reduce GC pressure, I want to change
`ByteBufAllocator.DEFAULT.heapBuffer()` to
`ByteBufAllocator.DEFAULT.compositeDirectBuffer()`, which avoids `mem_copy`
operations and huge allocations (CompositeDirectByteBuf is a bit slower to
read/write, but it is worth it). After writing the data, I will call
`HttpOutput#write(ByteBuffer)` to send it to the client; that method does
not cause a `mem_copy` (I have to wrap the ByteBuf as a ByteBuffer, and
wrapping a ByteBuf is zero-copy).
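
A rough sketch of this write path, assuming Netty's `CompositeByteBuf` and Jetty's `HttpOutput`; `appendMetrics` is a hypothetical stand-in for the generator filling the buffer, and the cast to `HttpOutput` assumes the servlet container is Jetty:

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import java.io.IOException;
import java.nio.ByteBuffer;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.HttpOutput;

final class MetricsWriter {
    // Writes the generated metrics to the client without copying them into a single
    // large heap buffer: each component of the CompositeByteBuf is wrapped as a
    // ByteBuffer and handed to Jetty's HttpOutput.
    static void writeMetrics(HttpServletResponse response) throws IOException {
        CompositeByteBuf buf = ByteBufAllocator.DEFAULT.compositeDirectBuffer();
        try {
            appendMetrics(buf);                                       // hypothetical: adds metric chunks as components
            HttpOutput out = (HttpOutput) response.getOutputStream(); // valid when the container is Jetty
            for (ByteBuffer nio : buf.nioBuffers()) {                 // wraps components, no mem_copy
                out.write(nio);
            }
            out.flush();
        } finally {
            buf.release();                                            // return the pooled direct memory
        }
    }

    private static void appendMetrics(CompositeByteBuf buf) {
        // stand-in for PrometheusMetricsGenerator writing into the composite buffer
    }
}
```
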
I tested No. 2 locally, and it turns out the performance is better than with
the heap buffer (see the images linked below).

https://drive.google.com/file/d/1-0drrs9s9kZ2NbbVmzQDwPHdgtpE6QyW/view?usp=sharing
(CompositeDirectByteBuf)
https://drive.google.com/file/d/1-0m15YdsjBudsiweZ4DO7aU3bOFeK17w/view?usp=sharing
(PooledHeapByteBuf)

Thanks,
Tao Jiuming


[DISCUSSION] Support custom and pluggable consumer selector for key shared subscription type

2022-02-23 Thread zhangao
Hi Pulsar Community, 

When we try to introduce Pulsar into our existing system, compatibility is our
first consideration.

Firstly, our production system uses the jump consistent hash algorithm to select
consumers, but Pulsar natively uses a range-based consistent hash algorithm. This
makes it impossible for us to guarantee that the same node consumes data with the
same routing key from both Pulsar and our existing system.

It would be better if Pulsar supported a custom, pluggable consumer selector for
the key shared subscription type.
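
To make the request concrete, a hedged sketch of what a pluggable selector could look like, using the published jump consistent hash algorithm (Lamping & Veach); the interface and class names are hypothetical illustrations, not an existing Pulsar API:

```java
import java.util.List;

// Hypothetical plug-in point: given the hash of a message key, pick a consumer.
interface KeySharedConsumerSelector<C> {
    C select(long keyHash, List<C> consumers);
}

// Jump consistent hash ("A Fast, Minimal Memory, Consistent Hash Algorithm").
final class JumpHashSelector<C> implements KeySharedConsumerSelector<C> {
    @Override
    public C select(long keyHash, List<C> consumers) {
        return consumers.get(jump(keyHash, consumers.size()));
    }

    // Maps a 64-bit key onto one of `buckets` buckets with minimal reshuffling
    // when the bucket count changes. Assumes buckets > 0.
    static int jump(long key, int buckets) {
        long b = -1;
        long j = 0;
        while (j < buckets) {
            b = j;
            key = key * 2862933555777941757L + 1;
            j = (long) ((b + 1) * (2147483648.0 / ((key >>> 33) + 1)));
        }
        return (int) b;
    }
}
```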



Thanks
Zhangao




[1] https://github.com/apache/pulsar/issues/13473
[2] https://github.com/apache/pulsar/pull/13470

Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-23 Thread Enrico Olivelli
Cool

I have also observed these problems but haven't had time to work on a proposal.

Looking forward to seeing your patch.


Enrico



New Pulsar Manager Release

2022-02-23 Thread Thomas O'Neill
Is it possible to create a new release 0.3.0 of pulsar-manager? There have
been a few changes since 0.2.0, including the Log4j2 vulnerability fix, and it
would be nice to have these changes in the Docker image.

-- 



New Innovations, Inc.

Thomas O'Neill
DevOps Engineer
Phone: 330.899.9954


Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-23 Thread PengHui Li
+1

Great work.

Thanks,
Penghui



Re: [discuss] prometheus metrics doesn't satisfy with OpenMetrics format

2022-02-23 Thread ZhangJian He
ping @enrico @matteo
Please take a look when you have time.

Thanks
ZhangJian He

ZhangJian He wrote on Sun, Feb 13, 2022 at 09:47:

> ping @enrico @matteo
> Please take a look when you have time.
>
> Thanks
> ZhangJian He
>
> ZhangJian He wrote on Fri, Feb 11, 2022 at 14:09:
>
>> ping @enrico @matteo
>>
>> ZhangJian He wrote on Tue, Feb 8, 2022 at 16:07:
>>
>>> Sorry for missing the information.
>>> Before I upgrade the prom client, pulsar metrics is
>>> ```
>>>
>>> - pulsar_connection_closed_total_count
>>>
>>> - pulsar_connection_created_total_count
>>>
>>> - pulsar_source_received_total_1min
>>>
>>> - system_exceptions_total_1min
>>>
>>> ```
>>>
>>> After
>>>
>>> ```
>>>
>>> - pulsar_connection_closed_total_count_total
>>>
>>> - pulsar_connection_created_total_count_total
>>>
>>> - pulsar_source_received_total_1min_total
>>>
>>> - system_exceptions_total_1min_total
>>>
>>> ```
>>>
>>> The Prometheus client adds a `_total` suffix to Pulsar metrics because it
>>> requires all counters to have a `_total` suffix; if your metric name does
>>> not end with `_total`, the suffix is added automatically.
>>>
>>> I believe the correct names that satisfy `OpenMetrics` should be
>>> ```
>>>
>>> - pulsar_connection_closed_total
>>>
>>> - pulsar_connection_created_total
>>>
>>> - pulsar_source_received_1min_total
>>>
>>> - system_exceptions_1min_total
>>>
>>> ```
>>>
>>> In summary, upgrading the Prometheus client introduces a breaking change
>>> for the metric names that did not already end with `_total`.
>>>
>>>
>>> PS: If you already let the Prometheus client add `_total` in the previous
>>> version, these metrics are not impacted.
>>>
>>> Enrico Olivelli wrote on Tue, Feb 8, 2022 at 15:54:
>>>
 What happens when you upgrade the Prometheus client ?

 Can you share some examples of "before" and "after" ?
 My understanding is that you posted how it looks "after" the
 upgrade

 Thanks for working on this

 Enrico

 On Tue, Feb 8, 2022 at 08:21, ZhangJian He wrote:
 >
 > I have been working on bumping the Prometheus client to 0.12.0, but it
 > introduces a breaking change,
 > https://github.com/prometheus/client_java/pull/615, adopting the
 > `OpenMetrics format`, which requires all counters to have a `_total`
 > suffix,
 >
 > but we currently have metrics that do not satisfy the OpenMetrics
 > format, for example:
 >
 > - pulsar_connection_closed_total_count
 >
 > - pulsar_connection_created_total_count
 >
 > - pulsar_source_received_total_1min
 >
 > - system_exceptions_total_1min
 >
 >
 > I want to discuss: should we adopt the `OpenMetrics format`?
 >
 > If we want to be compatible with OpenMetrics, I suggest adding the
 > `_total`-suffixed metrics in a release such as 2.10.0, and removing the
 > original metrics in the next release, such as 2.11.0.

>>>
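
As a rough illustration of the transition suggested in the quoted thread above (register the OpenMetrics-compliant name alongside the legacy one for a deprecation window, then drop the legacy one): the builder calls are the standard simpleclient API, while the class, field names, and metric wiring are purely illustrative.

```java
import io.prometheus.client.Counter;

final class ConnectionMetrics {
    // Legacy name: per the thread, client_java 0.12+ appends "_total" at exposition,
    // so this shows up as pulsar_connection_closed_total_count_total.
    static final Counter CONNECTION_CLOSED_LEGACY = Counter.build()
            .name("pulsar_connection_closed_total_count")
            .help("Connections closed (legacy name, deprecated)")
            .register();

    // OpenMetrics-compliant name: exposed as pulsar_connection_closed_total.
    static final Counter CONNECTION_CLOSED = Counter.build()
            .name("pulsar_connection_closed")
            .help("Connections closed")
            .register();

    static void recordConnectionClosed() {
        CONNECTION_CLOSED_LEGACY.inc();  // keep the old series alive during the deprecation window
        CONNECTION_CLOSED.inc();         // new series that dashboards should migrate to
    }
}
```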


[GitHub] [pulsar-helm-chart] sijie merged pull request #235: Remove completed init jobs using ttl

2022-02-23 Thread GitBox


sijie merged pull request #235:
URL: https://github.com/apache/pulsar-helm-chart/pull/235


   






Re: [DISCUSS] PrometheusMetricsServlet performance improvement

2022-02-23 Thread Jiuming Tao
Hi all,

The JDK on my machine is JDK 15, and I just noticed that on JDK 8, ByteBuffer
cannot be extended and implemented. So, if allowed, I will write the metrics
data to temp files and send them to the client via Jetty's send_file. That
should turn out to perform even better than `CompositeByteBuf`, with lower CPU
usage because the work becomes I/O-bound (the /metrics endpoint will be a bit
slower, but I believe it is worth it).
If that is not allowed, it does not matter much: the approach still performs
better than `ByteBufAllocator.DEFAULT.heapBuffer()` (see the first image in
the original mail).
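
A hedged sketch of the temp-file fallback described above: generate the metrics into a temporary file, stream it to the client, then delete it. `Files.copy` is used here as a portable stand-in; handing the file to Jetty's send-file path would be an optimization on top of this and is not shown. `generateMetricsInto` is a hypothetical stand-in for the generator writing to the file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.servlet.http.HttpServletResponse;

final class TempFileMetricsWriter {
    static void sendMetricsViaTempFile(HttpServletResponse response) throws IOException {
        Path tmp = Files.createTempFile("pulsar-metrics-", ".txt");
        try {
            generateMetricsInto(tmp);                        // assumption: writes the metrics text to the file
            response.setContentLengthLong(Files.size(tmp));
            Files.copy(tmp, response.getOutputStream());     // streams without building one big heap buffer
        } finally {
            Files.deleteIfExists(tmp);                       // always clean up the temp file
        }
    }

    private static void generateMetricsInto(Path file) throws IOException {
        // stand-in for PrometheusMetricsGenerator writing directly to the file
    }
}
```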

Thanks,
Tao Jiuming

RE: [DISCUSS] Add icebox label for issues and PRs that have been inactive for more than 4 weeks

2022-02-23 Thread * yaalsn
Hi All,

This PR https://github.com/apache/pulsar/pull/14390 can help us. If an issue
or PR has had no activity for 30 days, a GitHub Action will tag it with a
Stale label.
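
For context, a generic sketch of such a workflow built on the `actions/stale` action; the values below are illustrative defaults, not necessarily what the PR configures:

```yaml
# .github/workflows/stale.yml (illustrative)
name: Label stale issues and PRs
on:
  schedule:
    - cron: '0 0 * * *'        # run once a day

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v4
        with:
          days-before-stale: 30   # no activity for 30 days -> apply the label
          days-before-close: -1   # never auto-close, only label
          stale-issue-label: Stale
          stale-pr-label: Stale
```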

On 2022/01/12 16:15:16 PengHui Li wrote:
> Hi Pulsar Community,
> 
> I want to start a discussion about introducing an icebox label that the
> pulsar bot can add to an issue or PR automatically, to help us focus on the
> active PRs and issues and avoid missing PR merges, PR reviews, and issue
> triage.
> 
> It looks like the following:
> 
> 1. If the issue or PR is inactive for more than 4 weeks, the pulsar bot add
> the icebox label
> 2. If the issue or PR is re-active again, the pulsar bot remove the icebox
> label
> 
> How to determine the PR or issue is inactive?
> 
> 1. No comments for 4 weeks.
> 2. No code review(approve, comment, or change request) for 4 weeks.
> 3. No commits for 4 weeks.
> 4. No description update for 4 weeks.
> 
> How to determine the PR or issue is re-active?
> 
> With the icebox label first and:
> 
> 1. New comment added
> 2. New commits pushed
> 3. Description updated
> 4. New code review updates
> 
> Note: we should not add the icebox label to approved PRs
> 
> This will help us to focus on the active issues and PRs so that we can
> track the active issues and PRs better first. After we get this part done
> (maybe keep active opened PR under 20 and active opened issue under 50?),
> we can move forward to continue to handle the stale PRs (already discussed
> in https://lists.apache.org/thread/k7lyw0q0fyc729w0fqlj5vqng5ny63f2).
> 
> Thanks,
> Penghui
> 

[GitHub] [pulsar-site] Paul-TT opened a new pull request #6: config and style updates

2022-02-23 Thread GitBox


Paul-TT opened a new pull request #6:
URL: https://github.com/apache/pulsar-site/pull/6


   I left some config updates out of my previous pull request.  I also 
corrected a few styles that were pointed out to me.  
   
   I added the sineWaves import back to the index.js file because it was 
failing without it.  






Re: [DISCUSS] Add icebox label for issues and PRs that have been inactive for more than 4 weeks

2022-02-23 Thread PengHui Li
> This PR https://github.com/apache/pulsar/pull/14390 can help us. If an
> issue or PR has had no activity for 30 days, a GitHub Action will tag it
> with a Stale label.

Thanks for the great work, I have merged the PR.

Penghui



Re: New Pulsar Manager Release

2022-02-23 Thread Li Li
+1, it's a good idea, I will deal with it.

> On Feb 23, 2022, at 9:39 PM, Thomas O'Neill  wrote:
> 
> Is it possible to create a new release 0.3.0 of pulsar-manager?  There have
> been a few changes since 0.2.0 including the Log4j2 vulnerability, and it
> would be nice to have these changes in the docker image.
> 



Re: [ANNOUNCE] Apache Pulsar Go Client 0.8.0 released

2022-02-23 Thread Jia Zhai
Congrats! Thanks for the great work, rxl.

On Wed, Feb 23, 2022 at 2:35 PM Enrico Olivelli  wrote:

> On Tue, Feb 22, 2022 at 22:55, Matteo Merli wrote:
>
> > It was released correctly, same as all the prev releases:
> >
> https://dist.apache.org/repos/dist/release/pulsar/pulsar-client-go-0.8.0/
>
>
> Thanks
> Enrico
>
>
> > --
> > Matteo Merli
> > 
> >
> > On Mon, Feb 21, 2022 at 1:03 PM Enrico Olivelli 
> > wrote:
> > >
> > > Hi,
> > > Did you store the source code tarball on the dist.apache.org website?
> > >
> > > For a valid Apache release we must release the source tarball in the
> > > official repo.
> > >
> > >
> > > Enrico
> > >
> > > On Mon, Feb 21, 2022 at 04:25, r...@apache.org wrote:
> > >
> > > > The Apache Pulsar team is proud to announce Apache Pulsar Go Client
> > version
> > > > 0.8.0.
> > > >
> > > > Pulsar is a highly scalable, low latency messaging platform running
> on
> > > > commodity hardware. It provides simple pub-sub semantics over topics,
> > > > guaranteed at-least-once delivery of messages, automatic cursor
> > management
> > > > for
> > > > subscribers, and cross-datacenter replication.
> > > >
> > > > For Pulsar release details and downloads, visit:
> > > > https://github.com/apache/pulsar-client-go/releases/tag/v0.8.0
> > > >
> > > > Release Notes are at:
> > > > https://github.com/apache/pulsar-client-go/blob/master/CHANGELOG.md
> > > >
> > > > We would like to thank the contributors that made the release
> possible.
> > > >
> > > > Regards,
> > > >
> > > > The Pulsar Team
> > > >
> >
>