Re: [ANNOUNCE] Apache Pulsar 2.7.3 released

2021-08-11 Thread Enrico Olivelli
Great!
Thank you, Congbo

Enrico

On Wed, Aug 11, 2021 at 08:33 Jinfeng Huang
wrote:

> Super excited to hear about it.
> Thank you very much for your efforts~
>
> Best Regards,
> Jennifer
>
>
> On Wed, Aug 11, 2021 at 12:35 PM Yu Liu 
> wrote:
>
>> Hi Congbo,
>> Thanks for your great work!
>>
>> Hi all,
>> The PR of the 2.7.3 announcement blog has been merged but not shown on the
>> Pulsar website. Will keep you updated once it is available on the website.
>>
>> On Wed, Aug 11, 2021 at 11:21 AM r...@apache.org
>> wrote:
>>
>> > Cool, thanks congbo
>> >
>> > --
>> > Thanks
>> > Xiaolong Ran
>> >
>> > 丛搏 wrote on Wed, Aug 11, 2021 at 9:10 AM:
>> >
>> > > The Apache Pulsar team is proud to announce Apache Pulsar version
>> 2.7.3.
>> > >
>> > > Pulsar is a highly scalable, low latency messaging platform running on
>> > > commodity hardware. It provides simple pub-sub semantics over topics,
>> > > guaranteed at-least-once delivery of messages, automatic cursor
>> > management
>> > > for
>> > > subscribers, and cross-datacenter replication.
>> > >
>> > > For Pulsar release details and downloads, visit:
>> > >
>> > > https://pulsar.apache.org/download
>> > >
>> > > Release Notes are at:
>> > > http://pulsar.apache.org/release-notes
>> > >
>> > > We would like to thank the contributors that made the release
>> possible.
>> > >
>> > > Regards,
>> > >
>> > > The Pulsar Team
>> > >
>> >
>>
>


Re: Lack of retries on TooManyRequests

2021-08-11 Thread Rajan Dhabalia
The history behind introducing the TooManyRequests error is to handle
backpressure for ZooKeeper by throttling the large number of concurrent
topic loads during a broker cold restart. Therefore, Pulsar has lookup
throttling at both the client and server side that slows down lookups,
because a lookup ultimately triggers topic loading at the server side. So,
when a client sees TooManyRequests errors, the client should retry the
operation and will eventually reconnect to the broker. TooManyRequests
cannot harm the broker because the broker already has a safeguard to
reject a flood of requests.

I am not sure what problem PR
https://github.com/apache/pulsar/pull/6584
tries to solve, but it should not solve it by making TooManyRequests
non-retriable. TooManyRequests is a retriable error and the client should
retry. Also, it should definitely not close the producer/consumer due to
this error; otherwise it can bring down the entire application, which
depends on the availability of the Pulsar client entities.

Pulsar lookup is an operation similar to other operations such as connect,
publish, and subscribe. So, I don't think it needs special treatment with
a separate timeout config, and we can avoid the complexity introduced in
PR #11627, which caches and depends on the previously seen exception for
lookup retry. Anyway, removing TooManyRequests from the non-retriable
error list will simplify the client behavior and avoid the complexity of
PR #11627.

Thanks,
Rajan
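Rajan's point, that TooManyRequests should be classified as retriable and handled with backoff rather than by closing the producer/consumer, could be sketched as follows. This is an illustrative sketch under assumed names (LookupRetrySketch, ErrorCode, backoffMillis are all hypothetical), not the actual Pulsar client code.

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch only: a client-side classification of lookup errors
// where TooManyRequests is retriable, plus capped exponential backoff with
// jitter before the next attempt. Names are hypothetical, not the Pulsar API.
public class LookupRetrySketch {

    enum ErrorCode { TOO_MANY_REQUESTS, AUTHORIZATION_ERROR, TIMEOUT }

    // Only a permanent failure (e.g. authorization) should stop retries;
    // TooManyRequests just means "the broker is busy, try again later".
    static boolean isRetriable(ErrorCode code) {
        return code != ErrorCode.AUTHORIZATION_ERROR;
    }

    // Exponential backoff with jitter, capped at capMs, so a flood of
    // retrying clients does not hammer a cold-restarting broker in lockstep.
    static long backoffMillis(int attempt, long baseMs, long capMs) {
        long exp = Math.min(capMs, baseMs * (1L << Math.min(attempt, 20)));
        return ThreadLocalRandom.current().nextLong(exp / 2, exp + 1);
    }

    public static void main(String[] args) {
        long d = backoffMillis(3, 100, 30_000); // somewhere in [400, 800] ms
        System.out.println(isRetriable(ErrorCode.TOO_MANY_REQUESTS)
                && !isRetriable(ErrorCode.AUTHORIZATION_ERROR)
                && d >= 400 && d <= 800); // prints true
    }
}
```

On each failed lookup the caller would check isRetriable, sleep for backoffMillis(attempt, ...), and try again until an overall operation timeout expires, without ever tearing down the producer or consumer.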

On Mon, Aug 9, 2021 at 12:54 AM Ivan Kelly  wrote:

> > Suppose you have about a million topics and your Pulsar cluster goes down
> > (say, ZK down). ..many millions of producers and consumers are now
> > anxiously awaiting the cluster to comeback. .. fun experience for the
> first
> > broker that comes up.   Don't ask me how I know,  ref blame
> > ServerCnx.java#L429
> > <
> https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java#L429
> >.
> > The broker limit was added to get through a cold restart.
>
> Ok. Makes sense. The scenarios we've been seeing issues with have had
> modest numbers of topics.
>
> -Ivan
>
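The broker-side safeguard discussed above, a cap on concurrent lookups that rejects the overflow with TooManyRequests instead of queueing it, could look roughly like this sketch. The class and method names are hypothetical; this is not the actual ServerCnx logic.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of a permit-based lookup throttle: beyond the
// configured concurrency limit, lookups are rejected immediately with a
// TooManyRequests-style answer so a cold-restarting broker is not flooded.
public class LookupThrottleSketch {

    // Package-private so the demo can drain permits; a real broker would
    // release each permit when the asynchronous lookup completes.
    final Semaphore permits;

    LookupThrottleSketch(int maxConcurrentLookups) {
        this.permits = new Semaphore(maxConcurrentLookups);
    }

    // Returns "ok" if a permit was available, "TooManyRequests" otherwise;
    // the rejected client is expected to retry later with backoff.
    String handleLookup(Runnable doLookup) {
        if (!permits.tryAcquire()) {
            return "TooManyRequests";
        }
        try {
            doLookup.run();
            return "ok";
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        LookupThrottleSketch throttle = new LookupThrottleSketch(1);
        System.out.println(throttle.handleLookup(() -> {})); // prints ok
        // Exhaust the single permit to simulate a flood during cold restart.
        throttle.permits.tryAcquire();
        System.out.println(throttle.handleLookup(() -> {})); // prints TooManyRequests
    }
}
```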


Removing the usage of Reflection from DefaultImplementation

2021-08-11 Thread Enrico Olivelli
Hello,
I have sent a PR to clean up the DefaultImplementation class and make it
simpler, so that we no longer use reflection for each call to the
DefaultImplementation class.

Summary of the contents of the patch:
- introduce an interface, PulsarClientImplementationBinding, that defines
the binding between the API and the Impl (this interface lives in the "api"
module)
- load the implementation PulsarClientImplementationBindingImpl (that lives
in the "impl" module) only once, during Client API bootstrap
- the  PulsarClientImplementationBindingImpl class accesses Pulsar Impl
classes directly, without Java reflection

With this change:
- it will be easier to work on the Pulsar client, as the string-based
indirection is gone, so the IDE can understand the relationships between
the classes
- we no longer hide exceptions thrown through DefaultImplementation
- at runtime we save CPU cycles
- at runtime there is no more uncertainty about which classloader loads
the implementation class (so we do not need to rely on the context
classloader or other tricks)


More details in the PR
https://github.com/apache/pulsar/pull/11636

Please take a look; I believe it is a good step forward in many directions.

Regards
Enrico
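A minimal sketch of the pattern the PR describes: an interface in the "api" module, a concrete binding in the "impl" module, resolved exactly once at bootstrap. The method shown here is a simplified stand-in; the real PulsarClientImplementationBinding exposes many more factory methods.

```java
// Illustrative sketch of the binding pattern, not the PR's actual code.
public class BindingSketch {

    // "api" module: the only type the API classes know about.
    interface PulsarClientImplementationBinding {
        String newMessageId(long ledgerId, long entryId, int partition);
    }

    // "impl" module: calls implementation classes directly, no reflection,
    // so both the IDE and the JIT see the real call graph.
    static final class PulsarClientImplementationBindingImpl
            implements PulsarClientImplementationBinding {
        @Override
        public String newMessageId(long ledgerId, long entryId, int partition) {
            // Stand-in for constructing a real MessageIdImpl.
            return ledgerId + ":" + entryId + ":" + partition;
        }
    }

    // Loaded once during client API bootstrap; afterwards every call is a
    // plain virtual call, with no per-call Class.forName/Method.invoke cost
    // and no ambiguity about which classloader produced the implementation.
    private static final PulsarClientImplementationBinding BINDING =
            new PulsarClientImplementationBindingImpl();

    public static void main(String[] args) {
        System.out.println(BINDING.newMessageId(3, 1, -1)); // prints 3:1:-1
    }
}
```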


Re: Lack of retries on TooManyRequests

2021-08-11 Thread Ivan Kelly
Thanks Rajan, will reply on the PR.
https://github.com/apache/pulsar/pull/11627/

On Wed, Aug 11, 2021 at 10:06 AM Rajan Dhabalia  wrote:
>
> *The history behind introducing TooManyRequest error is to handle
> backpressure for zookeeper by throttling a large number of concurrent
> topics loading during broker cold restart. Therefore, pulsar has lookup
> throttling at both client and server-side that slows down lookup because
> lookup ultimately triggers topic loading at server side. So, when a client
> sees TooManyRequest errors, the client should retry to perform this
> operation and the client will eventually reconnect to the broker,
> TooManyRequest can not harm the broker because broker already has a
> safeguard to reject the flood of the requests. I am not sure what problem
> https://github.com/apache/pulsar/pull/6584
>  PR tries to solve but it
> should not solve it by making TooManyRequest non-retriable. TooManyRequest
> is a retriable error and the client should retry. Also, it should
> definitely not close the producer/consumer due to this error otherwise it
> can bring down the entire application which depends on the availability of
> the pulsar client entities.Pulsar lookup is an operation similar to other
> operations such as: connect, publish, subscribe, etc. So, I don’t think it
> needs special treatment with a separate timeout config and we can avoid the
> complexity introduced in PR #11627 that caches and depends on the
> previously seen exception for lookup retry. Anyways, removing
> TooManyRequest from the non-retriable error list will simplify the client
> behavior and we can avoid the complexity of PR: #11627
> Thanks,Rajan*
>
> On Mon, Aug 9, 2021 at 12:54 AM Ivan Kelly  wrote:
>
> > > Suppose you have about a million topics and your Pulsar cluster goes down
> > > (say, ZK down). ..many millions of producers and consumers are now
> > > anxiously awaiting the cluster to comeback. .. fun experience for the
> > first
> > > broker that comes up.   Don't ask me how I know,  ref blame
> > > ServerCnx.java#L429
> > > <
> > https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/ServerCnx.java#L429
> > >.
> > > The broker limit was added to get through a cold restart.
> >
> > Ok. Makes sense. The scenarios we've been seeing issues with have had
> > modest numbers of topics.
> >
> > -Ivan
> >


[GitHub] [pulsar-manager] hollander-cegeka opened a new issue #407: OpenShift OKD will not run the pulsar-manager container as a root user

2021-08-11 Thread GitBox


hollander-cegeka opened a new issue #407:
URL: https://github.com/apache/pulsar-manager/issues/407


   OpenShift OKD does not support running containers as a root user. Is it
possible to change the Dockerfile in a way that it is not running as a root
user? Otherwise these errors will occur:

```
Starting Pulsar Manager Front end
nginx: [alert] could not open error log file: open() "/var/lib/nginx/logs/error.log" failed (13: Permission denied)
2021/08/11 11:27:15 [warn] 8#8: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:3
2021/08/11 11:27:15 [emerg] 8#8: mkdir() "/var/tmp/nginx/client_body" failed (13: Permission denied)
Starting Pulsar Manager Back end
touch: /pulsar-manager/supervisor.sock: Permission denied
chmod: /pulsar-manager/supervisor.sock: No such file or directory
Start Pulsar Manager by specifying a configuration file.
Traceback (most recent call last):
  File "/usr/bin/supervisord", line 11, in 
    load_entry_point('supervisor==3.3.4', 'console_scripts', 'supervisord')()
  File "/usr/lib/python2.7/site-packages/supervisor/supervisord.py", line 357, in main
    go(options)
  File "/usr/lib/python2.7/site-packages/supervisor/supervisord.py", line 367, in go
    d.main()
  File "/usr/lib/python2.7/site-packages/supervisor/supervisord.py", line 71, in main
    self.options.make_logger()
  File "/usr/lib/python2.7/site-packages/supervisor/options.py", line 1423, in make_logger
    stdout = self.nodaemon,
  File "/usr/lib/python2.7/site-packages/supervisor/loggers.py", line 346, in getLogger
    handlers.append(RotatingFileHandler(filename,'a',maxbytes,backups))
  File "/usr/lib/python2.7/site-packages/supervisor/loggers.py", line 172, in __init__
    FileHandler.__init__(self, filename, mode)
  File "/usr/lib/python2.7/site-packages/supervisor/loggers.py", line 98, in __init__
    self.stream = open(filename, mode)
IOError: [Errno 13] Permission denied: '/pulsar-manager/supervisord.log'
```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] Apache Pulsar 2.7.3 released

2021-08-11 Thread Yu Liu
Hi all,

For Pulsar 2.7.3, 34 contributors provided improvements and bug fixes that
delivered 79 commits.

Highlights:
- Cursor reads adhere to the dispatch byte rate limiter setting and no
longer cause unexpected results (PR-11249).
- The ledger rollover scheduled task runs as expected (PR-11226).

For details of the most noteworthy changes, see the Pulsar 2.7.3
announcement blog.


On Wed, Aug 11, 2021 at 12:34 PM Yu Liu  wrote:

> Hi Congbo,
> Thanks for your great work!
>
> Hi all,
> The PR of the 2.7.3 announcement blog has been merged but not shown on the
> Pulsar website. Will keep you updated once it is available on the website.
>
> On Wed, Aug 11, 2021 at 11:21 AM r...@apache.org 
> wrote:
>
>> Cool, thanks congbo
>>
>> --
>> Thanks
>> Xiaolong Ran
>>
>> 丛搏 wrote on Wed, Aug 11, 2021 at 9:10 AM:
>>
>> > The Apache Pulsar team is proud to announce Apache Pulsar version 2.7.3.
>> >
>> > Pulsar is a highly scalable, low latency messaging platform running on
>> > commodity hardware. It provides simple pub-sub semantics over topics,
>> > guaranteed at-least-once delivery of messages, automatic cursor
>> management
>> > for
>> > subscribers, and cross-datacenter replication.
>> >
>> > For Pulsar release details and downloads, visit:
>> >
>> > https://pulsar.apache.org/download
>> >
>> > Release Notes are at:
>> > http://pulsar.apache.org/release-notes
>> >
>> > We would like to thank the contributors that made the release possible.
>> >
>> > Regards,
>> >
>> > The Pulsar Team
>> >
>>
>


Re: [Discuss] Optimize the performance of creating Topic

2021-08-11 Thread Lin Lin



On 2021/08/03 11:12:34, Ivan Kelly  wrote: 
> > Creating a topic will first check whether the topic already exists.
> > The verification will read all topics under the namespace, and then
> > traverse these topics to see if the topic already exists.
> > When there are a large number of topics under the namespace(about 300,000
> > topics),
> > less than 10 topics can be created in one second.
> Why do we need to read all topics at all? We really just need to check
> whether TOPIC or TOPIC-partition-0 exist.
> 
> Even if they do not exist, is there anything to stop one client
> creating TOPIC and another creating TOPIC-partition-0?
> 
> -Ivan
> 

For example, consider the test case
"testCreatePartitionedTopicHavingNonPartitionTopicWithPartitionSuffix": some
non-partitioned topics have the partition suffix. In that case, we cannot
use the cache to check anymore, and we have to traverse
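Ivan's earlier suggestion, checking the two candidate names directly instead of listing every topic in the namespace, could be sketched as follows, using a plain Set as a hypothetical stand-in for the metadata store (the real check would be an existence query against the metadata service):

```java
import java.util.Set;

// Illustrative sketch only: an O(1)-per-path existence check for the two
// candidate names, instead of listing and traversing all topics in the
// namespace. The Set stands in for the metadata store.
public class TopicExistsSketch {

    static boolean exists(Set<String> store, String topic) {
        // Check the non-partitioned name and the first-partition name,
        // which is enough to detect a clash with a partitioned topic.
        return store.contains(topic) || store.contains(topic + "-partition-0");
    }

    public static void main(String[] args) {
        Set<String> store = Set.of(
                "persistent://t/ns/a",
                "persistent://t/ns/b-partition-0");
        System.out.println(exists(store, "persistent://t/ns/a")); // true
        System.out.println(exists(store, "persistent://t/ns/b")); // true
        System.out.println(exists(store, "persistent://t/ns/c")); // false
    }
}
```

As the thread notes, topics whose non-partitioned names already carry a "-partition-N" suffix are exactly the case where such a shortcut (or a cache keyed on it) breaks down.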