[GitHub] [pulsar-manager] meimeitou commented on issue #465: Docker Failed to Initialize Postgresql Database

2022-06-24 Thread GitBox


meimeitou commented on issue #465:
URL: https://github.com/apache/pulsar-manager/issues/465#issuecomment-1165315126

   same problem


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@pulsar.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [pulsar-manager] meimeitou commented on issue #465: Docker Failed to Initialize Postgresql Database

2022-06-24 Thread GitBox


meimeitou commented on issue #465:
URL: https://github.com/apache/pulsar-manager/issues/465#issuecomment-1165315912

   ```
   Starting PostGreSQL Server
   Adding group `pulsar' (GID 1000) ...
   Done.
   Adding user `pulsar' ...
   Adding new user `pulsar' (1000) with group `pulsar' ...
   Creating home directory `/home/pulsar' ...
   Copying files from `/etc/skel' ...
   Changing the user information for pulsar
   Enter the new value, or press ENTER for the default
   Use of uninitialized value $answer in chop at /usr/sbin/adduser line 582.
   Use of uninitialized value $answer in pattern match (m//) at /usr/sbin/adduser line 583.
   /pulsar-manager/startup.sh: 21: initdb: not found
   /pulsar-manager/startup.sh: 22: pg_ctl: not found
   createdb: error: could not connect to database template1: could not connect to server: No such file or directory
   Is the server running locally and accepting
   connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
   psql: error: could not connect to server: No such file or directory
   Is the server running locally and accepting
   connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
   Full Name []:   Room Number []: Work Phone []:  Home Phone []:  Other []: Is the information correct? [Y/n] Starting Pulsar Manager Front end
   ```
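The `initdb: not found` / `pg_ctl: not found` lines suggest the PostgreSQL server binaries are either missing from the image or installed under a versioned directory that is not on `PATH`. A minimal diagnostic sketch — the `find_pg_bin` helper name and the Debian-style `/usr/lib/postgresql/<version>/bin` layout are assumptions, not part of the project's startup.sh:

```shell
# Hypothetical helper: print the directory containing initdb under the given
# root (e.g. /usr/lib/postgresql on Debian-based images), or nothing if the
# binaries cannot be found there at all.
find_pg_bin() {
  found="$(find "$1" -name initdb -type f 2>/dev/null | head -n 1)"
  if [ -n "$found" ]; then
    dirname "$found"
  fi
}

# Usage sketch, to run before startup.sh invokes initdb/pg_ctl:
#   pg_bin="$(find_pg_bin /usr/lib/postgresql)"
#   [ -n "$pg_bin" ] && export PATH="$pg_bin:$PATH"
```

If `find_pg_bin` prints nothing even for the whole filesystem, the server packages were likely never installed in the image, which would point at a Dockerfile regression rather than a PATH problem.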





[GitHub] [pulsar-site] urfreespace merged pull request #122: update build script

2022-06-24 Thread GitBox


urfreespace merged PR #122:
URL: https://github.com/apache/pulsar-site/pull/122





[GitHub] [pulsar-manager] yuvalgut commented on issue #465: Docker Failed to Initialize Postgresql Database

2022-06-24 Thread GitBox


yuvalgut commented on issue #465:
URL: https://github.com/apache/pulsar-manager/issues/465#issuecomment-1165373182

   Same here. I think it has been happening ever since this commit:
   https://github.com/apache/pulsar-manager/commit/46f9e0a34960d9a775b4cea1515daafe621a21e4





Re: [DISCUSS] PIP-172: Introduce the HEALTH_CHECK command in the binary protocol

2022-06-24 Thread Cong Zhao
Hi Michael,

> I think the current PIP might need some clarification on how errors
> are handled. For example, if a single broker fails to respond because
> it was being restarted, how would the client handle that kind of
> failure with this feature?

I don't think we need to handle it, because the client API should be consistent 
with the `HEALTH_CHECK` command result, and users can retry if they need to.
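A minimal sketch of that retry idea — the function name, parameters, and the placeholder for the real HEALTH_CHECK call are all illustrative, not part of any Pulsar client API:

```shell
# Hedged sketch: retry a health check up to max_attempts times. "$@" stands in
# for whatever command actually performs the HEALTH_CHECK call.
retry_health_check() {
  max_attempts="$1"; shift
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0    # healthy
    fi
    attempt=$((attempt + 1))
    # a real client would back off (sleep) between attempts
  done
  return 1        # still unhealthy after all attempts
}
```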

> I wasn't suggesting that the client would need to ask the broker for
> each of the producers/consumers, but rather that the client would
> monitor producers/consumers locally and make decisions about cluster
> health. For example, if a producer cannot connect to its target topic
> after some amount of time or some number of retries, or if a producer
> can connect but cannot publish a message successfully within some
> amount of time, then the client could consider the cluster to be
> unhealthy.
> 
> > This proposal mainly provides a means to check whether there is an 
> > available topic in the cluster, and I think this is meaningful in most cases.
> 
> The client will discover if one of its targeted topics is unavailable,
> so instead of monitoring the broker's health check topic, I think the
> client should monitor/failover when a targeted topic is "unavailable"
> for some configured length of time.
> 
> I support making the auto-failover logic more robust, but I don't
> think the broker health check is the right signal to use for overall
> cluster health. In my view, the broker's health check is meant to
> signal to orchestrators (like Kubernetes) when a broker ought to be
> restarted.

For the currently connected cluster, we really can't conclude that the current 
topic is unavailable just because the `HEALTH_CHECK` command result is 
unhealthy; the current means of auto-failover are relatively crude. I think we 
can improve it by adding extra measures such as those you mentioned above, but 
that doesn't fall within the scope of this proposal.

Also, for the main problem this proposal wants to solve (how to check that a 
new cluster is healthy), do you have a better idea?

Thanks,
Cong Zhao

On 2022/06/24 05:25:46 Michael Marshall wrote:
> Thanks for your replies Cong Zhao.
> 
> I think the current PIP might need some clarification on how errors
> are handled. For example, if a single broker fails to respond because
> it was being restarted, how would the client handle that kind of
> failure with this feature?
> 
> > This is a good definition of cluster health, but we can't check all topics; 
> > that would add a lot of load on client and broker.
> 
> I wasn't suggesting that the client would need to ask the broker for
> each of the producers/consumers, but rather that the client would
> monitor producers/consumers locally and make decisions about cluster
> health. For example, if a producer cannot connect to its target topic
> after some amount of time or some number of retries, or if a producer
> can connect but cannot publish a message successfully within some
> amount of time, then the client could consider the cluster to be
> unhealthy.
> 
> > This proposal mainly provides a means to check whether there is an 
> > available topic in the cluster, and I think this is meaningful in most cases.
> 
> The client will discover if one of its targeted topics is unavailable,
> so instead of monitoring the broker's health check topic, I think the
> client should monitor/failover when a targeted topic is "unavailable"
> for some configured length of time.
> 
> I support making the auto-failover logic more robust, but I don't
> think the broker health check is the right signal to use for overall
> cluster health. In my view, the broker's health check is meant to
> signal to orchestrators (like Kubernetes) when a broker ought to be
> restarted.
> 
> Thanks,
> Michael
> 
> 
> On Thu, Jun 23, 2022 at 12:35 AM Cong Zhao  wrote:
> >
> > Hi Michael,
> >
> > Thanks for your feedback.
> >
> > > I define a client's primary cluster as "healthy" when it is "healthy"
> > for all of its producers and consumers. I define a healthy producer as
> > one that can connect to a topic and publish messages within certain
> > latency and throughput thresholds (configured by the user), and I
> > define a healthy consumer as one that can connect to a topic and
> > consume messages when there are messages to be consumed (possibly
> > within a certain latency?).
> >
> > This is a good definition of cluster health, but we can't check all topics; 
> > that would add a lot of load on client and broker.
> >
> > > By the above definitions, I don't think the broker's health check will
> > give us the right notion of "healthy" because that health check
> > monitors producing/consuming to/from the health check topic, not the
> > client's target topics. One primary difference is that a health check
> > topic could have a different persistence policy, which means the
> > client could incorrectly classify the broker as healthy when there
> > aren't enough avail
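The client-local health definition quoted in this thread could be reduced to a check like the following hedged sketch; the function name, the millisecond units, and the single-latency simplification are all assumptions for illustration:

```shell
# Hedged sketch of the client-local definition: a producer counts as healthy
# when its most recent publish completed within the user-configured latency
# threshold. Values are in milliseconds and purely illustrative.
producer_healthy() {
  last_publish_latency_ms="$1"
  threshold_ms="$2"
  [ "$last_publish_latency_ms" -le "$threshold_ms" ]
}
```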

Any sign of 2.8.4?

2022-06-24 Thread Frank Kelly
Hi All,
The last email I saw about a 2.8.4 release was May 31st. Is there still a 
possibility, or are folks working on 2.9.3 and upwards?

Thanks!

-Frank

-- 

Frank Kelly | Principal Engineer, Platform Team

100 High Street, 7th Floor, Boston, MA 02110 USA

www.cogitocorp.com | Cogito on LinkedIn


Confidentiality Notice : This email is the property of Cogito Corporation.
This message, including any attachments, is for the sole use of the
intended recipient(s) and may contain confidential and/or privileged
information. Any unauthorized review, use, disclosure or distribution is
prohibited. If you are not the intended recipient, please contact the
sender by reply email and destroy all copies of the original message.


[GitHub] [pulsar-test-infra] dependabot[bot] opened a new pull request, #51: Bump jsdom from 16.2.2 to 16.7.0 in /paths-filter

2022-06-24 Thread GitBox


dependabot[bot] opened a new pull request, #51:
URL: https://github.com/apache/pulsar-test-infra/pull/51

   Bumps [jsdom](https://github.com/jsdom/jsdom) from 16.2.2 to 16.7.0.

   Release notes

   Sourced from jsdom's releases: https://github.com/jsdom/jsdom/releases

   Version 16.7.0

   - Added AbortSignal.abort(). (ninevra)
   - Added dummy x and y properties to the return value of getBoundingClientRect(). (eiko)
   - Implemented wrapping for textareaEl.value if the wrap="" attribute is specified. (ninevra)
   - Changed newline normalization in