jamesge commented on issue #697: Large number of timeouts on the client
URL: https://github.com/apache/incubator-brpc/issues/697#issuecomment-477470737
This program does not look healthy. Have you run it under ASan? The coredump above does not look like the root cause; it looks more like a "victim" of memory that was corrupted elsewhere.
cuisonghui commented on issue #697: Large number of timeouts on the client
URL: https://github.com/apache/incubator-brpc/issues/697#issuecomment-477464038

The vars look normal, but only when clicking
GardianT commented on issue #649: Support consistent hashing under hotspot conditions?
URL: https://github.com/apache/incubator-brpc/issues/649#issuecomment-477451382
@tiankonguse has already explained it clearly.
1. The key of the traffic itself is the url.
2. When traffic bursts, the hotspots are not fixed, and neither is their number. Today a flood of urls may erupt under test.com; tomorrow it may be github.com. Under consistent hashing these are routing nodes, and each routing node is scaled out according to its actual traffic. Actually this
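A minimal sketch of one common workaround for hot keys under consistent hashing, in the spirit of the discussion above: a key detected as hot is salted so that it maps to several positions on the hash ring instead of one. The hotspot detection, the replica count, and the ring_lookup callback are assumptions made purely for illustration, not anything brpc provides.

    #include <cstdint>
    #include <cstdlib>
    #include <functional>
    #include <string>

    // Pick a backend for `url`. For keys known to be hot, append a random salt so
    // the same logical key fans out over `hot_replicas` positions on the hash ring.
    // `ring_lookup` stands in for an existing consistent-hash lookup by hash code.
    std::string PickNode(const std::string& url,
                         bool is_hot,        // assumed to come from some hotspot detector
                         int hot_replicas,   // e.g. 4
                         const std::function<std::string(uint64_t)>& ring_lookup) {
        std::hash<std::string> hasher;
        if (!is_hot) {
            return ring_lookup(hasher(url));
        }
        // Spread the hot key: requests for it scatter across several nodes,
        // at the cost of duplicating whatever state is cached for that key.
        const int salt = std::rand() % hot_replicas;
        return ring_lookup(hasher(url + "#" + std::to_string(salt)));
    }

This deliberately trades cache locality for load spreading, which is exactly the tension raised in this thread.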
tiankonguse commented on issue #649: Support consistent hashing under hotspot conditions?
URL: https://github.com/apache/incubator-brpc/issues/649#issuecomment-477448587
> The key (of the traffic itself) is the url, and the backend storage is globally ordered by this url. Writing the traffic into storage involves several dictionaries, which may be keyed per domain or per site. So if the servers use such urls as the range condition for sharding, the dictionary cache hit rate can be improved quite effectively. For example, a server a handles the range [a
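A minimal sketch of the range-sharding idea described in the quote: servers each own a contiguous range of the globally ordered url keyspace, and a request is routed by a binary search over the range boundaries, so urls from the same domain or site tend to land on the same server and hit its dictionary cache. The boundary layout and types below are illustrative assumptions only.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct RangeShard {
        std::string upper_bound;  // exclusive upper bound of the url range owned by `server`
        std::string server;
    };

    // `shards` must be sorted by upper_bound. Returns the shard whose range contains
    // `url`, or nullptr if the url is beyond the last boundary.
    const RangeShard* PickShard(const std::vector<RangeShard>& shards,
                                const std::string& url) {
        auto it = std::lower_bound(
            shards.begin(), shards.end(), url,
            [](const RangeShard& s, const std::string& key) { return s.upper_bound <= key; });
        return it == shards.end() ? nullptr : &*it;
    }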
zyearn commented on issue #516: Remaining h2c/grpc issues
URL: https://github.com/apache/incubator-brpc/issues/516#issuecomment-477436581
@Xavier1994 Setting it to the maximum makes this problem extremely unlikely to occur, but flow control then no longer has any effect. If client-side flow control is not very important in your scenario, you can increase it.
phye commented on issue #705: Slow 'Received request' problem
URL: https://github.com/apache/incubator-brpc/issues/705#issuecomment-477418824
I don't quite understand: if the service is severely congested, why is bthread_worker_usage only 50%? With plenty of workers available, receiving packets shouldn't be slow, should it?
hairet closed issue #706: Possible 32-bit overflow of ParkingLot _pending_signal in signal_task
URL: https://github.com/apache/incubator-brpc/issues/706
hairet commented on issue #706: Possible 32-bit overflow of ParkingLot _pending_signal in signal_task
URL: https://github.com/apache/incubator-brpc/issues/706#issuecomment-477417124
Resolved.
hairet commented on issue #706: Possible 32-bit overflow of ParkingLot _pending_signal in signal_task
URL: https://github.com/apache/incubator-brpc/issues/706#issuecomment-477417055
Oh, I had missed that: the fetch_add here always adds num_task << 1, so the counter can wrap around and be reused.
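A simplified sketch of the pattern under discussion (illustrative only, not the actual brpc code): the low bit of the 32-bit word carries the stop flag, producers always add num_task << 1, and a waiter only compares the word against a previously sampled value before sleeping, so wrap-around of the upper 31 bits is harmless in practice.

    #include <atomic>
    #include <cstdint>

    // Simplified stand-in for a ParkingLot-style 32-bit futex word.
    class ParkingLotSketch {
    public:
        // Sample the current state before deciding whether to sleep.
        int32_t GetState() const { return _pending_signal.load(std::memory_order_acquire); }

        // Producers add num_task << 1: bit 0 (the stop flag) is never touched and
        // the upper 31 bits may freely wrap around.
        void Signal(int num_task) {
            _pending_signal.fetch_add(num_task << 1, std::memory_order_release);
            // futex_wake(&_pending_signal, num_task) would follow here.
        }

        // A waiter goes to sleep only if the word still equals the sampled state;
        // any signal in between changes the value, so wake-ups are not lost.
        bool ShouldSleep(int32_t sampled_state) const {
            // futex_wait(&_pending_signal, sampled_state) performs the same check atomically.
            return GetState() == sampled_state;
        }

        void Stop() { _pending_signal.fetch_or(1, std::memory_order_release); }

    private:
        std::atomic<int32_t> _pending_signal{0};
    };

The only theoretical hazard is an ABA case where exactly 2^31 signals arrive between GetState() and the wait, which is not reachable in practice.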
Thanks, zhangyi,
Anyone else who wants to volunteer to take some tasks?
On 2019/3/27 at 10:17 PM, "Zhangyi Chen" wrote:
I would like to take 1 and 2.
On Tue, Mar 26, 2019 at 2:25 PM tan zhongyi wrote:
> Hi, all,
>
> We need to assign tasks for our first release.
> Anyone who
Hi, James,
How about this suggestion?
Using apr's time instead of nspr's time
Thanks,
On 2019/3/28 at 2:23 AM, "Dave Fisher" wrote:
Hi -
Have a look at the Apache Portable Runtime’s time routines.
https://apr.apache.org/docs/apr/1.5/group__apr__time.html
Regards,
Dav
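For reference, a minimal sketch of what the suggested swap could look like in the simplest case: nspr's PR_Now() and APR's apr_time_now() both report microseconds since the Unix epoch, so a thin wrapper is enough for callers that only need the current time. This assumes the build links against libapr; the prtime parsing/formatting helpers would need their own APR counterparts.

    #include <apr_time.h>  // apr_time_now(), apr_time_t: microseconds since the epoch
    #include <cstdint>

    // Stand-in for a PR_Now()-style call: both APIs count microseconds
    // since 1970-01-01 00:00:00 UTC.
    int64_t NowMicroseconds() {
        return static_cast<int64_t>(apr_time_now());
    }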
Hi -
Have a look at the Apache Portable Runtime’s time routines.
https://apr.apache.org/docs/apr/1.5/group__apr__time.html
Regards,
Dave
Sent from my iPhone
> On Mar 27, 2019, at 1:55 AM, tan zhongyi wrote:
>
> Hi, guys,
>
> Here is one issue that may block our first apache release
>
> I
I would like to take 1 and 2.
On Tue, Mar 26, 2019 at 2:25 PM tan zhongyi wrote:
> Hi, all,
>
> We need to assign tasks for our first release.
> Anyone volunteering to take one?
> Here is the task list; you can reply to it with the task you are interested in,
> Thanks
>
>
> Here is the task list:
>
jamesge commented on issue #704: [grpc-java-client] grpc java client call brpc
server get INTERNAL: HTTP/2 error code: FLOW_CONTROL_ERROR
URL: https://github.com/apache/incubator-brpc/issues/704#issuecomment-477153899
Did the RPC succeed when there's only one thread?
jamesge commented on issue #705: Slow 'Received request' problem
URL: https://github.com/apache/incubator-brpc/issues/705#issuecomment-477152424
This service is already severely congested. Look for the cause of the congestion first, e.g. locks, resource limits, etc.
hairet opened a new issue #706: Possible 32-bit overflow of ParkingLot _pending_signal in signal_task
URL: https://github.com/apache/incubator-brpc/issues/706
**Describe the bug (描述bug)**
Recently I needed to do some customization of bthread scheduling, and I noticed that ParkingLot's _pending_signal is only 32 bits. Although TaskControl holds 4 ParkingLots, can this be understood as: as long as there are 4*2^31 bthread creations + sig
TousakaRin commented on a change in pull request #694: Health check by rpc call
URL: https://github.com/apache/incubator-brpc/pull/694#discussion_r269543046
##
File path: src/brpc/details/health_check.cpp
##
@@ -0,0 +1,246 @@
+// Copyright (c) 2014 Baidu, Inc.
+//
+// Lice
TousakaRin commented on a change in pull request #694: Health check by rpc call
URL: https://github.com/apache/incubator-brpc/pull/694#discussion_r269541108
##
File path: src/brpc/controller.cpp
##
@@ -996,7 +996,7 @@ void Controller::IssueRPC(int64_t start_realtime_us) {
cuisonghui commented on issue #697: Large number of timeouts on the client
URL: https://github.com/apache/incubator-brpc/issues/697#issuecomment-477115220
We have now pinned down the core dump that happens when accessing vars.
... is far below bthread_worker_count (24); process_cpu_usage is around 11.7, also far below system_core_count (48). Yet the server-side QPS cannot be pushed any higher, and rpcz shows that requests are received very slowly. I'd like to ask what this phenomenon
Xavier1994 commented on issue #516: Remaining h2c/grpc issues
URL: https://github.com/apache/incubator-brpc/issues/516#issuecomment-477100782
@jamesge I ran into the problem mentioned above: "when the h2 response body length exceeds the remote window size, the current approach is to return RST_STREAM directly; a better approach would be to wait until a WINDOW_UPDATE is received before sending, just like TCP".
Can this problem be worked around on the grpc-java client side by increasing the window size
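A minimal illustrative sketch (not brpc's actual code) of the two sender-side behaviors being contrasted for HTTP/2 stream flow control: resetting the stream when a payload does not fit the peer's window, versus parking the payload until a WINDOW_UPDATE arrives. Names and types are assumptions for illustration.

    #include <cstdint>
    #include <deque>
    #include <string>

    struct H2StreamSketch {
        int64_t remote_window = 65535;    // stream window advertised by the peer
        std::deque<std::string> blocked;  // DATA payloads waiting for WINDOW_UPDATE

        // Behavior described in the issue: give up immediately when the payload
        // does not fit; the grpc-java peer then observes a FLOW_CONTROL_ERROR.
        bool SendOrReset(const std::string& payload) {
            if (static_cast<int64_t>(payload.size()) > remote_window) {
                // send RST_STREAM ...
                return false;
            }
            remote_window -= payload.size();
            // write DATA frame ...
            return true;
        }

        // Suggested behavior: queue the payload and resume after WINDOW_UPDATE,
        // the way TCP handles its send window.
        void SendOrWait(const std::string& payload) {
            if (static_cast<int64_t>(payload.size()) > remote_window) {
                blocked.push_back(payload);
                return;
            }
            remote_window -= payload.size();
            // write DATA frame ...
        }

        void OnWindowUpdate(int64_t increment) {
            remote_window += increment;
            while (!blocked.empty() &&
                   static_cast<int64_t>(blocked.front().size()) <= remote_window) {
                remote_window -= blocked.front().size();
                // write DATA frame ...
                blocked.pop_front();
            }
        }
    };

Raising the window on the client side enlarges remote_window, which is why the failure becomes rare but flow control effectively disappears, as noted in zyearn's reply above.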
zyearn commented on a change in pull request #694: Health check by rpc call
URL: https://github.com/apache/incubator-brpc/pull/694#discussion_r269507489
##
File path: src/brpc/socket.cpp
##
@@ -881,7 +861,7 @@ int Socket::SetFailed(int error_code, const char*
error_fmt, ..
jamesge commented on a change in pull request #694: Health check by rpc call
URL: https://github.com/apache/incubator-brpc/pull/694#discussion_r269483082
##
File path: src/brpc/socket.cpp
##
@@ -881,7 +861,7 @@ int Socket::SetFailed(int error_code, const char*
error_fmt, .
Xavier1994 opened a new issue #704: [grpc-java-client] grpc java client call
brpc server get INTERNAL: HTTP/2 error code: FLOW_CONTROL_ERROR
URL: https://github.com/apache/incubator-brpc/issues/704
**Describe the bug (描述bug)**
I use a grpc-java client to call a brpc server, and I just use
Hi, guys,
Here is one issue that may block our first apache release
I checked the code and noticed we are using nspr:
/src/butil/third_party/nspr/prtime.h
/src/butil/third_party/nspr/prtime.cc
/src/butil/third_party/nspr/LICENSE
From their HEAD, we know that
they are dual-licensed: GPL-2.0+, LGPL-2
GardianT commented on issue #649: Support consistent hashing under hotspot conditions?
URL: https://github.com/apache/incubator-brpc/issues/649#issuecomment-477010336
@jamesge
By the way, one more question: in this kind of scenario, is there a chance of, or a reasonably mature solution for, letting the servers implement adaptive load balancing themselves?
For example, redis has the concept of slots: the server nodes communicate with each other about which slots each server is responsible for, and when a node fails its slots are redistributed, something like that.
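A minimal sketch of the slot idea mentioned above, purely for illustration: keys hash into a fixed number of slots, each slot has an owning server, and a failed node's slots are handed to the survivors (in a real system such as redis cluster this reassignment is agreed on through inter-node communication).

    #include <functional>
    #include <string>
    #include <vector>

    constexpr int kNumSlots = 16384;  // redis cluster uses 16384 slots

    struct SlotTable {
        std::vector<std::string> owner = std::vector<std::string>(kNumSlots);  // slot -> server

        // std::hash stands in for redis' CRC16(key) % 16384.
        std::string& OwnerOfKey(const std::string& key) {
            return owner[std::hash<std::string>{}(key) % kNumSlots];
        }

        // Hand the slots of a failed node to the remaining nodes round-robin.
        void Reassign(const std::string& dead, const std::vector<std::string>& alive) {
            size_t next = 0;
            for (std::string& node : owner) {
                if (node == dead) {
                    node = alive[next++ % alive.size()];
                }
            }
        }
    };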
raylinwu opened a new issue #703: Cannot start the dummy server when using the bRPC client
URL: https://github.com/apache/incubator-brpc/issues/703
**Describe the bug (描述bug)**
The server uses bRPC to communicate with other gRPC services. In the bin directory of the running process:
echo 8090 > dummy_server.port
netstat -anlp | grep 8090 shows no listening port
**To Reproduce (复现方法)*
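For reference, a minimal sketch of starting the dummy server programmatically from a client-only process, as an alternative to the file-based trigger above; it assumes brpc::StartDummyServerAt() is available in the build and reuses the port from the report.

    #include <brpc/server.h>  // brpc::StartDummyServerAt()
    #include <cstdio>

    int main() {
        // Exposes the builtin services (/vars, /flags, rpcz, ...) even though
        // this process only acts as a client.
        if (brpc::StartDummyServerAt(8090) != 0) {
            fprintf(stderr, "Fail to start dummy server at port 8090\n");
            return -1;
        }
        // ... client logic; the dummy server stays up for the process lifetime.
        return 0;
    }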
GardianT commented on issue #649: Support consistent hashing under hotspot conditions?
URL: https://github.com/apache/incubator-brpc/issues/649#issuecomment-477005672
> @GardianT
The situation you describe is not something consistent hashing can solve. One of the preconditions for scenarios where consistent hashing applies is that the client decides what data a server loads; here, however, the server decides for itself what to load. A similar scenario is machine-learning model serving. In such scenarios a highly available module that records which data lives where cannot be avoided; it is usually called a Meta Server or