zhangyachen opened a new issue, #2657: URL: https://github.com/apache/brpc/issues/2657
**Is your feature request related to a problem? (你需要的功能是否与某个问题有关?)**

I am migrating from gRPC to brpc and would like to ask about how to use an asynchronous server in brpc.

### Background

The rpc server interface we need to implement calls an asynchronous interface, async(), which returns immediately. When the logic behind async() completes, it invokes a callback that was registered in advance. The time from calling async() to the callback firing is roughly 10ms.

**Describe the solution you'd like (描述你期望的解决方法)**

In gRPC we handled this asynchronously with the ServerCompletionQueue. My question is: in brpc, when an asynchronous service implementation internally calls another asynchronous interface such as async(), is there a best practice?

**What I came up with is to block on future.get() after calling async() and call set_value in the callback. Does this approach have any performance problems or other caveats inside a brpc asynchronous service?** Thanks.

**Describe alternatives you've considered (描述你想到的折衷方案)**

**Additional context/screenshots (更多上下文/截图)**

Simple pseudocode of the planned implementation; the main logic is in AsyncInferJob::run():

```cpp
struct AsyncInferJob {
    brpc::Controller* cntl;
    const helloworld::HelloRequest* request;
    helloworld::HelloReply* response;
    google::protobuf::Closure* done;
    TRITONSERVER_Server* server;
    TRITONSERVER_ResponseAllocator* allocator;

    void run();
    void run_and_delete() {
        run();
        delete this;
    }
};

static void* process_thread(void* args) {
    AsyncInferJob* job = static_cast<AsyncInferJob*>(args);
    job->run_and_delete();
    return NULL;
}

class GreeterServiceImpl : public helloworld::Greeter {
public:
    GreeterServiceImpl(TRITONSERVER_Server* server,
                       TRITONSERVER_ResponseAllocator* allocator)
        : server_(server), allocator_(allocator) {}

    void SayHello(google::protobuf::RpcController* cntl_base,
                  const helloworld::HelloRequest* request,
                  helloworld::HelloReply* response,
                  google::protobuf::Closure* done) override {
        brpc::ClosureGuard done_guard(done);
        brpc::Controller* cntl = static_cast<brpc::Controller*>(cntl_base);

        // Process the request asynchronously.
        AsyncInferJob* job = new AsyncInferJob;
        job->cntl = cntl;
        job->request = request;
        job->response = response;
        job->done = done;
        job->server = server_;
        job->allocator = allocator_;

        bthread_t th;
        CHECK_EQ(0, bthread_start_background(&th, NULL, process_thread, job));

        // We don't want to call done->Run() here; release the guard.
        done_guard.release();
    }

private:
    TRITONSERVER_Server* server_;
    TRITONSERVER_ResponseAllocator* allocator_;
};

void AsyncInferJob::run() {
    brpc::ClosureGuard done_guard(done);

    // 1. Synchronize on the response with a promise.
    auto promise = new std::promise<TRITONSERVER_InferenceResponse*>();
    std::future<TRITONSERVER_InferenceResponse*> future = promise->get_future();

    // 2. Set the callback (irequest is constructed elsewhere; omitted here).
    TRITONSERVER_InferenceRequestSetResponseCallback(
        irequest, allocator, nullptr,
        [](TRITONSERVER_InferenceResponse* response, const uint32_t flags,
           void* userp) {
            auto p = reinterpret_cast<
                std::promise<TRITONSERVER_InferenceResponse*>*>(userp);
            p->set_value(response);
        },
        reinterpret_cast<void*>(promise));

    // 3. Call the asynchronous interface.
    TRITONSERVER_ServerInferAsync(server, irequest, nullptr);

    // 4. Wait for the response.
    TRITONSERVER_InferenceResponse* completed_response = future.get();
    delete promise;
}
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@brpc.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org