LGTM. We can remove the `google_` prefix from the configuration fields, since
the plugin name already makes it clear.

JinChao Shuai <[email protected]> wrote on Thu, Nov 11, 2021 at 11:42 AM:
>
> ----
> Configuration:
>
> {
>     "google_auth_config": {
>         "private_key": "***",
>         "client_email": "***",
>         "project_id": "***",
>         "token_uri": "***"
>     },
>     "google_auth_file": "/path/to/google-service-account.json",
>     "resource": {
>         "type": "global"
>     },
>     "log_id": "cloudresourcemanager.googleapis.com%2Factivity",
>     "batch_max_size": 200,
>     "batch_timeout": 10
> }
>
> // google_auth_config              // the Google service account config (semi-optional; one of `google_auth_config` or `google_auth_file` must be configured)
> // google_auth_config.private_key  // the private key of the Google service account
> // google_auth_config.client_email // the client email of the Google service account
> // google_auth_config.project_id   // the project id of the Google service account
> // google_auth_config.token_uri    // the token uri of the Google service account
> // google_auth_file                // path to the Google service account JSON file (semi-optional; one of `google_auth_config` or `google_auth_file` must be configured)
> // resource                        // the Google monitored resource, refer to:
> https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource
> // log_id                          // the Google logging id, refer to:
> https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
> // batch_max_size                  // maximum number of entries held in the batch queue before they are sent
> // batch_timeout                   // maximum time entries are held in the batch queue before they are sent
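>
> For illustration, enabling the plugin on a route might look roughly like the
> following (the plugin name `google-cloud-logging` is only a working name for
> this proposal, and the route definition here is a placeholder):
>
> {
>     "plugins": {
>         "google-cloud-logging": {
>             "google_auth_file": "/path/to/google-service-account.json",
>             "resource": {
>                 "type": "global"
>             },
>             "log_id": "cloudresourcemanager.googleapis.com%2Factivity",
>             "batch_max_size": 200,
>             "batch_timeout": 10
>         }
>     },
>     "uri": "/hello"
> }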
>
>
> ----
> Details:
>
> 1. Obtain and assemble the request information in the APISIX log phase.
> 2. On the first interaction with the Google Cloud Logging service, request
> an access token. Once obtained, the token is cached in the memory of the
> worker node.
> 3. After obtaining a valid token, put the request information into the
> batch processing queue. When the queue reaches the batch_max_size or
> batch_timeout threshold, the data in the queue is sent to the Google Cloud
> Logging service.
> 4. Before each request is sent, check whether the token is about to expire,
> and refresh it if so.
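>
> To make the token handling concrete, a minimal Lua sketch of steps 2-4
> follows. It only illustrates the control flow; `fetch_token` and
> `send_to_google_logging` are placeholder names rather than the final
> implementation, and the real plugin would presumably reuse APISIX's existing
> batch processor utilities instead of the hand-rolled queue shown here.
>
> -- cached token and its absolute expiry time, kept per worker process
> local token_cache = { access_token = nil, expires_at = 0 }
> local REFRESH_AHEAD = 60   -- refresh this many seconds before expiry
>
> -- placeholder: exchange the service account credentials for an access
> -- token, returning the token and its lifetime in seconds
> local function fetch_token()
>     return "dummy-token", 3600
> end
>
> -- placeholder: write a batch of entries to the Google Cloud Logging API
> local function send_to_google_logging(token, entries)
>     print(("sending %d entries with token %s"):format(#entries, token))
> end
>
> -- return a valid token, refreshing it if it is missing or about to expire
> local function get_token()
>     if not token_cache.access_token
>        or os.time() >= token_cache.expires_at - REFRESH_AHEAD then
>         local token, ttl = fetch_token()
>         token_cache.access_token = token
>         token_cache.expires_at = os.time() + ttl
>     end
>     return token_cache.access_token
> end
>
> -- simple batch queue: flush when batch_max_size or batch_timeout is reached
> local batch_max_size, batch_timeout = 200, 10
> local queue, last_flush = {}, os.time()
>
> local function push_entry(entry)
>     queue[#queue + 1] = entry
>     if #queue >= batch_max_size
>        or os.time() - last_flush >= batch_timeout then
>         send_to_google_logging(get_token(), queue)
>         queue, last_flush = {}, os.time()
>     end
> end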
>
> Zhiyuan Ju <[email protected]> wrote on Thu, Nov 11, 2021 at 10:01 AM:
>
> > Yes, kindly let us know when the proposal details are updated :)
> >
> > Best Regards!
> > @ Zhiyuan Ju <https://github.com/juzhiyuan>
> >
> >
> > Zexuan Luo <[email protected]> wrote on Wed, Nov 10, 2021 at 6:30 PM:
> >
> > > Yes. But where are the configuration and technical details of the
> > > feature? It seems like a feature request instead of a proposal.
> > >
> > > JinChao Shuai <[email protected]> wrote on Wed, Nov 10, 2021 at 5:03 PM:
> > > >
> > > > Hi, community.
> > > >
> > > > Currently, Alibaba Cloud is the only cloud service provider to which
> > > > APISIX can ship logs for storage and analysis.
> > > >
> > > > As one of the world's largest cloud service providers, Google has a
> > > > very large user base.
> > > >
> > > > Therefore, I propose that APISIX support synchronizing logs to Google
> > > > Cloud Logging via a plugin, which would not only meet users' diverse
> > > > log storage and analysis needs but also enrich the APISIX ecosystem.
> > > >
> > > >
> > > >
> > > > This is my proposal [1]; developers are welcome to join the discussion
> > > > with suggestions and questions.
> > > >
> > > > [1] https://github.com/apache/apisix/issues/5474
> > > >
> > > > --
> > > > Thanks,
> > > > Janko
> > >
> >
>
>
> --
> Thanks,
> Janko
