Found some more info, sorry for the chopping.

Are you using *bigqueryio* or *bigquery_tools* somehow?
If so, bigquery_tools defines a histogram using 20 buckets of 3 seconds
each (i.e. a 60,000 ms upper bound, matching the value in your warning)
to export latencies (see
https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery_tools.py#L347-L351
).

Beyond requests with latency >60 s not being recorded (and the warning
logs you are seeing), this shouldn't have any impact.
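To make the behavior concrete, here is a minimal standalone sketch of a
linear-bucket histogram like the one described above (this is an
illustrative stand-in, not Beam's actual implementation; the class name
and fields are hypothetical, only the warning text mirrors the log you
quoted):

```python
import logging

class LinearHistogram:
    """Sketch of a linear-bucket latency histogram: num_buckets buckets
    of bucket_width_ms each; values at or above the upper bound are not
    recorded and instead produce a warning."""

    def __init__(self, start_ms, bucket_width_ms, num_buckets):
        self.start = start_ms
        self.width = bucket_width_ms
        self.upper_bound = start_ms + bucket_width_ms * num_buckets
        self.buckets = [0] * num_buckets

    def record(self, value_ms):
        if value_ms >= self.upper_bound:
            # Out-of-range values are dropped, not clamped.
            logging.warning(
                'record is out of upper bound %s: %s',
                self.upper_bound, value_ms)
            return
        if value_ms < self.start:
            logging.warning(
                'record is out of lower bound %s: %s',
                self.start, value_ms)
            return
        self.buckets[(value_ms - self.start) // self.width] += 1

# 20 buckets of 3 seconds (3,000 ms) each -> upper bound 60,000 ms.
hist = LinearHistogram(0, 3000, 20)
hist.record(4500)   # lands in bucket 1 (3,000-5,999 ms)
hist.record(75000)  # >60 s: warned and dropped, like the log you see
```

So a 5- or 6-digit value in that warning is just a request latency in
milliseconds that exceeded the histogram's 60-second range.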


On Fri, Mar 3, 2023 at 1:31 PM Bruno Volpato <bvolp...@google.com> wrote:

> Hi Nick,
>
> This seems to come from utils/histogram.py
> <https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/histogram.py#L75-L89>.
> Any chance that you are initializing it in a way that defines bounds up to
> 60,000 but invoking record() with an out-of-bounds value?
>
> Best,
> Bruno
>
> On Fri, Mar 3, 2023 at 1:26 PM Nick Edwards <nwedweards....@gmail.com>
> wrote:
>
>> Hello! I’m investigating a warning we’re receiving through Google Cloud’s
>> Monitoring. The warning reads "record is out of upper bound 60000: " and
>> then includes a 5 or 6 digit integer. I’m having trouble finding any info
>> on this sort of behavior. Any insight or assistance you can offer is
>> greatly appreciated!
>>
>
