[
https://issues.apache.org/jira/browse/BEAM-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17512006#comment-17512006
]
Rahul Iyer commented on BEAM-14146:
-----------------------------------
{quote}
Are you able to unblock by using the STREAMING_INSERTS method?
{quote}
No, we want to use
{code}
"schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
{code}
and I believe this is not supported while using {{STREAMING_INSERTS}}.
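For context: in the BigQuery REST API, {{schemaUpdateOptions}} is a field of the load-job configuration ({{jobs.insert}}), and the streaming endpoint ({{tabledata.insertAll}}) has no equivalent field, which is why the streaming path cannot honor it. A minimal sketch of where the option lives (an illustrative plain dict mirroring the request-body shape, not Beam's internals):
{code:python}
# Illustrative only: a dict mirroring the shape of a BigQuery load-job
# request body. "schemaUpdateOptions" belongs to the load configuration;
# the streaming path (tabledata.insertAll) has no counterpart.
def load_job_config(source_uris, destination_table):
    return {
        "configuration": {
            "load": {
                "sourceUris": source_uris,
                "destinationTable": destination_table,
                "writeDisposition": "WRITE_APPEND",
                "schemaUpdateOptions": [
                    "ALLOW_FIELD_ADDITION",
                    "ALLOW_FIELD_RELAXATION",
                ],
            }
        }
    }
{code}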
{quote}
Also, does this occur consistently ?
{quote}
I believe so, yes.
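For reference, the {{ValueError}} in the issue comes from an input-validation guard in {{bigquery_tools.perform_load_job}}: a load job needs exactly one input source, and a drain that produces neither trips the check. A hypothetical simplification of that guard (names and structure assumed, not Beam's actual code):
{code:python}
# Hypothetical simplification of the guard in
# apache_beam.io.gcp.bigquery_tools.perform_load_job (names assumed):
# a load job must be given either non-empty source URIs or a stream.
def check_load_job_inputs(source_uris=None, source_stream=None):
    if not source_uris and source_stream is None:
        raise ValueError(
            'Either a non-empty list of fully-qualified source URIs must be '
            'provided via the source_uris parameter or an open file object '
            'must be provided via the source_stream parameter.')
{code}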
> Python Streaming job failing to drain with BigQueryIO write errors
> ------------------------------------------------------------------
>
> Key: BEAM-14146
> URL: https://issues.apache.org/jira/browse/BEAM-14146
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp, sdk-py-core
> Affects Versions: 2.37.0
> Reporter: Rahul Iyer
> Priority: P2
>
> We have a Python Streaming Dataflow job that writes to BigQuery using the
> {{FILE_LOADS}} method and {{auto_sharding}} enabled. When we try to drain the
> job, it fails with the following error:
> {code:python}
> File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/bigquery_tools.py", line 1000, in perform_load_job
> ValueError: Either a non-empty list of fully-qualified source URIs must be
> provided via the source_uris parameter or an open file object must be
> provided via the source_stream parameter.
> {code}
> Our {{WriteToBigQuery}} configuration:
> {code:python}
> beam.io.WriteToBigQuery(
>     table=options.output_table,
>     schema=bq_schema,
>     create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
>     write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
>     insert_retry_strategy=RetryStrategy.RETRY_ON_TRANSIENT_ERROR,
>     method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>     additional_bq_parameters={
>         "timePartitioning": {
>             "type": "HOUR",
>             "field": "bq_insert_timestamp",
>         },
>         "schemaUpdateOptions": ["ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"],
>     },
>     triggering_frequency=120,
>     with_auto_sharding=True,
> )
> {code}
> We have also noticed that the job only fails to drain when there are actual
> schema updates; if there are no schema updates, the job drains without the
> above error.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)