ehurheap commented on issue #9807:
URL: https://github.com/apache/hudi/issues/9807#issuecomment-1745464144
Yes. I can write a DataFrame to the same table, for example:
```scala
data.write
.format("org.apache.hudi.Spark32PlusDefaultSource")
.options(writeWithLocking)
.mode("append")
.save(tablePath)
```
where the `writeWithLocking` options are:
```
(hoodie.bulkinsert.shuffle.parallelism,2)
(hoodie.bulkinsert.sort.mode,NONE)
(hoodie.clean.async,false)
(hoodie.clean.automatic,false)
(hoodie.cleaner.policy.failed.writes,LAZY)
(hoodie.combine.before.insert,false)
(hoodie.compact.inline,false)
(hoodie.compact.schedule.inline,false)
(hoodie.datasource.compaction.async.enable,false)
(hoodie.datasource.write.hive_style_partitioning,true)
(hoodie.datasource.write.keygenerator.class,org.apache.spark.sql.hudi.command.UuidKeyGenerator)
(hoodie.datasource.write.operation,bulk_insert)
(hoodie.datasource.write.partitionpath.field,env_id,week)
(hoodie.datasource.write.precombine.field,schematized_at)
(hoodie.datasource.write.recordkey.field,env_id,user_id)
(hoodie.datasource.write.row.writer.enable,false)
(hoodie.datasource.write.table.type,MERGE_ON_READ)
(hoodie.metadata.enable,false)
(hoodie.table.name,users_changes)
(hoodie.write.concurrency.mode,OPTIMISTIC_CONCURRENCY_CONTROL)
(hoodie.write.lock.dynamodb.endpoint_url,http://localhost:8000)
(hoodie.write.lock.dynamodb.partition_key,users_changes-us-east-1-local)
(hoodie.write.lock.dynamodb.region,us-east-1)
(hoodie.write.lock.dynamodb.table,datalake-locks)
(hoodie.write.lock.provider,org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider)
```
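For reference, here is a sketch of the same options assembled as a Scala `Map[String, String]`, in the shape the snippet above passes to `DataFrameWriter.options(...)`. This just restates the printed list verbatim; nothing beyond those values is assumed.

```scala
// Hudi write options with DynamoDB-based optimistic concurrency control,
// copied verbatim from the configuration dump above.
val writeWithLocking: Map[String, String] = Map(
  "hoodie.bulkinsert.shuffle.parallelism" -> "2",
  "hoodie.bulkinsert.sort.mode" -> "NONE",
  "hoodie.clean.async" -> "false",
  "hoodie.clean.automatic" -> "false",
  "hoodie.cleaner.policy.failed.writes" -> "LAZY",
  "hoodie.combine.before.insert" -> "false",
  "hoodie.compact.inline" -> "false",
  "hoodie.compact.schedule.inline" -> "false",
  "hoodie.datasource.compaction.async.enable" -> "false",
  "hoodie.datasource.write.hive_style_partitioning" -> "true",
  "hoodie.datasource.write.keygenerator.class" -> "org.apache.spark.sql.hudi.command.UuidKeyGenerator",
  "hoodie.datasource.write.operation" -> "bulk_insert",
  "hoodie.datasource.write.partitionpath.field" -> "env_id,week",
  "hoodie.datasource.write.precombine.field" -> "schematized_at",
  "hoodie.datasource.write.recordkey.field" -> "env_id,user_id",
  "hoodie.datasource.write.row.writer.enable" -> "false",
  "hoodie.datasource.write.table.type" -> "MERGE_ON_READ",
  "hoodie.metadata.enable" -> "false",
  "hoodie.table.name" -> "users_changes",
  "hoodie.write.concurrency.mode" -> "OPTIMISTIC_CONCURRENCY_CONTROL",
  "hoodie.write.lock.dynamodb.endpoint_url" -> "http://localhost:8000",
  "hoodie.write.lock.dynamodb.partition_key" -> "users_changes-us-east-1-local",
  "hoodie.write.lock.dynamodb.region" -> "us-east-1",
  "hoodie.write.lock.dynamodb.table" -> "datalake-locks",
  "hoodie.write.lock.provider" -> "org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider"
)
```

Note that `hoodie.cleaner.policy.failed.writes=LAZY` together with `OPTIMISTIC_CONCURRENCY_CONTROL` is the combination Hudi expects for multi-writer setups.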
These same locking configs are used in our production ingestion, which writes to Hudi via Spark Structured Streaming without error.