pan3793 commented on a change in pull request #4694:
URL: https://github.com/apache/hudi/pull/4694#discussion_r793532000
##########
File path: website/releases/release-0.10.1.md
##########
@@ -0,0 +1,64 @@
+---
+title: "Release 0.10.1"
+sidebar_position: 2
+layout: releases
+toc: true
+last_modified_at: 2022-01-26T22:07:00+08:00
+---
+# [Release 0.10.1](https://github.com/apache/hudi/releases/tag/release-0.10.1) ([docs](/docs/quick-start-guide))
+
+## Migration Guide
+
+* This release (0.10.1) does not introduce any new table version, hence no migration needed if you are on 0.10.0.
+* If migrating from an older release, please check the migration guide from the previous release notes, specifically the upgrade instructions in 0.6.0, 0.9.0 and 0.10.0.
+
+## Release Highlights
+
+### Explicit Spark 3 bundle names
+
+In the previous release (0.10.0), we added Spark 3.1.x support and made it the default Spark 3 version to build with. In 0.10.1,
+we made the Spark 3 version explicit in the bundle name and published a new bundle for Spark 3.0.x. Specifically, these 2 bundles
+are available in the public maven repository.
+
+* `hudi-spark3.1.2-bundle_2.12-0.10.1.jar`
+* `hudi-spark3.0.3-bundle_2.12-0.10.1.jar`

Review comment:
       I did not participate in the previous discussion, but this looks like a bit of overkill. From a user's perspective (mine, for example), it is confusing whether Hudi supports only the **exact** patch version of Spark; if so, that would be too strict for users. Spark usually keeps good API stability across patch versions, so wouldn't a `major.minor` version suffix, e.g. `*-spark3.0-*`, `*-spark3.1-*`, be sufficient?
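To make the trade-off concrete, here is a minimal sbt sketch of what each naming scheme looks like from a downstream build. The first coordinate is the 0.10.1 bundle listed in the release notes above; the `major.minor` variant is hypothetical and not a published 0.10.1 artifact.

```scala
// build.sbt -- a minimal sketch of how a downstream project picks up the bundle,
// assuming a plain sbt build resolving from Maven Central.

// With the patch-level naming published in 0.10.1, the artifact ID encodes the
// exact Spark patch version the bundle was built against:
libraryDependencies += "org.apache.hudi" % "hudi-spark3.1.2-bundle_2.12" % "0.10.1"

// Under the major.minor scheme suggested above, the artifact ID would only encode
// the Spark line (hypothetical name; no such 0.10.1 artifact is published):
// libraryDependencies += "org.apache.hudi" % "hudi-spark3.1-bundle_2.12" % "0.10.1"
```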