+1 (non-binding)
* Checked Signature Keys
* Verified Checksum
* RAT checks
* Build and run tests; most functionality passes (though some timeout
errors on Hive MR)
Thanks
Szehon
On Tue, Aug 10, 2021 at 1:40 AM Ryan Murray wrote:
> +1 (non-binding)
>
> * Verify Signature Keys
> * Verify Checksum
> * dev
Thanks Russell.
I tried:
/spark/bin/spark-shell \
  --packages org.apache.iceberg:iceberg-hive-runtime:0.11.1,org.apache.iceberg:iceberg-spark3-runtime:0.11.1 \
  --conf spark.sql.catalog.hive_test=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.hive_test.type=hive
import org.apache.spark
Specify a "location" property when creating the table: just add
.option("location", "path") to the writer.
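Following that suggestion, a minimal sketch of what this could look like with the DataFrame API. The table name, namespace, and HDFS path here are hypothetical, and it assumes the hive_test catalog configured in the spark-shell command above; depending on the Spark/Iceberg versions, the location may need to be set with .tableProperty("location", ...) on DataFrameWriterV2 instead of .option:

```scala
import spark.implicits._

// Hypothetical example data
val df = Seq((1, "a"), (2, "b")).toDF("id", "data")

// Create an Iceberg table at an explicit HDFS location
// using the Spark 3 DataFrameWriterV2 API (catalog and path are assumptions)
df.writeTo("hive_test.db.sample")
  .using("iceberg")
  .option("location", "hdfs://namenode:8020/warehouse/db/sample") // hypothetical path
  .create()
```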
> On Aug 10, 2021, at 11:15 AM, Lian Jiang wrote:
>
> Thanks Russell. This helps a lot.
>
> I want to specify a HDFS location when creating an iceberg dataset using
> dataframe api. All examples
Thanks Russell. This helps a lot.
I want to specify an HDFS location when creating an Iceberg dataset using
the DataFrame API. All the examples using a warehouse location are SQL. Do
you have an example for the DataFrame API? For example, how do I support an
HDFS/S3 location in the query below? The reason I ask is th
+1 (non-binding)
* Verify Signature Keys
* Verify Checksum
* dev/check-license
* Build
* Run tests (though some timeout failures on the Hive MR tests)
* Ran with Nessie in Spark 3.1 and 3.0
On Tue, Aug 10, 2021 at 4:21 AM Carl Steinbach wrote:
> Hi Everyone,
>
> I propose the following RC to be r