Please ignore my email below. The code WORKS!
I had deployed the wrong version on Spark.
Apologies for the confusion.
-Sandeep
From: Sandeep Sagar
Date: Friday, October 25, 2019 at 5:37 PM
To: "dev@iceberg.apache.org"
Subject: Partition field not being utilized in query
Hi
A Spark SQL query seems to be doing a full table scan instead of pruning partitions
in Iceberg.
I have created a partition spec as follows:
public PartitionSpec getPartitionSpec() {
PartitionSpec.Builder icebergBuilder = PartitionSpec.builderFor(getSchema());
icebergBuilder.hour(FIELD_N
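For comparison, a complete spec along those lines might look like the sketch below. The schema and the column name `event_ts` are hypothetical stand-ins for the truncated field name above:

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class PartitionSpecExample {
    // Builds an hour-partitioned spec over a hypothetical schema;
    // "event_ts" stands in for the truncated field name in the thread.
    static PartitionSpec buildSpec() {
        Schema schema = new Schema(
            Types.NestedField.required(1, "id", Types.LongType.get()),
            Types.NestedField.required(2, "event_ts", Types.TimestampType.withZone()));
        return PartitionSpec.builderFor(schema)
            .hour("event_ts")
            .build();
    }

    public static void main(String[] args) {
        System.out.println(buildSpec());
    }
}
```

Note that Iceberg hides the hour transform from queries: for Spark to prune partitions, the filter should be on the source timestamp column (`event_ts` here), and Iceberg maps that predicate onto the hour partition values.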
One approach is to extend BaseTable and HadoopTableOperations classes.
public static class IcebergTable extends BaseTable {
private final IcebergTableOperations ops;
private IcebergTable(IcebergTableOperations ops, String name) {
super(ops, name);
this.ops = ops;
}
IcebergTableO
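A possible completion of that sketch is below. The constructor wiring is an assumption, and the `HadoopTableOperations` superclass constructor arguments have varied across Iceberg versions, so treat this as an illustration of the approach rather than a drop-in implementation:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.iceberg.BaseTable;
import org.apache.iceberg.hadoop.HadoopTableOperations;

// Custom operations hook; a subclass like this could add behavior
// such as locking or metrics on top of the file-based operations.
// The super(...) arguments here are an assumption and may differ
// by Iceberg version.
class IcebergTableOperations extends HadoopTableOperations {
    IcebergTableOperations(Path location, Configuration conf) {
        super(location, conf);
    }
}

public class IcebergTable extends BaseTable {
    private final IcebergTableOperations ops;

    public IcebergTable(IcebergTableOperations ops, String name) {
        super(ops, name);
        this.ops = ops;
    }
}
```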
Hi everyone,
The IPMC vote for the Iceberg 0.7.0-incubating release passed!
I've pushed the source artifacts and a release tag, and released the artifacts
in Nexus. The binaries should be available in Maven Central in the next day
or so. I'll update the web page and then we can send an announcement.
Hey Iceberg devs!
I've been following the discussion about how partition spec evolution works
in Iceberg, and recently I've been trying to implement it in some code
I've been writing. However, I've been trying to do this through the
HadoopTables API and haven't been able to figure
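For what it's worth, a minimal sketch of loading a path-based table through `HadoopTables` looks like the following. The table location and column name are hypothetical, and `updateSpec()` only exists in newer Iceberg releases, so this assumes a version that ships the spec-evolution API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;

public class EvolveSpecExample {
    // Loads a path-based table and adds an identity partition field.
    // Both arguments are hypothetical; updateSpec() is only available
    // in Iceberg releases that include the spec-evolution API.
    static void addPartitionField(String tableLocation, String column) {
        HadoopTables tables = new HadoopTables(new Configuration());
        Table table = tables.load(tableLocation);
        table.updateSpec()
            .addField(column)
            .commit();
    }
}
```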
Are there any plans to support Apache CarbonData as an integral file format
within Iceberg? I'd like to understand if such an integration would be
both feasible and complementary, as there seems to be some intersection of
existing and planned features between the two projects.
Thanks for your time,