Sorry for the late reply. There is no public API for updating a table's partition spec right now. Tables can handle multiple specs, but you would have to use an approach like the one Thippana suggested.
We also have a few internal things that we need to fix to avoid collisions in metadata tables. Right now, we assign the same IDs to partition data fields in manifest files. If you tried to read the data files metadata table with conflicting specs, you would get the data from both specs mixed together, which could be a problem if the specs have different partition value types or a different set of partition fields. This is one reason why we haven't released an API for changing the spec yet. We also don't quite know what the API should look like. It would be great to hear proposals from the community for this!

rb

On Fri, Oct 25, 2019 at 4:46 PM Thippana Vamsi Kalyan <va...@dremio.com> wrote:

> One approach is to extend the BaseTable and HadoopTableOperations classes:
>
> public static class IcebergTable extends BaseTable {
>   private final IcebergTableOperations ops;
>
>   private IcebergTable(IcebergTableOperations ops, String name) {
>     super(ops, name);
>     this.ops = ops;
>   }
>
>   IcebergTableOperations ops() {
>     return ops;
>   }
> }
>
> public class IcebergTableOperations extends HadoopTableOperations {
>   public IcebergTableOperations(Path location, Configuration conf) {
>     super(location, conf);
>   }
> }
>
> A wrapper function can be written to create an Iceberg table using the
> above extended classes.
>
> Once we have instances of the above classes pointing to an Iceberg table,
> the following should work:
>
> IcebergTable table = createIcebergTableWrapper(SCHEMA, spec, tableLocation);
>
> TableMetadata base = table.ops().current();
>
> table.ops().commit(base, base.updatePartitionSpec(newSpec));
>
> Note that the method table.ops() is available on the specialized
> IcebergTable class.
>
> On Fri, Oct 25, 2019 at 9:51 PM Christine Mathiesen <t-cmathie...@hotels.com> wrote:
>
>> Hey Iceberg devs!
>>
>> I've been following along with the discussion about how partition spec
>> evolution in Iceberg works, and recently I've been trying to implement
>> this in some code I've been writing.
>> However, I've been trying to implement it using the HadoopTables API
>> and haven't been able to figure it out.
>>
>> From what I've been reading, I would expect this operation to look
>> something like:
>>
>> Table table = tables.create(SCHEMA, spec, tableLocation);
>>
>> TableMetadata base = table.operations.current();
>>
>> base.updatePartitionSpec(newSpec);
>>
>> table.refresh();
>>
>> However, I'm not finding a way of accessing and modifying this table
>> metadata when trying to use HadoopTables. Schema evolution has a nice
>> UpdateSchema class for this, but am I missing something for the partition
>> evolution side?
>>
>> Would anyone be able to point me in the right direction?
>>
>> Thank you!
>>
>> Christine Mathiesen
>> Software Development Intern
>> BDP – Hotels.com
>> Expedia Group
>
> --
> Best regards
> T.Vamsi Kalyan
> +91-94905 56669

--
Ryan Blue
Software Engineer
Netflix
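[Editor's note: the workaround quoted above boils down to one design trick: `BaseTable` does not expose its `TableOperations` object, so you subclass it, keep your own reference, and add an `ops()` accessor. The Iceberg code itself cannot run without the library on the classpath, so the sketch below is a minimal, dependency-free analogue of that pattern. Every class name here (`Operations`, `Table`, `AccessibleTable`) is a stand-in invented for illustration, not an Iceberg API.]

```java
// Stand-in for TableOperations: holds the "current metadata" as a string.
class Operations {
    private String current = "spec-v1";

    String current() {
        return current;
    }

    // Simplified commit: replace metadata only if the caller's base is current.
    void commit(String base, String updated) {
        if (!base.equals(current)) {
            throw new IllegalStateException("stale base metadata");
        }
        current = updated;
    }
}

// Stand-in for BaseTable: holds an Operations object but never exposes it.
class Table {
    private final Operations ops;
    private final String name;

    Table(Operations ops, String name) {
        this.ops = ops;
        this.name = name;
    }

    String name() {
        return name;
    }
}

// The workaround: a subclass that keeps its own reference and exposes it,
// just as IcebergTable exposes ops() in the thread above.
class AccessibleTable extends Table {
    private final Operations ops;

    AccessibleTable(Operations ops, String name) {
        super(ops, name);
        this.ops = ops;
    }

    Operations ops() {
        return ops;
    }
}

public class Main {
    public static void main(String[] args) {
        AccessibleTable table = new AccessibleTable(new Operations(), "t");
        String base = table.ops().current();
        // Analogous to ops.commit(base, base.updatePartitionSpec(newSpec)).
        table.ops().commit(base, "spec-v2");
        System.out.println(table.ops().current()); // prints "spec-v2"
    }
}
```

The caveat also carries over: this works only because the subclass receives the operations object at construction time, which is why the thread wraps table creation in a helper rather than casting an existing `Table`.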