Hi,

I am trying to create a partitioned Iceberg table with the Scala code below, based on an example in the docs:
df_c.writeTo(output_table)
  .partitionBy(days(col("last_updated")))
  .createOrReplace()
However, this code does not compile; the compiler reports two errors:

value partitionBy is not a member of 
org.apache.spark.sql.DataFrameWriterV2[org.apache.spark.sql.Row]
[error] possible cause: maybe a semicolon is missing before `value partitionBy'?
[error]       .partitionBy(days(col("last_updated")))
[error]        ^
[error]  not found: value days
[error]       .partitionBy(days(col("last_updated")))
[error]                    ^
[error] two errors found
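Reading the DataFrameWriterV2 scaladoc, I wonder whether the v2 method is spelled `partitionedBy` rather than `partitionBy`, and whether the `days` transform needs to be imported from `org.apache.spark.sql.functions`. A sketch of what I would then write (not yet verified, assuming Spark 3.x):

```scala
import org.apache.spark.sql.functions.{col, days}

// DataFrameWriterV2: the method appears to be partitionedBy, and the
// partition transforms (days, months, hours, bucket, ...) live in
// org.apache.spark.sql.functions
df_c.writeTo(output_table)
  .partitionedBy(days(col("last_updated")))
  .createOrReplace()
```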

I am not sure where to look for the problem. Any advice would be appreciated.

Best regards,

Saulius Pakalka
