Hello,


Is there a Hive (or Spark DataFrame) partitionBy equivalent in Flink?

I'm looking to save output as CSV files partitioned by two columns (date and
hour).

The partitionBy in the DataSet API is more for partitioning the data on a
column for further processing, not for writing partitioned output.



I'm assuming there is no direct API to do this, so what would be the best
way of achieving it?
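For concreteness, the Hive/Spark-style layout I'm after is one directory per (date, hour) combination, e.g. `date=2016-05-01/hour=00/part-0000.csv`. Here is a standalone sketch of that layout outside Flink (the function and column names are just illustrative, not any Flink API):

```python
import csv
import os
from collections import defaultdict

def write_partitioned_csv(rows, base_dir, fieldnames):
    """Write rows into Hive-style date=<d>/hour=<h> subdirectories,
    one CSV file per (date, hour) partition. Illustrative only."""
    # Group rows by their (date, hour) partition key.
    buckets = defaultdict(list)
    for row in rows:
        buckets[(row["date"], row["hour"])].append(row)

    paths = []
    for (d, h), part_rows in buckets.items():
        # One directory per partition, named in key=value style.
        part_dir = os.path.join(base_dir, "date=" + d, "hour=" + h)
        os.makedirs(part_dir, exist_ok=True)
        path = os.path.join(part_dir, "part-0000.csv")
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(part_rows)
        paths.append(path)
    return sorted(paths)
```

The question is essentially how to get Flink to produce this directory structure when writing its output.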



Srikanth
