Hi Caizhi,
thank you for your response. The full exception is the following:
Exception in thread "main" org.apache.flink.table.api.TableException: Arity
[7] of result [ArrayBuffer(String, String, String, String, String, String,
Timestamp)] does not match the number[1] of requested type
[GenericTy
Hi Federico,
I can't reproduce the error in my local environment. Would you mind sharing
your code and the full exception stack trace with us? That will help us
diagnose the problem. Thanks.
Federico D'Ambrosio wrote on Wednesday, July 24, 2019 at 5:45 PM:
> Hi Caizhi,
>
> thank you for your response.
>
> 1) I see, I'll
Hi Caizhi,
thank you for your response.
1) I see, I'll use a compatible string format
2) I'm defining the case class like this:
case class cEvent(state: String, id: String, device: String,
                  instance: String, subInstance: String,
                  groupLabel: String, time: Timestamp)
object cEve
Hi Federico,
1) As far as I know, you can't currently set a format for timestamp parsing
(see `SqlTimestampParser`: it simply passes your string to
`SqlTimestamp.valueOf`, so your timestamp format must be compatible with
what `SqlTimestamp` accepts).
2) How do you define your case class? You have to define its par
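Regarding point 1 above: `SqlTimestamp` here is `java.sql.Timestamp`, whose
`valueOf` accepts only the JDBC escape format `yyyy-[m]m-[d]d hh:mm:ss[.f...]`.
A minimal sketch, independent of Flink, showing which strings parse and which
are rejected (the sample values are illustrative, not taken from the thread):

```java
import java.sql.Timestamp;

public class TimestampFormatDemo {
    public static void main(String[] args) {
        // The JDBC escape format parses fine, with optional fractional seconds.
        Timestamp ok = Timestamp.valueOf("2019-07-24 17:45:00.123");
        System.out.println("parsed: " + ok);

        // An ISO-8601 style string with a 'T' separator is rejected,
        // so such a format would need to be rewritten before parsing.
        try {
            Timestamp.valueOf("2019-07-24T17:45:00");
            System.out.println("unexpectedly parsed");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected ISO-8601 'T' form");
        }
    }
}
```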
Hello everyone,
I've always used the DataStream API, and now I'm trying out the Table API to
create a DataStream from a CSV. I'm running into a couple of issues:
1) I'm reading a CSV with 7 fields in total, the 7th of which is a date
serialized as a Spark TimestampType, written in the CSV like this:
20