Hi
I'm maintaining the Flink sink connector in the Apache Iceberg community.
Does your classpath include the correct iceberg-flink-runtime.jar?
Please follow the steps here:
https://github.com/apache/iceberg/blob/master/site/docs/flink.md
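In case it helps, here is a minimal sketch of what the runtime jar provides
once it is on the classpath (based on the 0.10.x-era iceberg-flink API;
builder method names have shifted between releases, so treat this as
illustrative rather than definitive):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class IcebergSinkSketch {
  /** Appends a RowData stream to an existing Iceberg table. */
  public static void appendToIceberg(DataStream<RowData> rows, String tableLocation) {
    // Loads a table stored directly on HDFS/S3 (a "Hadoop table");
    // use TableLoader.fromCatalog(...) for catalog-managed tables instead.
    TableLoader tableLoader = TableLoader.fromHadoopTable(tableLocation);

    FlinkSink.forRowData(rows)
        .tableLoader(tableLoader)
        .overwrite(false)   // append rather than replace existing data
        .build();
  }
}

If these classes can't be resolved at runtime, that is usually the sign that
the wrong (or no) iceberg-flink-runtime.jar is on the classpath.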
Thanks.
On Wed, Oct 21, 2020 at 10:54 PM 18717838093 <187
Hi
From my observation in the HBase community, there are still lots of
HBase users running their production clusters on version 1.x (1.4.x or
1.5.x), so I'd like to suggest
supporting both the HBase 1.x and HBase 2.x connectors.
Thanks.
On Sat, Jun 20, 2020 at 2:41 PM Ming Li wrote:
> +1 t
> If the call to mapResultToOutType(Result) finished without an error, there
> is no need to restart from the same row.
> The new scanner should start from the next row.
> Is that so or am I missing something?
Yeah, you are right. I've filed the issue
https://issues.apache.org/jira/browse/FLINK-1494
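For anyone following along, here is a rough sketch (not the actual patch) of
the resume logic being discussed: remember the row key of the last Result
that mapResultToOutType handled successfully, and when the scanner has to be
re-opened, start from the key just after it instead of re-reading that row.
The class and method names here are hypothetical:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScannerResumeSketch {
  private byte[] lastCompletedRow;  // row key of the last successfully mapped Result

  /** Call after mapResultToOutType(result) returns without throwing. */
  public void markCompleted(byte[] rowKey) {
    this.lastCompletedRow = rowKey;
  }

  /** Sets the start row for the scan used to restart a failed scanner. */
  public Scan configureRestart(Scan scan) {
    byte[] start = (lastCompletedRow == null)
        ? HConstants.EMPTY_START_ROW
        // Appending a zero byte gives the smallest key strictly greater than
        // lastCompletedRow, so the restarted scan begins at the next row.
        : Bytes.add(lastCompletedRow, new byte[] {0});
    scan.setStartRow(start);
    return scan;
  }
}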
Hi
The Kafka table source & sink connectors have been implemented (at least
Flink 1.9 supports this), but the RocksDB connector is
not supported yet, so you may need to implement it yourself. Here [1] we have
a brief wiki showing which interfaces we need to implement,
but it seems it's not detailed enough per
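Until a proper connector exists, one do-it-yourself starting point could be a
plain RichSinkFunction that writes key/value pairs into a local RocksDB
instance via the RocksDB JNI client. This is just a sketch under that
assumption (the path and the Tuple2<String, String> schema are hypothetical),
not an official connector:

import java.nio.charset.StandardCharsets;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public class RocksDBSink extends RichSinkFunction<Tuple2<String, String>> {
  private final String dbPath;       // hypothetical local path, e.g. "/tmp/flink-rocksdb"
  private transient RocksDB db;
  private transient Options options;

  public RocksDBSink(String dbPath) {
    this.dbPath = dbPath;
  }

  @Override
  public void open(Configuration parameters) throws Exception {
    RocksDB.loadLibrary();
    options = new Options().setCreateIfMissing(true);
    // One embedded DB per parallel subtask; make sure each subtask gets its
    // own path if you run with parallelism > 1.
    db = RocksDB.open(options, dbPath);
  }

  @Override
  public void invoke(Tuple2<String, String> value, Context context) throws Exception {
    db.put(value.f0.getBytes(StandardCharsets.UTF_8),
        value.f1.getBytes(StandardCharsets.UTF_8));
  }

  @Override
  public void close() throws Exception {
    if (db != null) {
      db.close();
    }
    if (options != null) {
      options.close();
    }
  }
}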
Hi Polarisary.
I checked the Flink codebase and your stack traces; it seems you need to
format the timestamp as: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
The code is here:
https://github.com/apache/flink/blob/38e4e2b8f9bc63a793a2bddef5a578e3f80b7376/flink-formats/flink-json/src/main/java/org/apache/flink/forma
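In case it helps, here is a small java.time sketch that produces a timestamp
in that shape (plain JDK, nothing Flink-specific assumed):

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimestampFormatSketch {
  public static void main(String[] args) {
    DateTimeFormatter fmt = DateTimeFormatter
        .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
        .withZone(ZoneOffset.UTC);
    // Prints something like: 2020-10-21T14:54:03.123Z
    System.out.println(fmt.format(Instant.now()));
  }
}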