RocksDB is originally implemented in C++. To use it from Java, Flink ships a
RocksDB jar that bridges to the native code via JNI.
Whenever a Flink job starts, the RocksDB native library is loaded: a file
named librocksdb*.so is extracted from the RocksDB jar and placed in the
tmp directory. So I think you
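To make that extraction behavior concrete, here is a minimal sketch (not Flink's or RocksDB's actual loader code; class and method names are illustrative) of how a JNI wrapper typically unpacks a bundled .so into java.io.tmpdir before calling System.load():

```java
import java.io.*;
import java.nio.file.*;

// Illustrative sketch of JNI native-library extraction (hypothetical
// class, not the real RocksDB loader).
public class NativeLibSketch {
    public static File extractToTmp(InputStream soBytes, String prefix) throws IOException {
        // A uniquely named temp file is created on every load -- which is
        // why a fresh librocksdb*.so can appear in tmp each time a job starts.
        File tmp = File.createTempFile(prefix, ".so");
        Files.copy(soBytes, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
        tmp.deleteOnExit(); // only effective on a clean JVM shutdown
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        // Placeholder bytes standing in for the real shared library.
        byte[] fake = {0x7f, 'E', 'L', 'F'};
        File f = extractToTmp(new ByteArrayInputStream(fake), "librocksdbjni");
        System.out.println(f.getName());
        // A real loader would now call System.load(f.getAbsolutePath());
    }
}
```

Because each load writes a new uniquely named file and deleteOnExit() only runs on clean JVM shutdown, such files can accumulate across many job starts.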
Hi @ll,
we just changed the state backend from filesystem to RocksDB. Since then (3
days), 451 files totaling 1.8 GB have accumulated in the tmp directory. All
of the files are named librocksdb*.so.
Did we do something wrong, or is this a bug?
Greets,
Dominique
Sent from my Samsung device.
Hello there,
I need to reset a variable after every time window in a map function. Once
I set a value on the variable, it carries over into the next time window as
well. I would like the variable to start every new time window with its
original value.
Eg:
DataStream> grid = Y.map(new
AddC
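A field in a MapFunction lives as long as the operator, not as long as a window, so it will never reset by itself. The usual approach is to move the per-window logic into a windowed aggregation, whose accumulator is created fresh for every window. Here is a plain-Java sketch of that accumulator-per-window idea (not the Flink API itself; the class name and tumbling-window arithmetic are illustrative):

```java
import java.util.*;

// Illustrative sketch: each tumbling window owns a fresh accumulator,
// mimicking what a Flink AggregateFunction does inside
// .window(...).aggregate(...).
public class WindowedSum {
    // Sums values[i] arriving at timestamps[i] into tumbling windows of
    // windowMillis; a new sum starts at every window boundary.
    public static Map<Long, Integer> tumblingSums(long[] timestamps, int[] values, long windowMillis) {
        Map<Long, Integer> sums = new TreeMap<>();
        for (int i = 0; i < timestamps.length; i++) {
            long windowStart = timestamps[i] - (timestamps[i] % windowMillis);
            // merge() starts from the given value for a window seen for the
            // first time: the "variable" resets simply because each window
            // owns its own entry.
            sums.merge(windowStart, values[i], Integer::sum);
        }
        return sums;
    }
}
```

The same effect in Flink comes for free: the framework discards the window's accumulator when the window fires, so no manual reset is needed.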
Hi,
yes, the Flink Kafka connector for Kafka 0.8 handles broker leader changes
without failing; the SimpleConsumer API that Kafka 0.8 provides does not
handle this itself.
The 0.9 Flink Kafka consumer also supports broker leader changes
transparently.
If you keep using the Flink Kafka 0.8 connector with a 0.9 b
Hi everybody,
I have finally reached streaming territory. For a student project I want to
implement CluStream for Flink. I guess this is an especially interesting
case for trying queryable state :)
But I am having problems with the first steps. My input data is a CSV file
of records. For a start I just want to window
I am using Flink 1.2-Snapshot. My data looks like the following:
- id=25398102, sourceId=1, ts=2016-10-15 00:00:56, user=14, value=919
- id=25398185, sourceId=1, ts=2016-10-15 00:01:06, user=14, value=920
- id=25398210, sourceId=1, ts=2016-10-15 00:01:16, user=14, value=944
- id=253982
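Before such records can be windowed on event time, each line has to be parsed and its ts column converted to a long timestamp. A minimal sketch of that parsing step (assuming the key=value layout shown above and UTC timestamps; the class and method names are illustrative, not Flink API):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.*;

// Illustrative sketch: parse one "key=value, key=value, ..." record line
// into fields and derive an epoch-millis event timestamp from its ts column.
public class RecordParser {
    static final DateTimeFormatter TS = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public static Map<String, String> parse(String line) {
        Map<String, String> fields = new HashMap<>();
        for (String part : line.split(",\\s*")) {
            String[] kv = part.split("=", 2);
            fields.put(kv[0].trim(), kv[1].trim());
        }
        return fields;
    }

    public static long eventTimeMillis(Map<String, String> fields) {
        // Flink's event-time windows need a long timestamp per record
        // (assigned via a timestamp/watermark assigner).
        return LocalDateTime.parse(fields.get("ts"), TS).toInstant(ZoneOffset.UTC).toEpochMilli();
    }
}
```

In a Flink job the long returned by eventTimeMillis would feed the timestamp assigner, after which tumbling or sliding event-time windows can be applied.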