Hi,
My understanding of the RocksDB state backend is as follows:
When using the RocksDB state backend, the checkpoints are backed up
locally (on the TaskManager) using the backup feature of RocksDB, by taking
snapshots of RocksDB, which are consistent read-only views of the RocksDB
database.
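In case it is useful for anyone following along, here is a minimal sketch of
enabling that backend; the checkpoint URI and interval are just placeholders:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder URI: completed checkpoints go here, while the working state
        // lives in local RocksDB instances on the TaskManagers.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));
        env.enableCheckpointing(60000); // checkpoint every 60 seconds
    }
}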
> I wonder if it can be solved by storing state in the external store with a
> tuple: (state, previous_state, checkpoint_id). Then when reading from the
> store, if checkpoint_id is in the future, read the previous_state,
> otherwise take the current_state.
>
I think that would handle the "filtering" part.
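A rough sketch of that read-side filtering, with a made-up value class just to
make the idea concrete (the field and method names are not from any existing API):

/**
 * Hypothetical record written to the external store: the current value, the
 * previous value, and the id of the checkpoint that produced the current value.
 */
class VersionedValue {
    final String state;
    final String previousState;
    final long checkpointId;

    VersionedValue(String state, String previousState, long checkpointId) {
        this.state = state;
        this.previousState = previousState;
        this.checkpointId = checkpointId;
    }

    /**
     * Readers pass the id of the last checkpoint they know to be complete.
     * If this record was written by a checkpoint that is still "in the future"
     * (i.e. one that might still fail), fall back to the previous value;
     * otherwise the current value is safe to read.
     */
    String read(long lastCompletedCheckpointId) {
        return checkpointId > lastCompletedCheckpointId ? previousState : state;
    }
}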
Hi everyone,
I wanted to share this with the community: we have announced the first
round of speakers and sessions of Flink Forward 2016, and it looks amazing!
Check it out here: http://flink-forward.org/program/sessions/
This year we have a great mix of use case talks (e.g., by Netflix, Alibaba
Never mind, I think I understand the point about partial writes. Is it that
if we write out our buffer of updates to the external store and the batch
update is not atomic, then the external store is in an inconsistent state
(with some state from the attempted checkpoint, and some from the previous
checkpoint)?
Hi Chen,
Can you explain what you mean a bit more? I'm not sure I understand the
problem.
Does anyone know if the tooling discussed here has been merged into Flink
already? Or if there's an example of what this custom sink would look like?
I guess the sink would buffer updates in-memory between checkpoints?
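Something along these lines, maybe; this is only a sketch of that buffering
idea (the class and flush method are made up), not the tooling discussed
earlier in the thread:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.runtime.state.CheckpointListener;
import org.apache.flink.streaming.api.checkpoint.Checkpointed;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

/**
 * Sketch: buffer records in memory, make the buffer part of the checkpoint,
 * and push the batch to the external store once the checkpoint completes.
 * A real implementation would also have to track which records belong to
 * which checkpoint (records can arrive between the snapshot and the
 * completion notification) and deal with the non-atomic batch write itself.
 */
public class BufferingSink extends RichSinkFunction<String>
        implements Checkpointed<ArrayList<String>>, CheckpointListener {

    private ArrayList<String> buffer = new ArrayList<>();

    @Override
    public void invoke(String value) {
        buffer.add(value);                  // collect updates between checkpoints
    }

    @Override
    public ArrayList<String> snapshotState(long checkpointId, long checkpointTimestamp) {
        return new ArrayList<>(buffer);     // buffered updates go into the checkpoint
    }

    @Override
    public void restoreState(ArrayList<String> state) {
        buffer = state;                     // recover the un-flushed batch after a failure
    }

    @Override
    public void notifyCheckpointComplete(long checkpointId) {
        writeBatchToExternalStore(buffer);  // flush only after the checkpoint completed
        buffer.clear();
    }

    private void writeBatchToExternalStore(List<String> batch) {
        // placeholder: issue the batch update against the external store
    }
}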
Hi everyone,
I am reading messages from a Kafka topic with 2 partitions and using event
time. This is my code:
.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Request>() {
    @Override
    public long extractAscendingTimestamp(Request req) {
        return req.ts;
    }
})
.windowAll(Tumbl
Hi!
In the latest master and in the upcoming 1.1, all files in the lib
folder will be shipped to the Yarn cluster and added to the class
path. In Flink versions <= 1.0.x, no files are added to the ship
files by default (only the flink-dist*.jar is shipped).
Regardless of the version, if yo
Looping in Max directly because he probably knows the Yarn stuff best.
@Max: Do you have any idea how to do this?
On Fri, 22 Jul 2016 at 05:46 김동일 wrote:
> I saw the source code
> of
> flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
> Flink ships the FLINK_LIB
Hi,
one master should be enough to get Flink working. Did you ensure that all the
configuration files are complete, correct, and also distributed to all nodes in
the cluster? If you still cannot start Flink after checking that, the relevant
sections from the log and your configuration would be helpful.
Everything in FLINK_LIB_DIR should be shipped and automatically included in the
classpath. I assume there might be a problem in how you try to access the files
on the classpath. Maybe you can provide more details on that if the problem
remains.
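For what it's worth, a typical way to read such a file is via the context
classloader; the file name below is just a placeholder, and this assumes the
file actually ends up on the classpath (e.g. inside one of the shipped jars):

import java.io.InputStream;

public class ClasspathResourceExample {
    public static void main(String[] args) {
        // "my-config.properties" is a placeholder for whatever file you expect
        // to find on the classpath.
        InputStream in = Thread.currentThread()
                .getContextClassLoader()
                .getResourceAsStream("my-config.properties");
        if (in == null) {
            System.err.println("Resource not found on the classpath");
        } else {
            System.out.println("Resource found");
        }
    }
}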
> On 25.07.2016, at 07:29, Dong-iL, Kim wrote:
>
As far as I can see from the example, you are trying to filter by key and
"flatten" nested maps for each record. Both the DataSet and the DataStream API
(from the question it is unclear which one you would like to use, but it works
with both) provide transformations that can do this for you. For an o
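To sketch what that could look like on the DataStream side (the record layout,
key, and field names here are invented for illustration; the same filter/flatMap
combination exists in the DataSet API):

import java.util.Collections;
import java.util.Map;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class FlattenExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Invented records: (key, nested map of field -> value).
        DataStream<Tuple2<String, Map<String, Integer>>> input = env.fromElements(
                Tuple2.of("a", Collections.singletonMap("x", 1)),
                Tuple2.of("b", Collections.singletonMap("y", 2)));

        input
            .filter(record -> record.f0.equals("a"))   // filter by key
            .flatMap(new FlatMapFunction<Tuple2<String, Map<String, Integer>>,
                                         Tuple3<String, String, Integer>>() {
                @Override
                public void flatMap(Tuple2<String, Map<String, Integer>> record,
                                    Collector<Tuple3<String, String, Integer>> out) {
                    // emit one flat record per entry of the nested map
                    for (Map.Entry<String, Integer> e : record.f1.entrySet()) {
                        out.collect(Tuple3.of(record.f0, e.getKey(), e.getValue()));
                    }
                }
            })
            .print();

        env.execute("flatten nested maps");
    }
}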
Hey everyone,
over the last few days I looked into the approach that the King team took, and
another question about my original point 3 arose:
To 3) Would an approach similar to King/RBEA even be possible in combination
with Flink CEP? As I understand it, patterns have to be defined in Java code and
therefore have
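Just to illustrate that last point, a pattern in Flink CEP is built
programmatically, roughly like this (the event type and conditions are
invented; this is a sketch, not a full job):

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CepPatternExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy event stream; the point is that the pattern below is fixed in
        // compiled Java code rather than loaded dynamically like an RBEA script.
        DataStream<String> events = env.fromElements("login", "browse", "purchase");

        Pattern<String, ?> pattern = Pattern.<String>begin("first")
                .where(evt -> evt.equals("login"))
                .followedBy("second")
                .where(evt -> evt.equals("purchase"));

        PatternStream<String> matches = CEP.pattern(events, pattern);
        // matches.select(...) would turn detected sequences into output records
    }
}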