> *Write in RocksDbStateBackend.*
> *Read in FsStateBackend.*
> *It's NOT a match.*


Yes, that is right. Also, this does not work:

Write in FsStateBackend
Read in RocksDbStateBackend
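
To make the mismatch concrete, here is a minimal sketch of how each job version would select its state backend (Flink 1.11-era API; the HDFS path is taken from the steps quoted below and may differ in your setup):

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Version A of the job: checkpoints are written by the FsStateBackend.
env.setStateBackend(new FsStateBackend("hdfs://Desktop:9000/tmp/flinkck"))

// Version B of the job: the RocksDBStateBackend (here with incremental
// checkpoints enabled) cannot restore what version A wrote, and vice versa.
env.setStateBackend(new RocksDBStateBackend("hdfs://Desktop:9000/tmp/flinkck", true))
```

Whichever backend writes a checkpoint must also be the one configured when resuming from it.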

For questions and support in Chinese, you can use the
user...@flink.apache.org mailing list. See the instructions at
https://flink.apache.org/zh/community.html for how to join it.
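
For completeness: the State Processor API mentioned earlier in this thread can bridge the two backends. A rough sketch, assuming Flink 1.11's DataSet-based API; the per-operator read/bootstrap transformations are elided, the uid and transformation names are hypothetical, and 128 is simply the default max parallelism:

```scala
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.state.api.Savepoint

val bEnv = ExecutionEnvironment.getExecutionEnvironment

// Load the old checkpoint with the backend that wrote it -- judging by the
// IncrementalRemoteKeyedStateHandle in the error, that was RocksDB.
val old = Savepoint.load(
  bEnv,
  "hdfs://Desktop:9000/tmp/flinkck/df6d62a43620f258155b8538ef0ddf1b/chk-22",
  new RocksDBStateBackend("hdfs://Desktop:9000/tmp/flinkck", true))

// ... read each operator's state from `old` (readKeyedState, readListState, ...)
// and rebuild it as BootstrapTransformations ...

// Write a new savepoint in the target backend's format.
Savepoint
  .create(new FsStateBackend("hdfs://Desktop:9000/tmp/flinkck"), 128)
  // .withOperator("my-uid", myBootstrapTransformation)  // hypothetical
  .write("hdfs://Desktop:9000/tmp/flinkck/migrated")

bEnv.execute("migrate state backend")
```

The resulting savepoint can then be passed to `flink run -s` for the job that uses the target backend.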

Best,
David

On Fri, Oct 2, 2020 at 4:45 PM 大森林 <appleyu...@foxmail.com> wrote:

> Thanks for your replies~!
>
> My English is poor, so here is my understanding of your replies:
>
> *Write in RocksDbStateBackend.*
> *Read in FsStateBackend.*
> *It's NOT a match.*
> So I'm wrong in step 5?
> Is my above understanding right?
>
> Thanks for your help.
>
> ------------------ Original Message ------------------
> *From:* "David Anderson" <dander...@apache.org>;
> *Sent:* Friday, October 2, 2020, 10:35 PM
> *To:* "大森林"<appleyu...@foxmail.com>;
> *Cc:* "user"<user@flink.apache.org>;
> *Subject:* Re: need help about "incremental checkpoint",Thanks
>
> It looks like you were trying to resume from a checkpoint taken with the
> FsStateBackend into a revised version of the job that uses the
> RocksDbStateBackend. Switching state backends in this way is not supported:
> checkpoints and savepoints are written in a state-backend-specific format,
> and can only be read by the same backend that wrote them.
>
> It is possible, however, to migrate between state backends using the State
> Processor API [1].
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html
>
> Best,
> David
>
> On Fri, Oct 2, 2020 at 4:07 PM 大森林 <appleyu...@foxmail.com> wrote:
>
>> *I want to do an experiment with "incremental checkpoint"*
>>
>> my code is:
>>
>> https://paste.ubuntu.com/p/DpTyQKq6Vk/
>>
>>
>>
>> pom.xml is:
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>> <project xmlns="http://maven.apache.org/POM/4.0.0"
>> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
>> http://maven.apache.org/xsd/maven-4.0.0.xsd">
>> <modelVersion>4.0.0</modelVersion>
>>
>> <groupId>example</groupId>
>> <artifactId>datastream_api</artifactId>
>> <version>1.0-SNAPSHOT</version>
>> <build>
>> <plugins>
>> <plugin>
>> <groupId>org.apache.maven.plugins</groupId>
>> <artifactId>maven-compiler-plugin</artifactId>
>> <version>3.1</version>
>> <configuration>
>> <source>1.8</source>
>> <target>1.8</target>
>> </configuration>
>> </plugin>
>>
>> <plugin>
>> <groupId>org.scala-tools</groupId>
>> <artifactId>maven-scala-plugin</artifactId>
>> <version>2.15.2</version>
>> <executions>
>> <execution>
>> <goals>
>> <goal>compile</goal>
>> <goal>testCompile</goal>
>> </goals>
>> </execution>
>> </executions>
>> </plugin>
>>
>>
>>
>> </plugins>
>> </build>
>>
>> <dependencies>
>>
>> <!--
>> https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-scala
>> -->
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-streaming-scala_2.11</artifactId>
>> <version>1.11.1</version>
>> <!--<scope>provided</scope>-->
>> </dependency>
>>
>> <!--<dependency>-->
>> <!--<groupId>org.apache.flink</groupId>-->
>> <!--<artifactId>flink-streaming-java_2.12</artifactId>-->
>> <!--<version>1.11.1</version>-->
>> <!--<scope>compile</scope>-->
>> <!--</dependency>-->
>>
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-clients_2.11</artifactId>
>> <version>1.11.1</version>
>> </dependency>
>>
>>
>>
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
>> <version>1.11.2</version>
>> <!--<scope>test</scope>-->
>> </dependency>
>>
>> <dependency>
>> <groupId>org.apache.hadoop</groupId>
>> <artifactId>hadoop-client</artifactId>
>> <version>3.3.0</version>
>> </dependency>
>>
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-core</artifactId>
>> <version>1.11.1</version>
>> </dependency>
>>
>> <!--<dependency>-->
>> <!--<groupId>org.slf4j</groupId>-->
>> <!--<artifactId>slf4j-simple</artifactId>-->
>> <!--<version>1.7.25</version>-->
>> <!--<scope>compile</scope>-->
>> <!--</dependency>-->
>>
>> <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-cep -->
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-cep_2.11</artifactId>
>> <version>1.11.1</version>
>> </dependency>
>>
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-cep-scala_2.11</artifactId>
>> <version>1.11.1</version>
>> </dependency>
>>
>> <dependency>
>> <groupId>org.apache.flink</groupId>
>> <artifactId>flink-scala_2.11</artifactId>
>> <version>1.11.1</version>
>> </dependency>
>>
>>
>>
>> <dependency>
>> <groupId>org.projectlombok</groupId>
>> <artifactId>lombok</artifactId>
>> <version>1.18.4</version>
>> <!--<scope>provided</scope>-->
>> </dependency>
>>
>> </dependencies>
>> </project>
>>
>>
>>
>> the error I got is:
>>
>> https://paste.ubuntu.com/p/49HRYXFzR2/
>>
>>
>>
>> *some of the above error is:*
>>
>> *Caused by: java.lang.IllegalStateException: Unexpected state handle
>> type, expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle,
>> but found: class
>> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle*
>>
>>
>>
>>
>>
>> The steps are:
>>
>> 1.mvn clean scala:compile compile package
>>
>> 2.nc -lk 9999
>>
>> 3.flink run -c wordcount_increstate  datastream_api-1.0-SNAPSHOT.jar
>> Job has been submitted with JobID df6d62a43620f258155b8538ef0ddf1b
>>
>> 4.input the following contents in nc -lk 9999
>>
>> before
>> error
>> error
>> error
>> error
>>
>> 5.
>>
>> flink run -s
>> hdfs://Desktop:9000/tmp/flinkck/df6d62a43620f258155b8538ef0ddf1b/chk-22 -c
>> StateWordCount datastream_api-1.0-SNAPSHOT.jar
>>
>> Then the above error happens.
>>
>>
>>
>> Please help, thanks~!
>>
>>
>> I have tried to subscribe to user@flink.apache.org,
>> but got no replies. If possible, please send your
>> valuable replies to appleyu...@foxmail.com as well, thanks.
>>
>>
>>
>
