So, regarding my question: can the behaviour I am observing be attributed
to potential differences between local (IntelliJ) and standalone modes, or
is that unrelated, and it is more of a coincidence due to the inherent
non-determinism you mention?
Using Flink 1.9.2, Java, FsStateBackend.
I was getting com.esotericsoftware.kryo.KryoException:
java.lang.NullPointerException on a value() operation on a ValueState variable
in a KeyedProcessFunction.
The object stored in state contained 2 PriorityQueue fields and the error
message indicated t
Hi Lian,
sorry for the late reply.
1. All serialization-related functions are just implementations of API
interfaces. As such, you can implement serializers yourself. In this case,
you could simply copy the code from 1.12 into your application. You may
adjust a few things that are different betwee
2.1) You overall seem to be on the right track.
- 1.1) maxAttempts is used by the source to initiate a clean shutdown;
if the source(s) of a streaming job shut down, the rest of the pipeline
does too once there is no more data to process.
- the exception you see is expected; we are killing a task
Great to hear that it works now :-)
On Fri, Oct 2, 2020 at 2:17 AM Lian Jiang wrote:
> Thanks Till. Making the scala version consistent using 2.11 solved the
> ClassNotFoundException.
>
> On Tue, Sep 29, 2020 at 11:58 PM Till Rohrmann
> wrote:
>
>> Hi Lian,
>>
>> I suspect that it is caused by
Yes, the patch call only triggers the cancellation.
You can check whether it is complete by polling the job status via
jobs/ and checking whether state is CANCELED.
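A minimal sketch of such a polling loop (Java 11 HttpClient; the URL, port,
and job id are placeholders, and the string match on the JSON is crude --
use a JSON parser in real code):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WaitForCancellation {
    public static void main(String[] args) throws Exception {
        String jobId = args[0]; // placeholder: the job id you cancelled
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8081/jobs/" + jobId)).GET().build();
        // Poll until the job status JSON reports CANCELED.
        while (true) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.body().contains("\"state\":\"CANCELED\"")) {
                break;
            }
            Thread.sleep(500);
        }
    }
}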
On 9/27/2020 7:02 PM, Eleanore Jin wrote:
I have noticed this: if I have Thread.sleep(1500); after the patch
call returned 202, t
Hi Sateesh,
my suspicion would be that your custom sink function is leaking connections
(which also count toward the open file limit). Is there a reason that you
cannot use Flink's ES connector?
I might have more ideas when you share your sink function.
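For illustration, a minimal sketch of a sink that holds exactly one
connection per subtask and releases it in close(); the client type and the
createClient() helper are hypothetical:

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class MyEsSink extends RichSinkFunction<String> {
    private transient AutoCloseable client; // hypothetical connection handle

    @Override
    public void open(Configuration parameters) {
        client = createClient(); // open one client per subtask, not per record
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        // write using the shared client; never open a new connection here
    }

    @Override
    public void close() throws Exception {
        if (client != null) {
            client.close(); // releases the file descriptors the connection holds
        }
    }

    private AutoCloseable createClient() {
        return () -> {}; // placeholder for the real client construction
    }
}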
Best,
Arvid
On Sun, Sep 27, 2020 at 7:16 PM
Also, you could check whether the Java 11 profile in Maven was
(de)activated for some reason.
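If it helps, Maven can print the profiles that are active for the build
(standard maven-help-plugin):

$ mvn help:active-profiles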
On Mon, Sep 28, 2020 at 3:29 PM Piotr Nowojski wrote:
> Hi,
>
> It sounds more like an IntelliJ issue, not a Flink issue. But have you
> checked your configured target language level for your modules?
>
> Best reg
Hi Austin,
yes, it should also work for ingestion time.
I am not entirely sure whether event time is preserved when converting a
Table into a retract stream. It should be possible and if it is not
working, then I guess it is a missing feature. But I am sure that @Timo
Walther knows more about it
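For reference, a sketch of re-assigning timestamps after the conversion, in
case event time is not carried over (assumes the Flink 1.11+
WatermarkStrategy API; extractTimestamp is a hypothetical helper returning
the event time from the Row):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

// inside your job, after building resultTable:
DataStream<Tuple2<Boolean, Row>> retractStream =
        tableEnv.toRetractStream(resultTable, Row.class);
DataStream<Tuple2<Boolean, Row>> withTimestamps = retractStream
        .assignTimestampsAndWatermarks(
                WatermarkStrategy
                        .<Tuple2<Boolean, Row>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner(
                                (record, ts) -> extractTimestamp(record.f1)));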
Hi Andreas,
yes, two Flink session clusters won't share the same BlobServer.
Is the problem easily reproducible? If yes, then it could be very helpful
to monitor the backlog length as Chesnay suggested.
One more piece of information is that we create a new TCP connection for
every blob we are dow
Hi Lian,
Thank you for reporting. It looks like a bug to me and I created a ticket
[1].
You have two options: wait for the fix or implement the fix yourself (copy
AvroSerializerSnapshot and use another way to write/read the schema), then
subclass AvroSerializer to use your snapshot. Of course, we
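A rough sketch of that second option (heavily simplified;
PatchedAvroSerializerSnapshot stands for your copy of the fixed snapshot
class, and is not a real Flink class):

import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
import org.apache.flink.formats.avro.typeutils.AvroSerializer;

public class PatchedAvroSerializer<T> extends AvroSerializer<T> {
    private final Class<T> type;

    public PatchedAvroSerializer(Class<T> type) {
        super(type);
        this.type = type;
    }

    @Override
    public TypeSerializerSnapshot<T> snapshotConfiguration() {
        // Use the copied snapshot, which writes/reads the schema differently.
        return new PatchedAvroSerializerSnapshot<>(type);
    }
}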
Hi Edward,
you are right to assume that the non-blocking version is the better fit.
You are also correct to assume that kryo just can't handle the underlying
fields.
I'd just go a different way to solve it: add your custom serializer for
PriorityQueue.
There is one [1] for the upcoming(?) Kryo v
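Registering such a serializer is a one-liner; here PriorityQueueSerializer
is a placeholder for whichever implementation you pick or port:

import java.util.PriorityQueue;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// in your pipeline setup:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().addDefaultKryoSerializer(PriorityQueue.class, PriorityQueueSerializer.class);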
Hi,
you are missing the Hadoop libraries, hence there is no HDFS support.
In Flink 1.10 and earlier, you would simply copy flink-shaded-hadoop-2-uber [1]
into your opt/ folder. However, since Flink 1.11, we recommend installing
Hadoop and pointing to it with HADOOP_CLASSPATH.
Now, the latter approac
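(The usual way to set that, assuming Hadoop is installed locally:

$ export HADOOP_CLASSPATH=`hadoop classpath`
)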
Hi Dan,
I'm assuming that you have different Kafka topics, and each topic contains
messages of a single protobuf type.
In that case, you have to specify the mapping from a topic name to its
Protobuf message type.
To do that, assume that you have a Kafka topic *A* that contains protobuf
messag
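A minimal sketch of such a mapping, using a KafkaDeserializationSchema that
dispatches on the record's topic (the topic names and the generated
MessageA/MessageB classes are illustrative):

import com.google.protobuf.Message;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class PerTopicProtobufSchema implements KafkaDeserializationSchema<Message> {

    @Override
    public Message deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
        // Dispatch on the topic name to the matching generated parser.
        switch (record.topic()) {
            case "A":
                return MessageA.parseFrom(record.value()); // hypothetical generated class
            case "B":
                return MessageB.parseFrom(record.value()); // hypothetical generated class
            default:
                throw new IllegalStateException("No protobuf type for topic " + record.topic());
        }
    }

    @Override
    public boolean isEndOfStream(Message nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Message> getProducedType() {
        return TypeInformation.of(Message.class);
    }
}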
I want to do an experiment with "incremental checkpoint".
my code is:
https://paste.ubuntu.com/p/DpTyQKq6Vk/
pom.xml is:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.or
It looks like you were trying to resume from a checkpoint taken with the
FsStateBackend into a revised version of the job that uses the
RocksDbStateBackend. Switching state backends in this way is not supported:
checkpoints and savepoints are written in a state-backend-specific format,
and can only
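In other words, when resuming you have to configure the same backend that
wrote the checkpoint. A minimal sketch (the checkpoint path is a
placeholder):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResumeJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Must match the backend that produced the checkpoint being restored;
        // the second argument enables incremental checkpoints.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
        // ... define the same job topology, then env.execute(...);
    }
}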
I want to do an experiment with the operator "aggregate"
My code is:
Aggregate.java
https://paste.ubuntu.com/p/vvMKqZXt3r/
UserActionLogPOJO.java
https://paste.ubuntu.com/p/rfszzKbxDC/
The error I got is:
Exception in thread "main"
org.apache.flink.api.common.typeutils.CompositeType$
Thanks for your replies~!
My English is poor, but here is my understanding of your replies:
Write in RocksDbStateBackend.
Read in FsStateBackend.
It's NOT a match.
So I'm wrong in step 5?
Is my above understanding right?
Thanks for your help.
>
>
> *Write in RocksDbStateBackend.*
> *Read in FsStateBackend.*
> *It's NOT a match.*
Yes, that is right. Also, this does not work:
Write in FsStateBackend
Read in RocksDbStateBackend
For questions and support in Chinese, you can use the
user...@flink.apache.org mailing list. See the instructions at
https://
Thanks for your replies~!
Could you tell me what the right command is to recover from a checkpoint
manually using the RocksDB files?
I understand that checkpointing is meant for automatic recovery,
but in this experiment I stopped the job by force (input 4 errors in nc -lk).
Is there a way to recover from inc
If hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3
was written by the RocksDbStateBackend, then you can use it to recover if
the new job is also using the RocksDbStateBackend. The command would be
$ bin/flink run -s
hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33a
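A full invocation would then look something like this (the job id and jar
path are placeholders):

$ bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/<job-id>/chk-3 path/to/your-job.jar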
Thanks a lot for the confirmation.
Eleanore
On Fri, Oct 2, 2020 at 2:42 AM Chesnay Schepler wrote:
> Yes, the patch call only triggers the cancellation.
> You can check whether it is complete by polling the job status via
> jobs/ and checking whether state is CANCELED.
>
> On 9/27/2020 7:02 PM,
Hey Till,
Thanks for the notes. Yeah, the docs don't mention anything specific to
this case, not sure if it's an uncommon one. Assigning timestamps on
conversion does solve the issue. I'm happy to take a stab at implementing
the feature if it is indeed missing and you all think it'd be worthwhile.
Hi 大森林,
if you look in the full logs you'll see
3989 [main] INFO org.apache.flink.api.java.typeutils.TypeExtractor [] -
class org.apache.flink.test.checkpointing.UserActionLogPOJO does not
contain a getter for field itemId
3999 [main] INFO org.apache.flink.api.java.typeutils.TypeExtractor [] -
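Those log lines point at the fix: Flink only treats the class as a POJO if
it has a public no-arg constructor and public getters/setters (or public
fields) for every field. A minimal sketch (only itemId comes from the logs;
the rest is illustrative):

public class UserActionLogPOJO {
    private String itemId;

    // Required: public no-arg constructor.
    public UserActionLogPOJO() {}

    // Required: a public getter/setter pair per field.
    public String getItemId() {
        return itemId;
    }

    public void setItemId(String itemId) {
        this.itemId = itemId;
    }
}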
Thanks, Timo and Piotr!
I figured out my issue. I called env.disableOperatorChaining(); in my
developer mode. Disabling operator chaining created the redundant joins.
On Mon, Sep 28, 2020 at 6:41 AM Timo Walther wrote:
> Hi Dan,
>
> unfortunately, it is very difficult to read your plan. Mayb
We have a use case wherein counters emitted by Flink are decremented after
being reported. In this way we report only the change in the counter.
Currently it seems that FlinkCounterWrapper doesn't mutate the wrapped
counter when either inc or dec is called; would this be a valid improvement?
https
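The proposed change would look roughly like this (a simplified sketch, not
the actual Flink source):

import org.apache.flink.metrics.Counter;

// A Dropwizard-facing counter that forwards mutations to the wrapped
// Flink counter, the way the other dropwizard wrappers do.
public class FlinkCounterWrapper extends com.codahale.metrics.Counter {
    private final Counter counter;

    public FlinkCounterWrapper(Counter counter) {
        this.counter = counter;
    }

    @Override public void inc() { counter.inc(); }
    @Override public void inc(long n) { counter.inc(n); }
    @Override public void dec() { counter.dec(); }
    @Override public void dec(long n) { counter.dec(n); }
    @Override public long getCount() { return counter.getCount(); }
}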
Furthermore, it looks like the rest of the dropwizard wrappers all have the
mutators implemented.
https://issues.apache.org/jira/browse/FLINK-19497
On Fri, Oct 2, 2020 at 2:30 PM Richard Moorhead
wrote:
> We have a use case wherein counters emitted by flink are decremented after
> being reporte
I appreciate Arvid's JIRA ticket and the workaround. I will monitor the
JIRA status and retry when the fix is available. I can help test the fix
when it is in a private branch. Thanks. Regards!
On Fri, Oct 2, 2020 at 3:57 AM Arvid Heise wrote:
> Hi Lian,
>
> Thank you for reporting. It looks like a
Where's the actual path?
I can only get one path from the Web UI.
Is it possible that the error in step 5 is due to a fault in my code?
Thanks for your help~
I have solved this problem under your guidance.
Please close this issue.
Many thanks!
Hi
First of all, as David said, the reason why you get "Unexpected state handle
type, expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle, but
found: class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle"
is because you used a checkpoint written by RocksDB to re