> command ?
>
> Best,
> Yun
>
>
> -- Original Mail --
> *Sender:* 赵一旦
> *Send Date:* Sun Feb 7 16:13:57 2021
> *Recipients:* Till Rohrmann
> *CC:* Robert Metzger, user
> *Subject:* Re: flink kryo exception
>
>> It may also be that the
task has some self-defined source or implementation;
I do not know whether the problem has something to do with it.
赵一旦 wrote on Sun, Feb 7, 2021 at 4:05 PM:
> The first problem is critical, since the savepoint does not work.
> The second problem, in which I changed the solution, removed the 'Map
out the problem
> you are experiencing. Thanks a lot for your help.
>
> Cheers,
> Till
>
> On Fri, Feb 5, 2021 at 1:03 PM 赵一旦 wrote:
>
>> Yeah, and if it is different, why does my job run normally? The problem
>> only occurs when I stop it.
>>
>> Robert
…distributed standalone setup that some files are different)
>
>
> On Fri, Feb 5, 2021 at 12:00 PM 赵一旦 wrote:
>
>> Flink 1.12.0; only using aligned checkpoints; standalone cluster.
>>
>>
>>
>> Robert Metzger wrote on Fri, Feb 5, 2021 at 6:52 PM:
>>
>>> A
etc.)
>
> On Fri, Feb 5, 2021 at 10:36 AM 赵一旦 wrote:
>
>> I do not think this is a code-related problem anymore; maybe it is
>> some bug?
>>
>> 赵一旦 wrote on Fri, Feb 5, 2021 at 4:30 PM:
>>
>>> Hi all, I find that the failure always occurred in the second
to use a simple POJO instead of inheriting from a HashMap?
>
> The stack trace looks as if the job fails while deserializing some key of
> your MapRecord map.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/types_serialization.html#most-frequent-issues
>
>
I do not think this is a code-related problem anymore; maybe it is some
bug?
赵一旦 wrote on Fri, Feb 5, 2021 at 4:30 PM:
> Hi all, I find that the failure always occurs in the second task, after
> the source task. So I did something in the first chained task: I transform
> the 'Map' ba
;
}
}
}
Class UserAccessLog:
public class UserAccessLog extends AbstractRecord {
private MapRecord d; // I think this is related to the problem...
... ...
}
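Following Till's earlier suggestion to use a simple POJO instead of inheriting from HashMap, one way to avoid the Kryo fallback is to wrap the map in a plain POJO field. This is only a sketch; the field name `d` comes from the snippet above, while the key/value types and accessor names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical rewrite: instead of a MapRecord class that extends HashMap,
// hold a plain Map field inside a POJO so Flink can use its PojoSerializer
// rather than falling back to Kryo.
public class UserAccessLog {
    // Plain field, not an inherited HashMap (types are assumed here).
    private Map<String, String> d;

    // Flink POJO rules: a public no-arg constructor and public
    // getters/setters (or public fields) for every field.
    public UserAccessLog() {
        this.d = new HashMap<>();
    }

    public Map<String, String> getD() { return d; }
    public void setD(Map<String, String> d) { this.d = d; }

    public static void main(String[] args) {
        UserAccessLog log = new UserAccessLog();
        log.getD().put("uid", "42");
        System.out.println(log.getD()); // prints {uid=42}
    }
}
```

With this shape, the map contents are serialized through the POJO/Map serializers, so a renamed or relocated class no longer corrupts savepoint restore the way a Kryo-registered subclass can.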
赵一旦 wrote on Wed, Feb 3, 2021 at 6:43 PM:
> Actually the exception is different every time I stop the job.
> Such as:
> (
>
> From the stack trace, it looks as if the class g^XT is not on the class
> path.
>
> Cheers,
> Till
>
> On Wed, Feb 3, 2021 at 10:30 AM 赵一旦 wrote:
>
>> I have a job; its checkpoints and savepoints all work fine.
>> But if I stop the job using 'stop -p'
I have a job; its checkpoints and savepoints all work fine.
But if I stop the job using 'stop -p', then after the savepoint is
generated, the job fails. Here is the log:
2021-02-03 16:53:55,179 WARN org.apache.flink.runtime.taskmanager.Task
[] - ual_ft_uid_subid_SidIncludeFilter ->
As the title says, my query SQL is very simple: it just selects all columns
from a Hive table (version 1.2.1; ORC format). A few seconds after the SQL
is submitted, the JobManager fails. Here is the JobManager's log.
Can anyone help with this problem?
2021-01-24 04:41:24,952 ERROR
org.apa
I think you need to provide all the parallelism information, such as the
operator info 'Id: b0936afefebc629e050a0f423f44e6ba, maxParallelism: 4096'.
What is the parallelism? The maxParallelism may be derived from the
parallelism you have set.
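For context on how a maxParallelism like 4096 can appear without being set explicitly: to my understanding, Flink 1.12 derives a default from the operator parallelism (see `KeyGroupRangeAssignment.computeDefaultMaxParallelism`). The sketch below mirrors that derivation; treat the exact bounds (128 and 32768) and rounding as assumptions to verify against your Flink version:

```java
// Sketch of Flink's default max-parallelism derivation (assumed behavior):
// round (parallelism + parallelism/2) up to a power of two, then clamp
// to the range [128, 32768].
public class DefaultMaxParallelism {

    // Smallest power of two >= x, for x >= 1.
    static int roundUpToPowerOfTwo(int x) {
        return x <= 1 ? 1 : Integer.highestOneBit(x - 1) << 1;
    }

    static int computeDefaultMaxParallelism(int operatorParallelism) {
        int candidate = roundUpToPowerOfTwo(
                operatorParallelism + (operatorParallelism / 2));
        return Math.min(Math.max(candidate, 128), 32768);
    }

    public static void main(String[] args) {
        // Small parallelism clamps to the lower bound of 128.
        System.out.println(computeDefaultMaxParallelism(1));
        // parallelism 100 -> 150 -> rounded up to 256.
        System.out.println(computeDefaultMaxParallelism(100));
    }
}
```

If this derivation holds, a maxParallelism of 4096 would correspond to a configured parallelism somewhere in the low thousands, which is why the original parallelism settings matter for diagnosing the state restore.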
Arvid Heise wrote on Fri, Jan 22, 2021 at 11:03 PM:
> Hi Lu,
>
> if y
If you changed the consumer group in your new job, the group id will be the
new one you set.
The job will continue to consume the topics from the savepoint/checkpoint
offsets you specified, no matter whether the group id is the original one.
Rex Fenley wrote on Mon, Jan 18, 2021 at 12:53 PM:
> Hello,
>
> When using th
I've hit this problem many times. When the task suddenly starts to back
pressure, the back-pressured node no longer sends any records unless the
task is restarted. But I can confirm that it's not due to high load.
During the back pressure period, the CPU utilization of the machine is all
red
I think what you need is
http://kafka.apache.org/documentation/#consumerconfigs_isolation.level .
The isolation.level setting's default value is read_uncommitted. So maybe
you are not using the default setting?
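For a consumer that should only see records from committed transactions, isolation.level has to be set explicitly, since the Kafka default is read_uncommitted. A small sketch (the broker address is a placeholder):

```java
import java.util.Properties;

// Sketch: consumer properties for reading only committed records from a
// transactional producer. read_committed hides records belonging to open
// or aborted transactions; the Kafka default is read_uncommitted.
public class IsolationLevelExample {
    public static Properties committedReadProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("isolation.level", "read_committed");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(committedReadProps().getProperty("isolation.level"));
    }
}
```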
赵一旦 wrote on Tue, Jan 5, 2021 at 9:10 PM:
> I do not have this problem, so I guess it is
I do not have this problem, so I guess it is related to the configuration
of your Kafka producer and consumer, and maybe also the Kafka topic or
server properties.
Arvid Heise wrote on Tue, Jan 5, 2021 at 6:47 PM:
> Hi Daniel,
>
> Flink commits transactions on checkpoints while Kafka Streams/connect
>