Hi,
We have a Flink job platform which resubmits a job when it fails, without
platform user involvement. Today a resubmit failed because
of the error below; I changed akka.framesize, and the resubmit succeeded.
My question is: nothing changed in the job, the jar, the program,
Hi
When a checkpoint should be deleted, FsCompletedCheckpointStorageLocation.
disposeStorageLocation will be called.
Inside it, fs.delete(exclusiveCheckpointDir, false) will do the delete
action. I wonder why the recursive parameter is set to false, as the
exclusiveCheckpointDir is truly a directory
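For illustration, java.nio (not Flink's FileSystem API, but with the same non-recursive semantics) also refuses to delete a non-empty directory; a minimal sketch:

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NonRecursiveDelete {
    // Returns the outcome of a non-recursive delete on a non-empty directory,
    // analogous to fs.delete(exclusiveCheckpointDir, false).
    static String tryNonRecursiveDelete() throws IOException {
        Path dir = Files.createTempDirectory("chk-1");
        Files.createFile(dir.resolve("_metadata")); // directory is now non-empty
        try {
            Files.delete(dir); // non-recursive: fails if children remain
            return "deleted";
        } catch (DirectoryNotEmptyException e) {
            return "DirectoryNotEmptyException";
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tryNonRecursiveDelete()); // DirectoryNotEmptyException
    }
}
```

So a non-recursive delete only succeeds once all files inside the directory have already been removed individually.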
the code CompletedCheckpoint.discard(), Flink is
> removing all the files in StateUtil.bestEffortDiscardAllStateObjects, then
> deleting the directory.
>
> Which files are left over in your case?
> Do you see any exceptions on the TaskManagers?
>
> Best,
> Robert
>
> On Wed, Nov 11, 2020 a
failed because of that.
> Can you share the full exception to check this?
> And probably check what files exist there as Robert suggested.
>
> Regards,
> Roman
>
>
> On Tue, Nov 17, 2020 at 10:38 AM Joshua Fan
> wrote:
>
>> Hi Robert,
>>
>> When
Hi,
Does the Flink community have a plan to support Flink SQL UDFs in any
language? For example, a UDF in C or PHP. In my company, many
developers do not know Java or Scala; they use C in their usual work.
Now we have a workaround to support this situation by creating a process
running the
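A minimal sketch of such a process-based bridge, using ProcessBuilder with `tr` as a stand-in for the external C program (assumes a POSIX environment; the command and method names are illustrative, not Flink API):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;

public class ProcessUdf {
    // Pipes the input through an external binary and returns its first
    // output line, the way a Java UDF wrapper could delegate to a C program.
    static String callExternal(String input) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("tr", "a-z", "A-Z").start();
        try (OutputStream os = p.getOutputStream()) {
            os.write(input.getBytes("UTF-8")); // closing stdin signals EOF
        }
        String result = new BufferedReader(
                new InputStreamReader(p.getInputStream())).readLine();
        p.waitFor();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callExternal("hello")); // HELLO
    }
}
```

In a real setup the process would be long-lived and reused across invocations, since spawning one process per row would be far too slow.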
Hi Jark and Benchao
I have learned from your previous email how to do pv/uv in Flink SQL.
One is to make a MMdd grouping; the other is to make a day window.
Thank you all.
I have a question about the result output. For MMdd grouping, every
minute the database would get a record, and m
Hi
I have learned from the community how to do pv/uv in Flink SQL. One is
to make a MMdd grouping; the other is to make a day window. Thank you
all.
I have a question about the result output. For MMdd grouping, every
minute the database would get a record, and many records would be in
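Setting Flink aside, the per-day aggregates that a day-key grouping maintains (pv = event count, uv = distinct user count) can be sketched in plain Java; the field layout and names here are illustrative:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PvUv {
    // Computes {pv, uv} for one day key over a list of {dayKey, userId} events.
    static long[] pvUv(List<String[]> events, String day) {
        long pv = 0;
        Set<String> users = new HashSet<>();
        for (String[] e : events) {        // e = {dayKey, userId}
            if (e[0].equals(day)) {
                pv++;                      // pv: every event counts
                users.add(e[1]);           // uv: distinct users only
            }
        }
        return new long[]{pv, users.size()};
    }

    public static void main(String[] args) {
        List<String[]> events = Arrays.asList(
                new String[]{"20201117", "u1"},
                new String[]{"20201117", "u2"},
                new String[]{"20201117", "u1"});
        long[] r = pvUv(events, "20201117");
        System.out.println("pv=" + r[0] + " uv=" + r[1]); // pv=3 uv=2
    }
}
```

The grouped query emits an updated version of this pair on every trigger, which is why the database keeps receiving new records for the same day key.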
for your information.
Yours sincerely
Josh
Joshua Fan wrote on Tue, May 18, 2021 at 10:15 AM:
> Hi Stephan, Till
>
> Recently, I tried to upgrade a Flink job from 1.7 to 1.11; unfortunately,
> a weird problem appeared: "SIGSEGV (0xb) at pc=0x0025,
> pid=135306, tid=140439001388
help us narrow down the culprit. Moreover, you could try to run
> your job and Flink with Java 11 now.
>
> Cheers,
> Till
>
> On Tue, May 18, 2021 at 5:10 AM Joshua Fan wrote:
>
>> Hi all,
>>
>> Most of the posts says that "Most of the times, th
Hi All,
I'd like to know the difference between a DataStream window function and CEP
within; I googled this issue but found no useful information.
Under CEP within, is there a tumbling window, a sliding window, or just
a process function?
Your explanation will be truly appreciated.
Yours sinc
against the
> within interval. You can refer to [1] for details.
>
> Regards,
> Dian
>
> [1]
> https://github.com/apache/flink/blob/459fd929399ad6c80535255eefa278564ec33683/flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/NFA.java#L251
>
>
> 在 2019年9月1
Hi Till
After getting your advice, I checked the log again. It seems not wholly the
same as the condition you mentioned.
I would like to summarize the story in the log below.
At one time, the ZK connection was not stable, so suspended-reconnected
happened 3 times.
After the first suspende
Sorry, I forgot the version: we run Flink 1.7 on YARN in HA mode.
On Fri, Oct 11, 2019 at 12:02 PM Joshua Fan wrote:
> Hi Till
>
> After getting your advice, I checked the log again. It seems not wholly the
> same as the condition you mentioned.
>
> I would like to summariz
Hi,
I'd like to submit a job with dependency jars by flink run, but it failed.
Here is the script,
/usr/bin/hadoop/software/flink-1.4.2/bin/flink run \
-m yarn-cluster -yn 1 -ys 8 -yjm 2148 -ytm 4096 -ynm jarsTest \
-c StreamExample \
-C file:/home/work/xxx/lib/commons-math3-3.5.jar \
-C file:/h
Hi, all
Frequently, for some clusters, there is no data from the Task Manager in the UI,
as the picture shows below.
[image: tm-hang.png]
but the cluster and the job are running well; just no metrics can be fetched.
Is there anything to do to improve this?
Thanks for your assistance.
Yours sincerely
Joshua
Hi Niels,
Probably not. An operator does not begin a checkpoint until it gets all the
barriers from all the upstream sources; if one source cannot send a
barrier, the downstream operator cannot do a checkpoint, FYI.
Yours sincerely
Joshua
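The alignment rule described above can be sketched as a toy model (this is not Flink's actual CheckpointBarrierHandler, just the counting idea):

```java
import java.util.HashSet;
import java.util.Set;

public class BarrierAlignment {
    // An operator snapshots only after a barrier for the current checkpoint
    // has arrived on every input channel.
    private final Set<Integer> seen = new HashSet<>();
    private final int numInputs;

    BarrierAlignment(int numInputs) {
        this.numInputs = numInputs;
    }

    /** Returns true once barriers from all input channels have arrived. */
    boolean onBarrier(int channel) {
        seen.add(channel);
        return seen.size() == numInputs;
    }

    public static void main(String[] args) {
        BarrierAlignment op = new BarrierAlignment(3);
        System.out.println(op.onBarrier(0)); // false: waiting on 2 channels
        System.out.println(op.onBarrier(1)); // false: waiting on channel 2
        System.out.println(op.onBarrier(2)); // true: checkpoint can start
    }
}
```

If one channel never delivers its barrier, `onBarrier` never returns true, which is exactly why a stalled source blocks downstream checkpoints.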
On Wed, Oct 17, 2018 at 4:58 PM Niels van Kaam wrote:
> Hi
nothing abnormal found.
Can anyone give some hints?
Yours sincerely
Joshua
On Wed, Oct 17, 2018 at 5:05 PM Joshua Fan wrote:
> Hi,all
>
> Frequently, for some cluster, there is no data from Task Manager in UI,
> as the picture shows below.
> [image: tm-hang.png]
> but the c
Hi Vino,
the version is 1.4.2.
Yours
Joshua
On Tue, Oct 23, 2018 at 7:26 PM vino yang wrote:
> Hi Joshua,
>
> Which version of Flink are you using?
>
> Thanks, vino.
>
> Joshua Fan wrote on Tue, Oct 23, 2018 at 5:58 PM:
>
>> Hi All
>>
>> came into new situations
Hi, Till and users,
There is weird behavior in the ActorSystem shutdown in Akka on our Flink
platform.
We use Flink 1.4.2 on YARN as our deploy mode, and we use a long-running
agent based on YarnClient to submit Flink jobs to YARN. Users can
connect to the agent to submit a job and disconnect
Hi All
I want to build a custom flink image to run on k8s, below is my Dockerfile
content:
> FROM apache/flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/lib
> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/lib
>
I just put the s3 fs dependency to the {flink home}/lib, and
stem files into the plugins [1]
> directory to avoid classloading issues.
> Also, you don't need to build custom images if you want to use built-in
> plugins [2]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/filesystems/plugins/
>
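Following that advice, a sketch of the Dockerfile from the original question reworked to use the plugins directory instead of lib/ (the subdirectory names are illustrative; each plugin jar must sit in its own folder under plugins/ to get an isolated classloader):

```dockerfile
FROM apache/flink:1.13.1-scala_2.11
# Each filesystem plugin goes in its own subfolder under plugins/, not in lib/,
# so it is loaded by a separate plugin classloader.
ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins/s3-fs-hadoop/
ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins/s3-fs-presto/
```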
It seems I set a wrong high-availability.storageDir:
s3://flink-test/recovery works, but s3:///flink-test/recovery does not;
one / should be removed.
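The extra slash matters because the bucket has to be parsed as the URI authority; a java.net.URI sketch of the two forms:

```java
import java.net.URI;

public class S3UriCheck {
    // Returns the URI authority (the bucket for an s3 URI), or null if absent.
    static String authorityOf(String uri) {
        return URI.create(uri).getAuthority();
    }

    public static void main(String[] args) {
        // "s3://flink-test/recovery": the bucket lands in the authority.
        System.out.println(authorityOf("s3://flink-test/recovery"));  // flink-test
        // "s3:///flink-test/recovery": the extra slash leaves the authority
        // empty, so no bucket can be resolved from the URI.
        System.out.println(authorityOf("s3:///flink-test/recovery")); // null
    }
}
```

With the triple slash, "flink-test" becomes part of the path rather than the bucket, which is why the storage location cannot be resolved.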
Joshua Fan wrote on Thu, Aug 5, 2021 at 10:43 AM:
> Hi Robert, Tobias
>
> I have tried many ways to build and validate the image.
>
> 1.put the s
from. Hope
you all are doing it well.
Joshua Fan wrote on Thu, Aug 5, 2021 at 11:42 AM:
> It seems I set a wrong high-availability.storageDir:
> s3://flink-test/recovery works, but s3:///flink-test/recovery does not;
> one / should be removed.
>
> Joshua Fan wrote on Thu, Aug 5, 2021 at 10:43 AM:
>
>> Hi
Hi All
My Flink version is 1.11, the state backend is RocksDB, and I want to write
a Flink job to implement an adaptive window. I wrote a Flink DAG like below:
> DataStream entities = env.addSource(new
> EntitySource()).setParallelism(1);
>
> entities.keyBy(DataEntity::getName).process(new
> Enti
Hi David,
Thanks for your reply.
Yes, for keyed state, every state is referenced by a particular key, but I
would guess it is a Flink SDK issue; I mean, the keyed state may be saved
as (key, keyed state). As for my situation, it is (key, mapstate(UK,UV)),
and I think the key of this pair is not easy to
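This is not Flink's actual RocksDB key layout, but the idea of flattening (key, userKey) into one composite key per entry, instead of storing a whole map as a single value, can be sketched as:

```java
import java.util.Map;
import java.util.TreeMap;

public class CompositeKeyMap {
    // One store entry per (currentKey, userKey) pair; the "#" separator
    // and String keys are illustrative only.
    private final Map<String, String> store = new TreeMap<>();

    void put(String key, String userKey, String value) {
        store.put(key + "#" + userKey, value);
    }

    String get(String key, String userKey) {
        return store.get(key + "#" + userKey);
    }

    public static void main(String[] args) {
        CompositeKeyMap state = new CompositeKeyMap();
        state.put("entityA", "uk1", "v1");
        state.put("entityA", "uk2", "v2");
        System.out.println(state.get("entityA", "uk1")); // v1
    }
}
```

With this layout, individual map entries can be read and updated without deserializing the entire map for a key.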
the guarantees
> you're looking for. Then you also won't need to override the snapshot /
> initialize state methods, which simplifies the code a lot.
>
> D.
>
> On Wed, Dec 29, 2021 at 2:08 PM Joshua Fan wrote:
>
>> Hi David,
>> Thanks for your reply.
>>
Hi,
It is very weird that there is no log file for the JM and TM when running a
Flink job on YARN after updating Flink to 1.7. On Flink 1.4.2, everything is OK.
I checked the log directory; there were jobmanager.error and jobmanager.out,
but no jobmanager.log, though the log messages which should exist in
jo
Wow, I met a similar situation using Flink 1.7 on YARN.
There was no jobmanager.log on the node, but jobmanager.out and
jobmanager.error, and jobmanager.error contained the log messages. So there
was nothing in the web UI.
I do not know why this happened. By the way, I used logback to do the log stuff.
JIRA exists for this issue?
>
> @Joshua: Does this also happen if you use log4j?
>
> On 26.12.2018 11:33, Joshua Fan wrote:
>
> wow, I met similar situation using flink 1.7 on yarn.
>
> there was no jobmanager.log on the node but jobmanager.out and
> jobmanager.error, and job
Hi, Gyula
I met a similar situation.
We used Flink 1.4 before, and everything was OK.
Now we upgraded to Flink 1.7 and use non-legacy mode, and something seems
wrong; it all comes down to it being impossible to get the jobmanagerGateway
at the client side. When I create a cluster without a job, I descr
ll
>
> On Wed, Dec 26, 2018 at 11:19 AM Joshua Fan
> wrote:
>
>> Hi,
>>
>> It is very weird that there is no log file for JM and TM when run flink
>> job on yarn after updated flink to 1.7.on Flink 1.4.2, everything is OK. I
>> checked the log directory,
acy clusters, for 1.5+ you should use the
>> RestClusterClient instead.
>>
>> On 03.01.2019 08:32, Joshua Fan wrote:
>> > Hi, Gyula
>> >
>> > I met a similar situation.
>> >
>> > We used flink 1.4 before, and everything is ok.
>> >
Hi,
I want to test Flink SQL locally by consuming Kafka data in Flink 1.7, but
it throws an exception like below.
Exception in thread "main"
>> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find
>> a suitable table factory for
>> 'org.apache.flink.table.factories.Stream
There seems to be a
> mismatch between your screenshot and the exception.
>
> Regards,
> Timo
>
> Am 11.01.19 um 15:43 schrieb Joshua Fan:
>
> Hi,
>
> I want to test flink sql locally by consuming kafka data in flink 1.7, but
&
Hi Zhenghua
Yes, the topic was polluted somehow. After I created a new topic to consume,
it is OK now.
Yours sincerely
Joshua
On Tue, Jan 15, 2019 at 4:28 PM Zhenghua Gao wrote:
> Maybe you're generating a non-standard JSON record.
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-a
Hi
As known, TableFactoryService has many methods to find a suitable service
to load. Some of them use a user-defined classloader; the others just use
the default classloader.
Now I use ConnectTableDescriptor to registerTableSource in the environment,
which uses TableFactoryUtil to load the service,
tableSource);
>
>
> Best, Hequn
>
> On Tue, Jan 15, 2019 at 6:43 PM Joshua Fan wrote:
>
>> Hi
>>
>> As known, TableFactoryService has many methods to find a suitable service
>> to load. Some of them use a user-defined classloader; the others just use
>&
, Jan 16, 2019 at 10:24 AM Joshua Fan wrote:
> Hi Hequn
>
> Yes, the TableFactoryService has a proper method. As I
> use StreamTableDescriptor to connect to Kafka, StreamTableDescriptor
> actually uses ConnectTableDescriptor which calls TableFactoryUtil to do
> service load, an
Hi, all
As the title says, the submission always hangs when the cache
file is not reachable, actually because the RestClient uses a java.io.File
to get the cache file.
I use RestClusterClient to submit jobs in Flink 1.7.
Below are the instructions shown in
https://ci.apache.org/projects/fli
Hi all
When I run the Python example in Flink 1.7, it always gets an exception.
The command is: ./bin/pyflink.sh ./examples/python/streaming/word_count.py
The return message is:
2019-05-17 11:43:22,900 INFO org.apache.hadoop.yarn.client.RMProxy
- Connecting to ResourceManager at
.
I am not familiar with python. Thanks for your help.
On Fri, May 17, 2019 at 11:47 AM Joshua Fan wrote:
> Hi all
>
> When I run the Python example in Flink 1.7, it always gets an exception.
>
> The command is: ./bin/pyflink.sh ./examples/python/streaming/word_count.py
>
>
Zhijiang
>
> --
> From:Chesnay Schepler
> Send Time: Fri, Jun 21, 2019 16:34
> To:zhijiang ; Joshua Fan <
> joshuafat...@gmail.com>
> Cc:user ; Till Rohrmann
> Subject:Re: Maybe a flink bug. Job keeps in FAILI