@Override
public boolean filter(byte[] value) throws Exception {
    // Keep only non-empty records
    return value.length > 0;
}
}).sinkTo(sink);
try {
    env.execute("FlinkDelayTest");
} catch (Exception e) {
    e.printStackTrace();
}
}
}
Hi Stephan,
thanks for the quick answer! I'll try going back to an older revision.
Best,
Max
2016-10-05 12:10 GMT+02:00 Stephan Ewen :
> Hi!
>
> The master has a temporary regression due to the Work In Progress for the
> "changing parallelism of savepoints" feature.
> We'
I went back to commit 2afc092461cf68cf0f3c26a3ab4c58a7bd68cf71 on master,
and it seems to work.
2016-10-05 15:48 GMT+02:00 static-max :
> Hi Stephan,
>
> thanks for the quick answer! I'll try going back to an older revision.
>
> Best,
> Max
>
> 2016-10-05 12:10 GMT+02:00 Stephan Ewen :
Hi,
I have a low-throughput job (approx. 1000 messages per minute) that
consumes from Kafka and writes directly to HDFS. After an hour or so, I get
the following warnings in the Task Manager log:
2016-10-10 01:59:44,635 WARN org.apache.hadoop.hdfs.DFSClient
- Slow ReadProcessor
Hi,
I get many errors (multiple times per minute) in my NameNode HDFS logfile:
2016-10-11 17:17:07,596 INFO ipc.Server (Server.java:logException(2401)) -
IPC Server handler 295 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from datanode1:34872
Call#2361 Retry#0
org.apache.h
The job doesn't perform any computation, just reading from Kafka and
directly writing to HDFS.
I can also run a terasort or teragen in parallel without any problems.
Best,
Max
2016-10-12 11:32 GMT+02:00 Robert Metzger :
> Hi,
> I haven't seen this error before. Also, I didn't find anything helpful
> searching
the Flink JVMs, so that should not be an issue.
I also checked the number of open files for user "yarn" with "lsof -u yarn
| wc -l" and I got ~4000 open files when the errors occurred in the logs,
so there should be room for more.
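For what it's worth, the same check can be done from inside the JVM. This is
only a minimal sketch, assuming a Unix JVM where the com.sun.management
MXBean is exposed; FdCheck is just an illustrative class name:
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // The cast only succeeds on Unix-like platforms
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Open file descriptors: " + os.getOpenFileDescriptorCount()
                + " of max " + os.getMaxFileDescriptorCount());
    }
}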
Any idea how to solve this?
Thanks
Hi Stephan,
it's not a problem, but it makes finding other errors on my NameNode
complicated, as I get this error message every minute.
Can't we just delete the directory recursively?
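For reference, a recursive delete via the Hadoop FileSystem API would look
roughly like this. This is a sketch only, assuming the standard hadoop-common
client is on the classpath; /flink/recovery is the path mentioned in this
thread:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RecoveryCleanup {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS in the classpath configuration points at the cluster
        FileSystem fs = FileSystem.get(new Configuration());
        // The second argument 'true' makes the delete recursive
        boolean deleted = fs.delete(new Path("/flink/recovery"), true);
        System.out.println("Deleted: " + deleted);
    }
}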
Regards,
Max
2016-10-11 17:59 GMT+02:00 Stephan Ewen :
> Hi!
>
> I think to some extent this
Update: I deleted the /flink/recovery folder on HDFS and even then I get
the same Exception after the next checkpoint.
2016-11-21 21:51 GMT+01:00 static-max :
> Hi Stephan,
>
> it's not a problem, but it makes finding other errors on my NameNode
> complicated, as I get this error message every minute.
Record < List
So my question is: do you see a possible way to create a variable-length
tuple type that can grow almost indefinitely while keeping most of the
benefits of the TupleXX classes, but skipping much of the overhead of
Record (like keeping track of possible null values, etc.)?
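To make the idea concrete, here is a minimal sketch of what such a type
could look like. It is purely hypothetical, not an existing Flink class: a
container backed by a plain Object[] with typed accessors in the style of
TupleXX, and without Record's per-field null bookkeeping:
// Hypothetical sketch, not part of Flink
public class VarTuple {
    private final Object[] fields;

    public VarTuple(int arity) {
        this.fields = new Object[arity];
    }

    public int getArity() {
        return fields.length;
    }

    @SuppressWarnings("unchecked")
    public <T> T getField(int pos) {
        return (T) fields[pos];
    }

    public <T> void setField(int pos, T value) {
        fields[pos] = value;
    }
}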
Thanks a lot
Best Max
Hi Kostas,
Great post.
I think it would be good to have an additional mailing list for news.
For open source projects, it's pretty common to have an announce
mailing list. In addition to the monthly newsletter we could also
announce releases.
Best regards,
Max
On Fri, Jan 9, 2015 at 12:
ce();
}
}
}
If you have set up Flink correctly, you can also access HDFS in the
Flink job. Let me know if this is what you had in mind.
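To illustrate, reading a text file through a full hdfs:// URL would look
roughly like this. This is a sketch only; the host namenode.example.com,
the port 8020, and both paths are placeholders:
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Use the full HDFS URL, including the namenode host and port
        DataSet<String> lines =
                env.readTextFile("hdfs://namenode.example.com:8020/wc_input");
        lines.writeAsText("hdfs://namenode.example.com:8020/wc_output");
        env.execute("HDFS read example");
    }
}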
Best regards,
Max
Impressive number of votes. Congratulations, Márton!
On Fri, Jan 30, 2015 at 9:50 AM, Till Rohrmann wrote:
> Great news Márton. Congrats!
>
> On Jan 30, 2015 2:41 AM, "Henry Saputra" wrote:
>>
>> W00t! Congrats guys!
>>
>> On Thu, Jan 29, 2015 at 4:06 PM, Márton Balassi
>> wrote:
>> > Hi everyone,
You have to use the full HDFS URL. For example, if your namenode's address
is namenode.example.com at port <port>, then use
hdfs://namenode.example.com:<port>/wc_input.
Kind regards,
Max
On Tue, Feb 24, 2015 at 11:13 AM, Giacomo Licari
wrote:
> Hi guys,
> I'm Giacomo from Italy, I'm n
Hi!
There is a pending pull request for this feature. If that is what you
had in mind:
https://github.com/apache/flink/pull/410
Best regards,
Max
On Mon, Mar 2, 2015 at 5:11 PM, Alexander Alexandrov
wrote:
> AFAIK at the moment this is not supported but at the TU Berlin we have a
>
What can cause this state, and did the job actually take a savepoint when
this happens? What atomicity guarantees, if any, apply when writing
savepoints?
Best,
Max
attempt to resume from previous state,
as the checkpoint was no longer referenced.
We understand the root cause of the operator error, but we would expect
that the externalized checkpoint reference would be retained in this
failure mode. Has anyone else run into this issue?
Best,
Max Feng
high availability data for job
.
[4] No checkpoint found during restore.
Best,
Max Feng
. Are
there any obvious reasons we should not do this?
- Max
On Sun, Mar 2, 2025 at 5:41 PM Max Feng wrote:
> Hi,
>
> We're running Flink 1.20, native kubernetes application-mode clusters, and
> we're running into an issue where clusters are restarting without
> checkp