Hello Admin,
When I run "tail -f log/flink-*-taskexecutor-*.out" on the
command line, I get the following error:
"Invalid maximum direct memory size: -XX:MaxDirectMemorySize=8388607T
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine."
Hi Alexander,
Broadly speaking, what you are doing right now is in line with what is
currently possible with Apache Flink. Can you share a little more
information about your setup (Kubernetes or Flink standalone?
Job mode or session mode?)? You might find Gary's Flink Forward talk [1]
interesting. He demo
Hi,
Can someone please help me understand how the exactly-once semantics
work with Kafka 0.11 in Flink?
Thanks,
Harsh
On Tue, Sep 11, 2018 at 10:54 AM Harshvardhan Agrawal <
harshvardhan.ag...@gmail.com> wrote:
> Hi,
>
> I was going through the blog post on how the TwoPhaseCommitSink function works
There is a Pull Request to enable the new streaming sink for Hadoop < 2.7,
so it may become an option in the next release.
Thanks for bearing with us!
Best,
Stephan
On Sat, Sep 22, 2018 at 2:27 PM Paul Lam wrote:
>
> Hi Stephan!
>
> It's bad that I'm using Hadoop 2.6, so I have to stick to th
Hi Vino, and all,
I tried to avoid the step that gets the file status, and found that the
problem is not there anymore. I guess doing that for every single one of
100K+ files on S3 caused some issue with checkpointing.
Still trying to find the cause, but with lower priority now.
Thanks for your help
Hi Harshvardhan,
In fact, Flink does not cache data between two checkpoints; it only
calls different operations at different points in time. These operations
are provided by the Kafka client, so it helps to have a deeper
understanding of how Kafka producer transactions work.
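To make that call order concrete, here is a minimal, framework-free sketch of the two-phase commit pattern that Flink's TwoPhaseCommitSinkFunction drives (buffer records, pre-commit on the checkpoint barrier, commit once the checkpoint completes). The class and method bodies below are simplified stand-ins for illustration, not Flink's or the Kafka client's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy transactional sink mimicking the two-phase commit lifecycle.
// Names and internals are hypothetical; in Flink the equivalent hooks are
// beginTransaction / preCommit / commit / abort, and the real transaction
// is held by the Kafka producer, not an in-memory list.
public class TwoPhaseCommitSketch {
    static class Transaction {
        final List<String> buffered = new ArrayList<>();
        boolean preCommitted = false;
    }

    private final List<String> committed = new ArrayList<>();
    private Transaction current = new Transaction();

    // Called for every record between two checkpoints: data is only buffered.
    void invoke(String record) { current.buffered.add(record); }

    // Called on the checkpoint barrier: flush and pre-commit the open
    // transaction, then immediately open a fresh one for the next interval.
    Transaction preCommit() {
        current.preCommitted = true;
        Transaction pending = current;
        current = new Transaction();
        return pending;
    }

    // Called once the checkpoint is globally complete: make data visible.
    void commit(Transaction t) {
        if (t.preCommitted) committed.addAll(t.buffered);
    }

    List<String> committedRecords() { return committed; }

    public static void main(String[] args) {
        TwoPhaseCommitSketch sink = new TwoPhaseCommitSketch();
        sink.invoke("a");
        sink.invoke("b");
        Transaction pending = sink.preCommit(); // checkpoint barrier arrives
        sink.invoke("c");                       // next interval keeps writing
        sink.commit(pending);                   // checkpoint completed
        System.out.println(sink.committedRecords()); // prints [a, b]
    }
}
```

Note how "c" stays invisible until its own checkpoint completes; that is the sense in which nothing is "cached" between checkpoints, only held inside an uncommitted transaction.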
I
Hi,
According to the instructions in the script:
# Long.MAX_VALUE in TB: This is an upper bound, much less direct
memory will be used
TM_MAX_OFFHEAP_SIZE="8388607T"
I think you may need to confirm that both your operating system and the
JDK installed on the TaskManager are 64-bit.
Thanks, vino.
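For reference, the arithmetic behind that script value can be checked in plain Java (the class name here is just for illustration): 8388607 TB is the largest whole-terabyte size that still fits in a signed 64-bit long, which is why a 32-bit JVM rejects it as unrepresentable:

```java
public class MaxDirectMemoryCheck {
    public static void main(String[] args) {
        // 8388607 TB expressed in bytes: 8388607 * 2^40
        long maxOffHeapBytes = 8388607L << 40;
        System.out.println(maxOffHeapBytes);   // prints 9223370937343148032
        System.out.println(Long.MAX_VALUE);    // prints 9223372036854775807
        // Long.MAX_VALUE is 2^63 - 1, just under 8388608 TB, so 8388607T is
        // the largest whole-TB value a 64-bit JVM can accept for this flag.
        // On HotSpot you can also check the data model from code:
        System.out.println(System.getProperty("sun.arch.data.model"));
    }
}
```

A quick sanity check from the shell is `java -version`, which on a 64-bit JDK mentions "64-Bit Server VM" in its output.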
Sarabjyo
Hi all,
I am getting this error with Flink 1.6.0, please help me.
2018-09-23 07:15:08,846 ERROR
org.apache.kafka.clients.producer.KafkaProducer - Interrupted
while joining ioThread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at
We started to see the same errors after upgrading to Flink 1.6.0 from
1.4.2. We have one JM and 5 TMs on Kubernetes. The JM is running in HA
mode. The TaskManagers sometimes lose their connection to the JM and hit
the following error, like you did.
*2018-09-19 12:36:40,687 INFO
org.apache.flink.runtime.taskexecut
What are you trying to do? Can you share some code?
This is the reason for the exception:
Proceeding to force close the producer since pending requests could not be
completed within timeout 9223372036854775807 ms.
That timeout is Long.MAX_VALUE, i.e. the producer was closed with no
effective deadline while requests were still pending.
On Mon, 24 Sep 2018, 9:23 yuvraj singh, <19yuvrajsing...@gmail.com> wrote:
> Hi a