Hi Vijay,
Your steps look correct to me.
Perhaps you can double-check that the Graphite port you are sending to is
correct? The default Carbon port is 2003, and if you use the aggregator it
is 2023.
You should be able to see in both the Flink JobManager and TaskManager logs that the
metrics have been initialized.
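For reference, a minimal Graphite reporter block in flink-conf.yaml might look like the
sketch below; the reporter name "grph" follows your config, and the host is a placeholder
you would replace with your Carbon endpoint:

    metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
    metrics.reporter.grph.host: graphite.example.com   # placeholder: your Carbon host
    metrics.reporter.grph.port: 2003                   # 2023 if you send to the aggregator
    metrics.reporter.grph.protocol: TCP
    metrics.reporter.grph.interval: 60 SECONDS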
Hi Vishwas,
According to the log, heap space is 13+GB, which looks fine.
Several reasons might lead to the heap space OOM:
- Memory leak
- Not enough GC threads
- Concurrent GC starts too late
- ...
I would suggest taking a look at the GC logs.
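If GC logging is not enabled yet, one way to turn it on is via env.java.opts in
flink-conf.yaml; a minimal sketch assuming JDK 8 style flags and a writable /tmp path:

    env.java.opts: "-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/flink-gc.log"

(On JDK 9+ the unified-logging equivalent would be -Xlog:gc*:file=/tmp/flink-gc.log.)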
Thank you~
Xintong Song
On Fri, Au
Hi Team,
I am trying to export Flink's default streaming metrics using Graphite, but I
can't find them in the Graphite metrics console. Could you confirm that the steps
below are correct?
*1) Updated flink-conf.yaml*
metrics.reporter.grph.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
Thank you, Till. I had an old Hadoop version dependency in one of the
dependent jars that was causing the conflict.
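For anyone who hits the same thing, the plugin layout Till mentions below expects one
subdirectory per plugin under plugins/; a sketch of the move (the Flink home path is
an assumption):

    cd /opt/flink                                  # assumption: your Flink distribution directory
    mkdir -p plugins/s3-fs-hadoop
    mv lib/flink-s3-fs-hadoop-1.10.0.jar plugins/s3-fs-hadoop/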
On Fri, Aug 21, 2020 at 12:24 AM Till Rohrmann wrote:
> Hi Vijay,
>
> could you move the s3 filesystem
> dependency lib/flink-s3-fs-hadoop-1.10.0.jar into the plugin directory? See
> this link
Hi David
1. Thank you for fixing the links!
2. I downloaded the repo and data files in the middle of the rewrite, so
the schema mentioned in the repo did not match the files. The new exercises
are running well, but I could not adjust the servingSpeedFactor to speed up
the serving of data events.
If you did not specify a different job or cluster id, then yes, it
will read the job graph from ZooKeeper.
Distinguishing different submissions is the very purpose of job IDs.
On 23/08/2020 16:38, Alexey Trenikhun wrote:
Let’s say HA is enabled, so this part works. Now we want to upgrade
job jar
Let’s say HA is enabled, so this part works. Now we want to upgrade the job jar: we
stop the job with savepoint sp2, change the manifest to specify “-s sp2” and a newer
image, and create the K8s job again. On start, will HAServices still read the job graph
from ZooKeeper?
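For illustration, the manifest change being described might look roughly like this
excerpt of the Kubernetes Job spec (image name and container layout are assumptions;
only the image tag and the savepoint argument change, the rest stays as before):

    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: flink-jobmanager
              image: my-registry/my-flink-job:2.0   # newer image containing the upgraded job jar
              args: ["-s", "sp2"]                   # was "-s sp1"; other entrypoint args unchanged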
From: Chesn
Thanks for the reply, Yun.
I see that when I publish messages to SNS from the map operator, in case of
any errors the checkpointing mechanism takes care of "no data loss".
One scenario I could not replicate is the SDK method being unable to
send messages to SNS while remaining silent, not throwing an exception.
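A minimal sketch of the pattern under discussion, publishing from a map function and
failing loudly so that checkpointing and restart can take over; the topic ARN, client
setup and success check are assumptions, using the AWS SDK v1 API:

    import com.amazonaws.services.sns.{AmazonSNS, AmazonSNSClientBuilder}
    import com.amazonaws.services.sns.model.PublishRequest
    import org.apache.flink.api.common.functions.RichMapFunction
    import org.apache.flink.configuration.Configuration

    // Hypothetical map function: publishes each record to SNS and throws if the
    // publish did not clearly succeed, so a silent SDK problem surfaces and
    // triggers Flink's normal failure/restart (and checkpoint recovery) path.
    class SnsPublishMap(topicArn: String) extends RichMapFunction[String, String] {

      @transient private var sns: AmazonSNS = _

      override def open(parameters: Configuration): Unit = {
        sns = AmazonSNSClientBuilder.defaultClient() // assumption: credentials/region from the environment
      }

      override def map(value: String): String = {
        val result = sns.publish(new PublishRequest(topicArn, value))
        if (result.getMessageId == null || result.getMessageId.isEmpty) {
          throw new RuntimeException(s"SNS publish returned no message id for: $value")
        }
        value
      }
    }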
If HA is enabled the cluster will continue from the latest
externalized checkpoint.
Without HA it will still start from the savepoint.
On 23/08/2020 16:18, Alexey Trenikhun wrote:
Let’s say job cluster was submitted as job from save point sp1, so
spec includes “-s sp1”, job run for days, takin
Let’s say the job cluster was submitted as a job from savepoint sp1, so the spec
includes “-s sp1”. The job runs for days, taking externalized checkpoints every 5
minutes; then suddenly the pod fails, and the Kubernetes job controller restarts the
job pod using the original job spec, which still has “-s sp1”, so will the Flink job
start from sp1 again?
Piper,
1. Thanks for reporting the problem with the broken links. I've just fixed
this.
2. The exercises were recently rewritten so that they no longer use the old
file-based datasets. Now they use data generators that are included in the
project. As part of this update, the schema was modified s
Hi,
I'm trying to load a FlinkKafkaProducer sink alongside another custom sink.
While trying to restore
a running Flink app from the previous state, I get the error message below.
I am running Flink 1.9.0 with the following SBT dependency added:
"org.apache.flink" %% "flink-connector-kafka" % "1.9.0"
A job cluster is submitted as a job, not a deployment.
The built-in Job controller of Kubernetes ensures that this job finishes
successfully, and if required starts new pods.
On 23/08/2020 06:43, Alexey Trenikhun wrote:
Since it is necessary to use cancel with save point/resume from save
po
I figured it out, for those interested: I actually had an embedded record in my
Avro schema, so my loop was incorrectly building a single-dimension map with a
GenericRecord value, which was throwing off the map’s serialization. After
recursing the embedded GenericRecords to build the fully realized map, the
serialization issue went away.
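For anyone hitting the same issue, a sketch of the kind of recursion described, turning
embedded GenericRecords into nested maps before serialization (the map value type and
method name are my own):

    import scala.collection.JavaConverters._

    import org.apache.avro.generic.GenericRecord

    object AvroFlatten {
      // Recursively convert a GenericRecord into nested Scala maps, so embedded
      // records become plain Map values rather than GenericRecord values.
      def toNestedMap(record: GenericRecord): Map[String, Any] =
        record.getSchema.getFields.asScala.map { field =>
          val value = record.get(field.name()) match {
            case embedded: GenericRecord => toNestedMap(embedded) // recurse into embedded records
            case other                   => other
          }
          field.name() -> value
        }.toMap
    }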