Hi Ardhani,
Thanks for your kind reply.
Our team used the metrics you provided before, but they disappeared after we
migrated to the new KafkaSource.
We initialize the KafkaSource with the following code.
```scala
val consumer: KafkaSource[T] = KafkaSource.builder()
  .setProperties(properties)
  .setTopics(topic)
```
Hi Natu,
the message is of course a bit unfortunate and misleading, but the issue
here isn't that multiple connectors are found, but that none are found. The
repository you linked to implements a connector using the old connector
stack, but Ververica Platform only supports the new stack, see [1].
Hi Team,
I am trying to find a way to ship files from AWS S3 for a Flink streaming job
running on AWS EMR. What I need to ship is the following:
1) the application jar
2) the application property file
3) a custom flink-conf.yaml
4) application-specific log4j configuration
Please let me know the options.
Thanks,
Vijay
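One pattern that may apply here (a sketch with a hypothetical bucket and paths, not a verified recipe): pull the artifacts onto the EMR master node, point Flink at the custom config directory via `FLINK_CONF_DIR`, and ship the directory to the YARN containers with `--yarnship`:

```shell
# Hypothetical bucket and paths; adjust to your setup.
aws s3 cp s3://my-bucket/app/app.jar /tmp/app/app.jar
aws s3 cp s3://my-bucket/app/conf/ /tmp/app/conf/ --recursive  # property file, flink-conf.yaml, log4j.properties

# The custom flink-conf.yaml and log4j config are picked up from FLINK_CONF_DIR;
# --yarnship copies the conf directory into each YARN container.
export FLINK_CONF_DIR=/tmp/app/conf
flink run -m yarn-cluster --yarnship /tmp/app/conf /tmp/app/app.jar
```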
I am running with one job manager and three task managers.
Each task manager receives at most 8 GB of data, but the job is timing
out.
What parameters must I adjust?
Sink: back fill db sink) (15/32) (50626268d1f0d4c0833c5fa548863abd)
switched from SCHEDULED to FAILED on [unassigned resource]
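The `switched from SCHEDULED to FAILED on [unassigned resource]` line suggests a slot request timed out before resources became available. A few `flink-conf.yaml` knobs that commonly matter in this situation (values are illustrative, not recommendations):

```yaml
# flink-conf.yaml (illustrative values)
slot.request.timeout: 600000        # ms to wait for a slot before failing the request
heartbeat.timeout: 180000           # ms before a silent TaskManager is considered dead
akka.ask.timeout: 60 s              # RPC timeout between JobManager and TaskManagers
taskmanager.memory.process.size: 8g # give each TaskManager headroom for its share of data
```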
Hi Guowei:
Thanks for your help, it solves my problem.
Best regards
At 2021-05-24 15:15:57, "Guowei Ma" wrote:
Hi
I think you are right that the metrics are reset after the job restarts. That
is because the metrics are only stored in memory.
I think you could store the metrics
Jacky is right. It's a known issue and will be fixed in FLINK-21511.
Best,
Yangze Guo
On Tue, May 25, 2021 at 8:40 AM Jacky Yin 殷传旺 wrote:
>
> If you are using es connector 6.*, actually there is a deadlock bug if the
> backoff is enabled. The 'retry' and 'flush' share one thread pool which has
If you are using es connector 6.*, actually there is a deadlock bug if the
backoff is enabled. The 'retry' and 'flush' share one thread pool which has
only one thread. Sometimes the task holding the thread tries to acquire the
semaphore, which is held by the task trying to get the thread. Therefore
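The failure mode described above can be reproduced in miniature with a one-thread executor and a semaphore (a standalone sketch, not the connector's actual code; all names are illustrative):

```scala
import java.util.concurrent.{Callable, Executors, Semaphore, TimeUnit}

object SingleThreadDeadlockSketch {
  // The "flush" task occupies the pool's only thread and waits for a permit
  // that only the queued "retry" task would release. With an unbounded wait
  // this deadlocks; here tryAcquire times out so the sketch terminates.
  def flushGotPermit(): Boolean = {
    val pool = Executors.newFixedThreadPool(1)
    val permits = new Semaphore(0)

    val flush = pool.submit(new Callable[Boolean] {
      override def call(): Boolean = permits.tryAcquire(2, TimeUnit.SECONDS)
    })
    pool.submit(new Runnable {
      override def run(): Unit = permits.release() // cannot run while flush holds the thread
    })

    val got = flush.get() // false: the release could not happen in time
    pool.shutdown()
    got
  }
}
```

With a real blocking `acquire()` instead of `tryAcquire`, the flush task never completes, which matches the deadlock described above.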
Hi,
I am running Flink v1.12.2 in standalone mode on Kubernetes, with
Kubernetes-native HA enabled.
HA works well when either the jobmanager or a taskmanager pod is lost or crashes.
But when I restart the master node, the jobmanager pod always crashes and restarts.
This results in the entire Flink cluster r
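For reference, Kubernetes-native HA in Flink 1.12 is configured roughly like this (cluster id and storage path are illustrative):

```yaml
# flink-conf.yaml (illustrative values)
kubernetes.cluster-id: my-flink-cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink/ha
```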
Hello list,
we are manually building TypedValue instances to be sent to a Python
remote function (with a reqreply function builder). We create the typed
value as follows (in Kotlin):
```kotlin
override fun map(value: Tuple2): TypedValue {
  return TypedValue.newBuilder()
    .setValue(get
```
Use the following, respectively:
flink_taskmanager_job_task_operator_KafkaConsumer_bytes_consumed_rate -
consumer rate
flink_taskmanager_job_task_operator_KafkaConsumer_records_lag_max -
consumer lag
flink_taskmanager_job_task_operator_KafkaConsumer_commit_latency_max -
commit latency
unsure if reactive mo
Got it! Thanks for helping.
On Thu, May 20, 2021 at 7:15 PM Yangze Guo wrote:
> > So, ES BulkProcessor retried after bulk request was partially rejected.
> And eventually that request was sent successfully? That is why failure
> handler was not called?
>
> If the bulk request fails after the max
Hello,
Our team is trying out reactive mode and replacing FlinkKafkaConsumer with
the new KafkaSource.
But we can’t find a list of the KafkaSource metrics. Does anyone have any idea?
In our case, we want to know the Kafka consume delay and consume rate.
Thanks,
Oscar
Hello,
I saw that Flink 1.12.4 was just released. However, I am struggling to find
the Docker image.
I checked both:
- https://hub.docker.com/_/flink
- https://hub.docker.com/r/apache/flink
but 1.12.4 is not available on either.
Are there plans to publish it as a Docker image?
Regards,
Nikola
Hi community,
Flink 1.13 introduces toDataStream. However, I wonder: when should we prefer
toDataStream over toAppendStream or toRetractStream?
Thank you!
Best,
Yik San
Hi,
Would you like to share your code? That would be very helpful for verifying
the problem.
I think you could use `JoinedStream.with().uid(xxx)` to set the
name/UID.
Best,
Guowei
On Mon, May 24, 2021 at 2:36 PM Marco Villalobos
wrote:
> Hi,
>
> Stream one has one element.
> Stream two has 2 el
Hi Guowei,
Thanks for pointing that out! It helped me resolve the issue.
Just a small correction: the `static` modifier is not available in Scala. Its
Scala alternative is `object`.
```scala
object BaseJob {
final val LOG = LoggerFactory.getLogger(getClass)
}
```
Then referencing the LOG object
Hi, Yik San
You need to change the following line:
protected final val LOG = LoggerFactory.getLogger(getClass)
to:
protected *static* final val LOG = LoggerFactory.getLogger(getClass)
Best,
Guowei
On Mon, May 24, 2021 at 2:41 PM Yik San Chan
wrote:
> Hi community,
>
> I have a job that cons
Hi
I think you are right that the metrics are reset after the job restarts. That
is because the metrics are only stored in memory.
I think you could store the metrics in Flink's state[1], which can be
restored after the job restarts.
[1]
https://ci.apache.org/projects/flink/flink-docs-rele
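The suggestion — back the metric with state so it survives a restart — can be sketched without the Flink APIs (all names here are illustrative; in Flink the store would be e.g. a `ValueState`):

```scala
import scala.collection.mutable

object StateBackedMetricSketch {
  // A counter that mirrors its value into a "state" store on every update,
  // so a new instance (simulating a restarted job) can restore it.
  final class StateBackedCounter(state: mutable.Map[String, Long], key: String) {
    private var value: Long = state.getOrElse(key, 0L) // restore on (re)start

    def inc(): Unit = { value += 1; state.update(key, value) }
    def get: Long = value
  }

  def demo(): Long = {
    val store = mutable.Map.empty[String, Long] // stands in for checkpointed state
    val first = new StateBackedCounter(store, "records")
    first.inc(); first.inc()
    val restarted = new StateBackedCounter(store, "records") // simulated restart
    restarted.get // restored value: 2
  }
}
```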