Although my Flink jobs are working, I am unable to get numbers to show up in
the dashboard for Bytes/Records Received/Sent after upgrading from 1.12.7 to
1.17.1. I have looked into what the problem might be but do not see anything
obvious.
obvious. I am using Flink with the Pulsar client. I can see metrics und
Hi,
I am attempting to upgrade from 1.12.7 to 1.15.0. One of the issues I am
encountering is the following exception when submitting a job from the command
line:
switched from INITIALIZING to FAILED with failure cause:
org.apache.pulsar.client.admin.PulsarAdminException$NotFoundExcep
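A NotFoundException from the Pulsar admin client at INITIALIZING time usually means something the connector looked up through the admin endpoint, often the topic, does not exist, or that the admin URL does not point at a reachable Pulsar admin service. For comparison, a minimal 1.15-style PulsarSource sketch is below; the service/admin URLs, topic and subscription name are placeholders, and it assumes the flink-connector-pulsar artifact is on the classpath.

import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.pulsar.source.PulsarSource
import org.apache.flink.connector.pulsar.source.enumerator.cursor.StartCursor
import org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.pulsar.client.api.SubscriptionType

object PulsarSourceSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // The admin URL is what the connector uses for topic metadata lookups, so a wrong
    // port or a topic that does not exist yet can surface as a NotFoundException.
    val source = PulsarSource.builder[String]()
      .setServiceUrl("pulsar://pulsar-broker:6650")        // placeholder
      .setAdminUrl("http://pulsar-broker:8080")            // placeholder
      .setTopics("persistent://public/default/my-topic")   // must exist (or auto-creation enabled)
      .setStartCursor(StartCursor.earliest())
      .setSubscriptionName("my-subscription")
      .setSubscriptionType(SubscriptionType.Exclusive)
      .setDeserializationSchema(PulsarDeserializationSchema.flinkSchema(new SimpleStringSchema()))
      .build()

    env.fromSource(source, WatermarkStrategy.noWatermarks[String](), "pulsar-source").print()
    env.execute("pulsar-source-sketch")
  }
}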
I am attempting to migrate from 1.7.1 to 1.9.1 and have hit a problem where
previously working jobs no longer launch after being submitted. In the UI, the
submitted jobs show up as deploying for a period, then go into a running state
before returning to the deploying state, and this repeats regula
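That deploy/run/deploy cycle is typically what a restart loop looks like from the UI: the job fails shortly after reaching the running state and the restart strategy redeploys it. One way to surface the underlying exception is to temporarily disable restarts so the first failure stays visible in the UI and the JobManager log; a minimal sketch follows, assuming the job builds its own StreamExecutionEnvironment.

import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

object FailFastForDebugging {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Fail fast instead of looping, so the first failure cause is what the UI reports.
    env.setRestartStrategy(RestartStrategies.noRestart())

    // ... build the pipeline and call env.execute() as before ...
  }
}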
serializer, kafkaProperties)
...
class MessageDeserializer extends KafkaDeserializationSchema[GenericRecord] {
On Thu, Jan 23, 2020 at 1:20 AM Jason Kania wrote:
Hello,
I was looking for documentation in 1.9.1 on how to create implementations of
the KafkaSerialization
Hello,
I was looking for documentation in 1.9.1 on how to create implementations of
the KafkaSerializationSchema and KafkaDeserializationSchema interfaces. I have
created implementations in the past for the SerializationSchema and
DeserializationSchema interfaces. Unfortunately, I can find no exa
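For reference, the interface is small enough to sketch here. In the example below, String stands in for the GenericRecord payload of the original code, and the class and topic names are only illustrative; the method signatures should match the 1.9 flink-connector-kafka KafkaDeserializationSchema.

import java.nio.charset.StandardCharsets

import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
import org.apache.kafka.clients.consumer.ConsumerRecord

// String stands in for the GenericRecord of the original snippet.
class MessageDeserializer extends KafkaDeserializationSchema[String] {

  // Return true only when a special end-of-stream marker should stop the consumer.
  override def isEndOfStream(nextElement: String): Boolean = false

  // Unlike DeserializationSchema, the whole ConsumerRecord is available here,
  // so the key, headers, topic, partition and offset can all be inspected.
  override def deserialize(record: ConsumerRecord[Array[Byte], Array[Byte]]): String =
    new String(record.value(), StandardCharsets.UTF_8)

  override def getProducedType: TypeInformation[String] = Types.STRING
}

It can then be handed to the consumer constructor much like in the snippet above, e.g. new FlinkKafkaConsumer[String]("my-topic", new MessageDeserializer, kafkaProperties).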
th "Received request", so we
can figure out whether the request at least arrives.
On 05.09.2018 00:53, Jason Kania wrote:
I have upgraded from Flink 1.4.0 to Flink 1.5.3 with a three node cluster
configured with HA. Now I am encountering an issue where the flink command line
I have upgraded from Flink 1.4.0 to Flink 1.5.3 with a three node cluster
configured with HA. Now I am encountering an issue where the flink command line
operations time out. The exception generated is not very helpful because it only
indicates a timeout and not the reason or what it was trying to do:
,
Jason
On Tuesday, May 15, 2018, 9:59:58 a.m. EDT, Timo Walther
wrote:
Can you change the log level to DEBUG and share the logs with us? Maybe Till
(in CC) has some idea?
Regards,
Timo
On 15.05.18 at 15:18, Jason Kania wrote:
Hi Timo,
Thanks for the response
wrote:
Hi Jason,
this sounds more like a network connection/firewall issue to me. Can you tell
us a bit more about your environment? Are you running your Flink cluster on a
cloud provider?
Regards,
Timo
On 15.05.18 at 05:15, Jason Kania wrote:
Hi,
I am using the 1.4.2 releas
Hi,
I am using the 1.4.2 release on Ubuntu and attempting to make use of an HA Job
Manager, but unfortunately using HA functionality prevents job submission with
the following error:
java.lang.RuntimeException: Failed to retrieve JobManager address
at
org.apache.flink.client.program.Cl
Hi,
I have a question that I have not been able to resolve via the documentation,
after looking in the "Parallel Execution", "Streaming" and "Connectors"
sections. If I retrieve a Kafka stream and then call the process function
against it in parallel, as follows, does it consume in some round-robin fashion b
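For what it is worth, the parallel source instances do not round-robin over individual records: each subtask of the Kafka source is assigned a subset of the topic's partitions and reads only those, and a chained downstream operator with the same parallelism processes whatever its own subtask consumed. A sketch is below; the broker address, group id, topic name, the 0.11 consumer and the simple map standing in for the process function are all assumptions.

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

object ParallelKafkaSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // placeholder
    props.setProperty("group.id", "example-group")           // placeholder

    // Each of the 4 source subtasks is assigned a share of the topic's partitions.
    val stream = env
      .addSource(new FlinkKafkaConsumer011[String]("my-topic", new SimpleStringSchema(), props))
      .setParallelism(4)

    // With matching parallelism the next operator is chained, so records stay on the
    // subtask that consumed them unless a keyBy/rebalance is inserted in between.
    stream
      .map(_.toUpperCase)
      .setParallelism(4)
      .print()

    env.execute("parallel-kafka-sketch")
  }
}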
Thanks, that resolved it. I also had to pull in the Kafka 0.10 and 0.9 versions
of the connector jars. Once the base jar is in the Maven repository, this won't
be as problematic.
On Friday, January 12, 2018, 9:46:22 AM EST, Tzu-Li (Gordon) Tai
wrote:
Hi Jason,
The KeyedDeserializationSchema
Hello,
I am just getting started with Flink and am attempting to use the Kafka
connector. In particular, I am trying to use the jar
flink-connector-kafka-0.11_2.11-1.4.0.jar downloaded from:
https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka-0.11_2.11/1.4.0/
with the latest
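Rather than downloading the connector jar (and the Kafka 0.10/0.9 base jars it builds on) by hand, it is usually easier to let the build tool resolve them. A minimal build.sbt sketch for the 1.4.0 / Scala 2.11 artifacts named above follows; the flink-streaming-scala dependency is an assumption about how the job itself is built.

// build.sbt (sketch)
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // Provided by the Flink runtime on the cluster, so not bundled into the job jar.
  "org.apache.flink" %% "flink-streaming-scala"      % "1.4.0" % "provided",
  // Resolves to flink-connector-kafka-0.11_2.11-1.4.0 and pulls the 0.10/0.9/base
  // connector jars in transitively.
  "org.apache.flink" %% "flink-connector-kafka-0.11" % "1.4.0"
)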