Similar to what Biao said, application mode is fine if you only have a single
app, but when running multiple apps, session mode gives you better control. In
my experience, the CLIFrontend is not as robust as the REST API, and you
will end up having to rebuild a very similar REST API anyway. For the metaspace
issue…
Could you not use the JM web address to reach the REST API? You can
start/stop/savepoint/restore and upload new jars via the REST API. While I
did not run on ECS (I ran on EMR), I was able to use the REST API to do
deployments.
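The deployment flow above maps onto a handful of REST endpoints on the JobManager. A minimal sketch of the URLs involved, assuming a hypothetical JobManager address and placeholder jar/job IDs (the endpoint paths follow the Flink REST API; everything else here is illustrative):

```scala
// Sketch of the Flink JobManager REST endpoints used for deployments.
// The base address, jar ID, and job ID used below are placeholders.
object FlinkRestEndpoints {
  // POST multipart/form-data with a "jarfile" field
  def uploadJar(base: String): String = s"$base/jars/upload"
  // POST with a JSON body (entryClass, programArgs, ...)
  def runJar(base: String, jarId: String): String = s"$base/jars/$jarId/run"
  // POST; returns a trigger ID you can poll for completion
  def triggerSavepoint(base: String, jobId: String): String = s"$base/jobs/$jobId/savepoints"
  // PATCH to cancel a running job
  def cancelJob(base: String, jobId: String): String = s"$base/jobs/$jobId?mode=cancel"
}

val jm = "http://jobmanager:8081" // hypothetical JobManager address
println(FlinkRestEndpoints.runJar(jm, "my-app.jar"))
```

Restoring from a savepoint is, if I recall correctly, the same run call with a savepointPath field in the request body.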
On Sun, Jan 3, 2021 at 19:09 Navneeth Krishnan wrote:
> Hi All,
>
> Cu
I have defined a streaming file sink for Parquet to store my Scala case
class:

StreamingFileSink
  .forBulkFormat(
    new Path(appArgs.datalakeBucket),
    ParquetAvroWriters.forReflectRecord(classOf[Log]))
  .withBucketAssigner(new TransactionLogHiveBucketAssigner(
> There's a bug about the classloader used in the `abortTransaction()` method
> in `FlinkKafkaProducer` in Flink version 1.10.0. I think it has been fixed in
> 1.10.1 and 1.11 according to FLINK-16262. Are you using Flink version
> 1.10.0?
>
>
> Vikash Dat wrote on Thu, Jul 30, 2020 at 9:26 PM:
Has anyone had success using exactly_once in a Kafka producer in Flink?
As of right now I don't think the code shown in the docs
(https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#kafka-producer)
actually works.
I'm using Flink 1.10 and Kafka (AWS MSK) 2.2, and I'm trying to build a simple
app that consumes from one Kafka topic and produces events into another topic.
I would like to use the exactly_once semantic; however, I am
seeing the following error:
org.apache.kafka.common.KafkaException:
org.apach
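A common stumbling block with exactly_once is the transaction timeout: FlinkKafkaProducer defaults transaction.timeout.ms to one hour, while Kafka brokers cap producer transaction timeouts at fifteen minutes by default (transaction.max.timeout.ms), so the producer is rejected unless one side is adjusted. A minimal sketch of the producer properties, with a placeholder broker address (the sink construction itself varies by Flink version and is only hinted at in the comment):

```scala
import java.util.Properties

// Producer config sketch for exactly-once; the broker address is a placeholder.
val producerProps = new Properties()
producerProps.setProperty("bootstrap.servers", "broker:9092")
// Keep the producer-side transaction timeout at or below the broker's
// transaction.max.timeout.ms (15 minutes by default).
producerProps.setProperty("transaction.timeout.ms", "900000")

// The sink would then be built with the EXACTLY_ONCE semantic, roughly:
// new FlinkKafkaProducer(topic, serializationSchema, producerProps,
//   FlinkKafkaProducer.Semantic.EXACTLY_ONCE)
```

Note that exactly_once also requires checkpointing to be enabled on the job, since transactions are committed on checkpoint completion.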
YARN assigns a random port when Flink is deployed. To get the port, run
`yarn application -list` and look at the tracking URL assigned to
your Flink cluster; the port in that URL is the one you need to use
for the REST API.
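To make that concrete, the host and port can be pulled straight out of the tracking URL; a small sketch (the URL below is a made-up EMR-style example):

```scala
import java.net.URI

// Extract "host:port" from the tracking URL shown by `yarn application -list`.
def restAddress(trackingUrl: String): String = {
  val uri = new URI(trackingUrl)
  s"${uri.getHost}:${uri.getPort}"
}

// Hypothetical tracking URL of the kind YARN prints for a Flink session:
println(restAddress("http://ip-10-0-0-5.ec2.internal:34567/"))
// prints ip-10-0-0-5.ec2.internal:34567
```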
On Tue, Jun 16, 2020 at 08:49 aj wrote:
> Ok, thanks for