You can use `env.java.opts.taskmanager` to specify Java options for the task
managers specifically. Be aware that you may want to set `suspend=n` or be sure to
attach your debugger promptly; otherwise the task manager may time out
attempting to connect to the job manager (since it's waiting for you to attach
the debugger before starting up).
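For example, something along these lines in flink-conf.yaml (the port here is an arbitrary choice):

    env.java.opts.taskmanager: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"

With suspend=y the task manager JVM blocks at startup until a debugger attaches, which is exactly when the registration timeout can bite.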
Hey Rainie,
Kafka internally attempts to retry topic metadata fetches if possible. If you
think the root cause was just network congestion or the like, you might
want to look into increasing `request.timeout.ms`. Because of the internal
retry attempts, however, this exception usually means the brokers were
unreachable for the entire timeout window.
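If you do decide to raise it, it's just a consumer property – a rough sketch (broker address and topic are placeholders):

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "broker:9092");
    // consumer default is 30000 ms; raise it if transient network issues are the suspect
    props.setProperty("request.timeout.ms", "60000");
    FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);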
Calcite does not follow ISO-8601. Instead, until very recently, Calcite weeks
started on Thursdays [1].
(As an aside, Calcite somewhat abuses the WEEK time unit: converting a date to
a week returns an integer representing the week of the year the date falls in,
while FLOORing or CEILing a timestamp to WEEK returns a timestamp aligned to a
week boundary.)
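To make the difference concrete, a quick sketch (results are illustrative, not verified against a specific Calcite version):

    -- returns an integer week-of-year, e.g. 3
    SELECT EXTRACT(WEEK FROM DATE '2020-01-15');
    -- returns a timestamp truncated to the start of the containing week
    SELECT FLOOR(TIMESTAMP '2020-01-15 10:00:00' TO WEEK);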
Hey Andreas,
Have a read through
https://ci.apache.org/projects/flink/flink-docs-stable/dev/datastream_execution_mode.html#task-scheduling-and-network-shuffle
and in particular the BATCH Execution Mode section. Your intuition is mostly
correct – because your operators can't be chained due to the network shuffle
between them, each stage runs to completion before the downstream stage is
scheduled.
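For reference, opting into BATCH execution looks like this (Flink 1.12+):

    import org.apache.flink.api.common.RuntimeExecutionMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setRuntimeMode(RuntimeExecutionMode.BATCH);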
One thing to consider could be using a CoProcessFunction instead of a
BroadcastProcessFunction, and calling .broadcast on the input stream you want
every task manager to receive. Then you could follow the pattern you laid out
in your sample code (e.g. initialize state in the initializeState function).
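Roughly something like this (the type names are invented for the sketch):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
    import org.apache.flink.util.Collector;

    DataStream<Event> events = ...;    // your main input
    DataStream<Control> control = ...; // the stream every task should see

    events
        .connect(control.broadcast())
        .process(new CoProcessFunction<Event, Control, Output>() {
            @Override
            public void processElement1(Event event, Context ctx, Collector<Output> out) {
                // handle data elements using whatever control info you've stored
            }
            @Override
            public void processElement2(Control ctrl, Context ctx, Collector<Output> out) {
                // stash the broadcasted control message
            }
        });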
Martin,
You can use `.partitionCustom` and provide a partitioner if you want to
explicitly control how elements are distributed to downstream tasks.
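A minimal sketch (the modulo scheme and key selector are just examples):

    import org.apache.flink.api.common.functions.Partitioner;
    import org.apache.flink.streaming.api.datastream.DataStream;

    DataStream<MyEvent> partitioned = stream.partitionCustom(
        (Partitioner<String>) (key, numPartitions) -> Math.abs(key.hashCode()) % numPartitions,
        event -> event.getId()); // key selector extracting the routing key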
From: Martin Frank Hansen
Reply-To: "m...@berlingskemedia.dk"
Date: Thursday, January 14, 2021 at 1:48 AM
To: user
Subject: Deterministic rescal
Hey Fanbin,
I’m not sure if TimeCharacteristic.IngestionTime works with Flink SQL, but if
you haven’t tried setting the stream time characteristic to ingestion time it’s
worth a shot. Otherwise, one possibility that comes to mind is to use a custom
TimestampAssigner to set the event time to the wall-clock time at which each
record enters the pipeline, effectively recreating ingestion time.
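On the DataStream side (1.11+), that idea would look roughly like this (Event is a stand-in type):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;

    DataStream<Event> withIngestionLikeTime = stream.assignTimestampsAndWatermarks(
        WatermarkStrategy.<Event>forMonotonousTimestamps()
            .withTimestampAssigner((event, previousTimestamp) -> System.currentTimeMillis()));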
I include the check
`(!bucketState.schema.equals(this.schema))`. Make sure that you’re actually
comparing schema fingerprints or the like instead of directly calling
`schema.equals(otherSchema)` – a full structural comparison of Avro schemas on
every record can get expensive.
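For Avro specifically, something like this (a sketch, reusing the field names from the snippet above):

    import org.apache.avro.SchemaNormalization;

    // 64-bit fingerprints of the canonical form are cheap to cache and compare
    long bucketFingerprint = SchemaNormalization.parsingFingerprint64(bucketState.schema);
    long currentFingerprint = SchemaNormalization.parsingFingerprint64(this.schema);
    if (bucketFingerprint != currentFingerprint) {
        // schemas differ – roll the bucket / refresh the writer
    }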
Julian
From: aj
Date: Friday, December 4, 2020 at 7:20 AM
To: Piotr Nowojski
Cc: "Jaf
I can't vouch for it personally, but perhaps the Apache Bahir Netty Source for
Flink could help you? It sounds like you want to use HTTPS, which this doesn't
support directly, but the source might be a helpful starting point to adding
the functionality you need.
On 12/3/20, 1:33 AM, "Chesnay Schepler" wrote:
One thing to check is how much you're serializing to the network. If you're
using Avro Generic records without special handling you can wind up serializing
the schema with every record, greatly increasing the amount of data you're
sending across the wire.
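If that's what's happening, one fix (assuming the schema is available when you build the job graph) is to give Flink the Avro type info explicitly so records aren't serialized schema-and-all via the generic fallback path:

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
    import org.apache.flink.streaming.api.datastream.DataStream;

    Schema schema = ...; // known up front
    DataStream<GenericRecord> records = input
        .map(value -> toGenericRecord(value)) // toGenericRecord is your parsing logic
        .returns(new GenericRecordAvroTypeInfo(schema));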
On 11/9/20, 8:14 AM, "ashwinkonale" wrote:
Are you registering the protobuf serializer with Kryo? (See
https://flink.apache.org/news/2020/04/15/flink-serialization-tuning-vol-1.html#protobuf-via-kryo)
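Per that post, registration looks roughly like this (MyProtoMessage stands in for your generated class, and you'll need the com.twitter:chill-protobuf dependency):

    import com.twitter.chill.protobuf.ProtobufSerializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().registerTypeWithKryoSerializer(MyProtoMessage.class, ProtobufSerializer.class);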
From: Sudan S
Date: Monday, October 26, 2020 at 11:44 AM
To: User-Flink
Subject: Getting UnsupportedException in Kryo for proto maps
Hi,
...create a custom operator that will be doing the same thing.
For 2. and 3. I'm not entirely sure if there are gotchas that I haven't
thought through (state handling?), so if you can make 1. work for you,
it will probably be the safer route.
Best,
Piotrek
Wed., Oct 14, 2020
Date: Thursday, October 15, 2020 at 4:12 AM
To: "Jaffe, Julian"
Cc: Piotr Nowojski , user
Subject: Re: Broadcasting control messages to a sink
Hi Jaffe,
I am also working on a similar type of problem.
I am receiving a set of events in Avro format on different topics. I want to
consume
I'll meditate on the docs further 🙂
Julian
From: Piotr Nowojski
Date: Wednesday, October 14, 2020 at 6:35 AM
To: "Jaffe, Julian"
Cc: "user@flink.apache.org"
Subject: Re: Broadcasting control messages to a sink
Hi Julian,
Have you seen Broadcast State [1]? I have never used it myself, but it sounds
like it could fit your use case.
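The shape is roughly this (type names invented; [1] has the full details):

    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.streaming.api.datastream.BroadcastStream;
    import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
    import org.apache.flink.util.Collector;

    // broadcasting the schema as its JSON string sidesteps serializer headaches
    MapStateDescriptor<String, String> descriptor = new MapStateDescriptor<>(
        "schemas", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

    BroadcastStream<String> schemaBroadcast = schemaStrings.broadcast(descriptor);

    events.connect(schemaBroadcast)
        .process(new BroadcastProcessFunction<Event, String, Output>() {
            @Override
            public void processElement(Event event, ReadOnlyContext ctx, Collector<Output> out) throws Exception {
                // read the latest schema via ctx.getBroadcastState(descriptor)
            }
            @Override
            public void processBroadcastElement(String schemaJson, Context ctx, Collector<Output> out) throws Exception {
                ctx.getBroadcastState(descriptor).put("current", schemaJson);
            }
        });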
Hey all,
I’m building a Flink app that pulls in messages from a Kafka topic and writes
them out to disk using a custom bucketed sink. Each message needs to be parsed
using a schema that is also needed when writing in the sink. This schema is
read from a remote file on a distributed file system