And here is the deserialization block where the Schema is used to generate a
GenericRecord:
@Override
public Map deserialize(byte[] bytes) throws IOException {
    // Read the Avro-encoded bytes into a GenericRecord using the schema field.
    DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
    GenericRecord record = reader.read(null,
            DecoderFactory.get().binaryDecoder(bytes, null));
    …
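For reference, a complete version of this method might look like the minimal
sketch below, assuming the goal is simply to flatten the GenericRecord into a
Map keyed by field name (the HashMap result type and the Utf8-to-String
normalization are assumptions, not from the original message):

    // Needs: org.apache.avro.Schema, org.apache.avro.generic.*,
    //        org.apache.avro.io.DecoderFactory, org.apache.avro.util.Utf8,
    //        java.util.HashMap, java.util.Map
    @Override
    public Map<String, Object> deserialize(byte[] bytes) throws IOException {
        DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
        GenericRecord record = reader.read(null,
                DecoderFactory.get().binaryDecoder(bytes, null));
        Map<String, Object> result = new HashMap<>();
        for (Schema.Field field : schema.getFields()) {
            Object value = record.get(field.name());
            // Avro returns strings as org.apache.avro.util.Utf8; normalize them.
            result.put(field.name(), value instanceof Utf8 ? value.toString() : value);
        }
        return result;
    }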
Hey guys,
I have been trying to get Avro deserialization to work, but I’ve run into the
issue where Flink (1.10) is trying to serialize the Avro classes with Kryo:
com.esotericsoftware.kryo.KryoException: java.lang.UnsupportedOperationException
Serialization trace:
reserved (org.apache.avro.Sche…
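A common way around this (a sketch, not from this thread): org.apache.avro.Schema
itself is not serializable, so keep only the schema JSON string in the user
function and re-parse the Schema lazily on the task side. The class name
SchemaHolder below is made up:

    import org.apache.avro.Schema;

    // Holds only the schema JSON (a plain String, serializable) and rebuilds
    // the non-serializable Schema lazily per task, so Flink never has to
    // serialize a Schema instance with Kryo.
    public class SchemaHolder implements java.io.Serializable {
        private final String schemaJson;
        private transient Schema schema;   // never shipped over the wire

        public SchemaHolder(String schemaJson) {
            this.schemaJson = schemaJson;
        }

        public Schema schema() {
            if (schema == null) {
                schema = new Schema.Parser().parse(schemaJson);
            }
            return schema;
        }
    }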
Since it is necessary to use cancel-with-savepoint/resume-from-savepoint, it is
not possible to use a Deployment (otherwise the JobManager pod would restart
after a crash from the same savepoint), so we need to use a Job. But in that
case, if the Job pod crashes, who will start a new instance of the Job pod? Soun…
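For the crash case, Kubernetes itself handles the restart: with
restartPolicy: OnFailure the kubelet restarts the crashed container, and if the
pod is lost entirely the Job controller creates a replacement (up to
backoffLimit). A minimal sketch of such a manifest; the names and image tag are
placeholders:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: flink-job-cluster        # placeholder name
    spec:
      backoffLimit: 6                # give up after this many failed attempts
      template:
        spec:
          restartPolicy: OnFailure   # kubelet restarts the crashed container
          containers:
            - name: jobmanager
              image: flink:1.10      # placeholder tag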
Hi Prasanna,
1) Semantically both a) and b) would be OK. If the custom sink can be
chained with the map operator (I assume the map operator is the "Processing" in
the graph), there should also be not much difference physically; if they cannot
chain, then writing a custom sink would cause…
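As a sketch of the chained case (Event, ProcessingMapper, and MyCustomSink are
made-up names, and input is assumed to be a DataStream): with matching
parallelism and no keyBy/rebalance in between, Flink chains the sink into the
same task as the map:

    DataStream<Event> processed = input.map(new ProcessingMapper()); // the "Processing" step
    // Runs in the same task as the map when parallelism matches and no
    // redistribution (keyBy/rebalance/shuffle) sits between them.
    processed.addSink(new MyCustomSink());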
Thanks Yun,
I made it work, but now I want to set the appropriate config programmatically.
I can set state.checkpoints.dir with:
val fsStateBackend = new FsStateBackend(new URI("wasb://@$.blob.core.windows.net/"))
env.setStateBackend(fsStateBackend.asInstanceOf[StateBackend])
But I can’t update con…
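Beyond the state backend, most checkpoint settings can be set programmatically
through CheckpointConfig. A sketch (Java for brevity, the Scala calls are
identical; the interval and timeout values are arbitrary):

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.CheckpointConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000);                  // checkpoint every 60 s
    CheckpointConfig checkpointConf = env.getCheckpointConfig();
    checkpointConf.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
    checkpointConf.setCheckpointTimeout(120_000);
    checkpointConf.setMinPauseBetweenCheckpoints(30_000);
    // Keep the checkpoint around on cancel, so the job can be resumed from it.
    checkpointConf.enableExternalizedCheckpoints(
            CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);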
Hi Flink community,
I have two questions regarding the Ververica Flink Training resources.
1. In the official Flink documentation, the hyperlinks to the GitHub
repositories for the exercises in the "Learn Flink" section are not working.
If possible, please provide me with the correct links for the exercises…
Thank you, Arvid and Marco!
On Sat, Aug 22, 2020 at 2:03 PM Marco Villalobos wrote:
> Hi Pankaj,
>
> I highly recommend that you use an OpenJDK version 11 because each JDK
> upgrade has a performance improvement, and also because the Oracle JDK and
> OpenJDK are based off the same code-base. The…
Hi Pankaj,
I highly recommend that you use an OpenJDK version 11 because each JDK upgrade
has a performance improvement, and also because the Oracle JDK and OpenJDK are
based off the same code-base. The main difference between Oracle and OpenJDK is
the branding and price.
> On Aug 22, 2020, a…
Hi Pankaj,
Yes, you can use all 4 mentioned JDKs for these three things.
When building Flink from source with Java 11, you may need to activate the
Java 11 profile in Maven (-Pjava11). If you just want to use Flink with
Java 11, you can also use Flink built with Java 8 (in fact the official
mave…
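For example, activating that profile in a source build (standard Maven flags;
-DskipTests just speeds the build up):

    mvn clean install -DskipTests -Pjava11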
Hello,
The documentation says that to run Flink, we need Java 8 or 11.
Will JDK 11 work for running Flink, for programming Flink applications, and
for building Flink from source?
Also, can we use OpenJDK for all three of these, or do any of them
require the Oracle JDK?
Thanks,
If, and only if, the cluster-id and JobId are identical then the
JobGraph will be recovered from ZooKeeper.
On 22/08/2020 06:12, Alexey Trenikhun wrote:
Not sure that I understand your statement about "the HaServices are
only being given the JobGraph"; seems
HighAvailabilityServices#getJobGr…
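For context, the cluster-id in question is the one configured in
flink-conf.yaml; a sketch of the relevant keys, with placeholder values:

    high-availability: zookeeper
    high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181
    high-availability.cluster-id: /my-flink-cluster   # must match for JobGraph recovery
    high-availability.storageDir: hdfs:///flink/ha/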