Re: Not able to force Avro serialization

2020-08-22 Thread Slotterback, Chris
And here is the deserializer block where the Schema is used to generate a GenericRecord: @Override public Map deserialize(byte[] bytes) throws IOException { DatumReader reader = new GenericDatumReader(schema); GenericRecord record = reader.read(null, DecoderFactory.get().binaryDecoder(bytes, nul
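The snippet above is cut off mid-call; a minimal self-contained sketch of the same pattern (class and variable names here are my own, not from the thread) looks like this:

```java
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;

public class AvroDeserializer {
    private final Schema schema;

    public AvroDeserializer(Schema schema) {
        this.schema = schema;
    }

    // Decode Avro binary bytes into a GenericRecord using the configured schema
    public GenericRecord deserialize(byte[] bytes) throws IOException {
        DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
        return reader.read(null, DecoderFactory.get().binaryDecoder(bytes, null));
    }
}
```

In a hot path, the `GenericDatumReader` and the `BinaryDecoder` can both be created once and reused (the second argument of `binaryDecoder` accepts a decoder to reuse) instead of being allocated per record.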

Not able to force Avro serialization

2020-08-22 Thread Slotterback, Chris
Hey guys, I have been trying to get Avro deserialization to work, but I’ve run into the issue where Flink (1.10) is trying to serialize the Avro classes with Kryo: com.esotericsoftware.kryo.KryoException: java.lang.UnsupportedOperationException Serialization trace: reserved (org.apache.avro.Sche
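For reference, Flink's `ExecutionConfig` has a switch to force the Avro serializer for Avro types instead of letting them fall back to Kryo (which is what produces the `reserved (org.apache.avro.Schema...)` trace above). A minimal sketch, assuming a streaming job with the `flink-avro` dependency on the classpath:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ForceAvroExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Use Flink's AvroSerializer for Avro-generated types instead of Kryo
        env.getConfig().enableForceAvro();

        // Optional: fail fast at job submission time if any type would
        // still be serialized generically via Kryo
        env.getConfig().disableGenericTypes();
    }
}
```

Note that `enableForceAvro()` only helps for types Flink recognizes as Avro types; if the exception comes from an Avro `Schema` object being stored inside a user class, the usual fix is to keep the schema out of the serialized state (e.g. make it `transient` and rebuild it in `open()`).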

Re: Flink Job cluster in HA mode - recovery vs upgrade

2020-08-22 Thread Alexey Trenikhun
Since it is necessary to use cancel with savepoint/resume from savepoint, it is not possible to use a Deployment (otherwise the JobManager pod would restart from the same savepoint on crash), so we need to use a Job; but in that case, if the Job pod crashes, who will start a new instance of the Job pod? Soun

Re: SDK vs Connectors

2020-08-22 Thread Yun Gao
Hi Prasanna, 1) Semantically both a) and b) would be OK. If the custom sink could be chained with the map operator (I assume the map operator is the "Processing" in the graph), there should also not be much difference physically; if they cannot be chained, then writing a custom sink would caus
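To make the chaining point concrete: a custom sink is just a `SinkFunction` added after the map, and when parallelism and chaining rules allow, Flink chains it with the upstream operator so records are handed over by method call rather than over the network. A minimal sketch (the `PrintlnSink` name and the pipeline are made up for illustration):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class CustomSinkExample {
    // Hypothetical custom sink; a real one would write to the external system
    public static class PrintlnSink implements SinkFunction<String> {
        @Override
        public void invoke(String value, Context context) {
            System.out.println(value);
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c")
           .map(String::toUpperCase)    // the "Processing" step from the graph
           .addSink(new PrintlnSink()); // chains with the map when parallelism matches
        env.execute("custom-sink-sketch");
    }
}
```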

Re: Flink checkpointing with Azure block storage

2020-08-22 Thread Boris Lublinsky
Thanks Yun, I made it work, but now I want to set the appropriate config programmatically. I can set state.checkpointing.dir by: val fsStateBackend = new FsStateBackend(new URI("wasb://@$.blob.core.windows.net/")) env.setStateBackend(fsStateBackend.asInstanceOf[StateBackend]) But, I can’t update con
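One way to pass extra configuration (such as the Azure account key) programmatically rather than only via flink-conf.yaml is to build the environment from a `Configuration` object. This only takes effect for local/embedded execution; on a real cluster such keys belong in flink-conf.yaml. A Java sketch, with `<account>` and `<container>` as placeholders (the exact credential key follows the hadoop-azure naming convention and should be checked against your setup):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AzureCheckpointConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hadoop-style Azure storage credential; <account> is a placeholder
        conf.setString("fs.azure.account.key.<account>.blob.core.windows.net", "...");

        // Configuration is only picked up this way for local execution
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.createLocalEnvironment(1, conf);

        env.setStateBackend(
            new FsStateBackend("wasb://<container>@<account>.blob.core.windows.net/checkpoints"));
        env.enableCheckpointing(60_000);
        // job definition omitted
    }
}
```

The `asInstanceOf[StateBackend]` cast in the Scala snippet above is only needed because `setStateBackend` has a deprecated overload that makes the call ambiguous in Scala; in Java no cast is required.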

Ververica Flink training resources

2020-08-22 Thread Piper Piper
Hi Flink community, I have two questions regarding the Ververica Flink Training resources. 1. In the official Flink documentation, the hyperlinks to the github sites for the exercises in the "Learn Flink" section are not working. If possible, please provide me with the correct links for the exerc

Re: OpenJDK or Oracle JDK: 8 or 11?

2020-08-22 Thread Pankaj Chand
Thank you, Arvid and Marco! On Sat, Aug 22, 2020 at 2:03 PM Marco Villalobos wrote: > Hi Pankaj, > > I highly recommend that you use an OpenJDK version 11 because each JDK > upgrade has a performance improvement, and also because the Oracle JDK and > OpenJDK are based off the same code-base. The

Re: OpenJDK or Oracle JDK: 8 or 11?

2020-08-22 Thread Marco Villalobos
Hi Pankaj, I highly recommend that you use an OpenJDK version 11 because each JDK upgrade has a performance improvement, and also because the Oracle JDK and OpenJDK are based on the same code base. The main difference between Oracle and OpenJDK is the branding and price. > On Aug 22, 2020, a

Re: OpenJDK or Oracle JDK: 8 or 11?

2020-08-22 Thread Arvid Heise
Hi Pankaj, Yes, you can use all 4 mentioned JDKs for these three things. When building Flink from sources with Java 11, you may need to activate the Java 11 profile in Maven (-Pjava11). If you just want to use Flink with Java 11, you can also use Flink built with Java 8 (in fact the official mave

OpenJDK or Oracle JDK: 8 or 11?

2020-08-22 Thread Pankaj Chand
Hello, The documentation says that to run Flink, we need Java 8 or 11. Will JDK 11 work for running Flink, programming Flink applications as well as building Flink from source? Also, can we use OpenJDK for the above three capabilities, or do any of the capabilities require Oracle JDK? Thanks,

Re: Flink Job cluster in HA mode - recovery vs upgrade

2020-08-22 Thread Chesnay Schepler
If, and only if, the cluster-id and JobId are identical then the JobGraph will be recovered from ZooKeeper. On 22/08/2020 06:12, Alexey Trenikhun wrote: Not sure I that I understand your statement about "the HaServices are only being given the JobGraph", seems HighAvailabilityServices#getJobGr