In Avro 1.8.2 (which is close to being released) we built a new feature
that allows serializing Avro records into a byte array that still contains
the schema information.
So even if you keep messages for weeks and the schema of your application
changes, you can still read the messages at the consumer.
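If I understand the feature correctly, using it could look roughly like the
sketch below. The BinaryMessageEncoder / BinaryMessageDecoder classes and the
"User" schema are my assumptions for illustration, not something confirmed in
this thread:

import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;
import org.apache.avro.message.BinaryMessageDecoder;
import org.apache.avro.message.BinaryMessageEncoder;

public class AvroSingleObjectSketch {

    // hypothetical record schema, just for illustration
    private static final Schema SCHEMA = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
        + "{\"name\":\"id\",\"type\":\"long\"},"
        + "{\"name\":\"name\",\"type\":\"string\"}]}");

    public static void main(String[] args) throws Exception {
        GenericRecord user = new GenericRecordBuilder(SCHEMA)
            .set("id", 42L)
            .set("name", "daviD")
            .build();

        // encode: the resulting bytes carry the schema information (a fingerprint
        // of the writer schema), so a reader can later tell which schema wrote them
        BinaryMessageEncoder<GenericRecord> encoder =
            new BinaryMessageEncoder<>(GenericData.get(), SCHEMA);
        ByteBuffer bytes = encoder.encode(user);

        // decode: the embedded schema information is resolved against the
        // (possibly newer) reader schema
        BinaryMessageDecoder<GenericRecord> decoder =
            new BinaryMessageDecoder<>(GenericData.get(), SCHEMA);
        GenericRecord roundTripped = decoder.decode(bytes);
        System.out.println(roundTripped);
    }
}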
Hi,
yes, Flink can read and write Avro records to and from Kafka using a custom
serialization / deserialization schema.
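Just to sketch what such a schema pair could look like: the package names and
interface shapes below are the ones I remember from the 1.1.x connectors, and
the plain binary Avro encoding inside is only one possible choice, so treat
this as an illustration rather than a ready-made class.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.flink.streaming.util.serialization.SerializationSchema;

public class AvroKafkaSchema implements
        DeserializationSchema<GenericRecord>, SerializationSchema<GenericRecord> {

    // Avro's Schema is not Serializable in this version, so keep its JSON form
    private final String schemaString;
    private transient Schema schema;

    public AvroKafkaSchema(Schema schema) {
        this.schemaString = schema.toString();
    }

    private Schema schema() {
        if (schema == null) {
            schema = new Schema.Parser().parse(schemaString);
        }
        return schema;
    }

    @Override
    public byte[] serialize(GenericRecord record) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema()).write(record, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException("Could not serialize Avro record", e);
        }
    }

    @Override
    public GenericRecord deserialize(byte[] message) throws IOException {
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(message, null);
        return new GenericDatumReader<GenericRecord>(schema()).read(null, decoder);
    }

    @Override
    public boolean isEndOfStream(GenericRecord nextElement) {
        return false;   // keep consuming the Kafka topic indefinitely
    }

    @Override
    public TypeInformation<GenericRecord> getProducedType() {
        return TypeExtractor.getForClass(GenericRecord.class);
    }
}

An instance of this would then be handed to the Kafka consumer / producer
constructors like any other DeserializationSchema / SerializationSchema.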
On Fri, Nov 11, 2016 at 6:05 AM, daviD wrote:
> Hi All,
>
> Does anyone know if Flink can read and write Avro schema to Kafka?
>
> Thanks
>
> daviD
>
Hi,
the problem is that Flink's YARN code is not available in the Hadoop 1.2.1
build.
How are you executing the Flink job when this error message appears?
On Fri, Nov 11, 2016 at 12:23 PM, PACE, JAMES wrote:
> I am running Apache Flink 1.1.3 – Hadoop version 1.2.1 with the NiFi
> connector.
bash> git clone
POM> 1.8
bash> java -version ==> java version "1.8.0_111"
Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
(default-compile) on project flink-core: Compilation failure
[ERROR] .../flink1.2_10_Nov/flink-core/src/main/java/org/apache/flink/api/ja
Hi,
What happened is that I compiled Flink with the wrong Hadoop version...
Sorry :)
Gyula
Gyula Fóra wrote (on Sat, 12 Nov 2016 at 13:11):
> Hi,
>
> I am running into some strange issues on YARN with Flink 1.1.3 & 1.1.4. For
> some reason I started getting this error (see the text below).
>
Hi Aljoscha,
Yes, the same user ID can originate from different sources. You are right, it
would not be possible to guarantee ordering if you consider user IDs across
the sources. However, when we key by source ID, all the user IDs are isolated
within each source ID, so I believe it should be fine.
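A tiny sketch of what is meant, with a made-up Event type (the sourceId and
userId fields are just for illustration): keying the stream by the source ID
puts everything a single source emits, including all of its user IDs, into one
keyed partition, so the order in which that source emitted its records is kept.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyBySourceSketch {

    // hypothetical event type (public fields and no-arg constructor -> Flink POJO)
    public static class Event {
        public String sourceId;
        public String userId;

        public Event() {}

        public Event(String sourceId, String userId) {
            this.sourceId = sourceId;
            this.userId = userId;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Event> events = env.fromElements(
            new Event("source-1", "user-A"),
            new Event("source-2", "user-A"),   // same user ID, different source
            new Event("source-1", "user-B"));

        // all records of one source (and therefore each of its user IDs)
        // end up in the same keyed partition
        KeyedStream<Event, String> bySource = events.keyBy(new KeySelector<Event, String>() {
            @Override
            public String getKey(Event e) {
                return e.sourceId;
            }
        });

        bySource.print();
        env.execute("keyBy source sketch");
    }
}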
Hi,
I am running into some strange issues on YARN with Flink 1.1.3 & 1.1.4. For
some reason I started getting this error (see the text below).
The job manager starts and the application is in the Accepted state, but it
does not seem to be able to communicate with the scheduler (0.0.0.0:8030 looks
strange).
I didn'
Hi everybody,
I found a new problem. The algorithm I want to implement needs a global
ReducingState. What I mean by that is: I want to calculate a local aggregation
for each task, then combine all these local aggregates into one global
aggregate and push this global aggregate to all nodes and c
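There is no built-in state that is shared across all tasks as far as I know,
but a rough sketch of one way to approximate the pattern with the plain
DataStream API is below. The tumbling processing-time windows, the Long sum and
the grouping by subtask index are all assumptions of the sketch, not part of
your description: the idea is to pre-aggregate in parallel, combine the partial
results in a parallelism-1 windowAll step, and broadcast the combined value to
all downstream tasks.

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class GlobalAggregateSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // assumption: the values to aggregate are Longs that we sum up;
        // in a real job this would be an unbounded source such as Kafka
        DataStream<Long> values = env.fromElements(1L, 2L, 3L, 4L, 5L);

        // 1) tag every element with the index of the subtask that produced it,
        //    so the first reduce runs in parallel with one group per task
        DataStream<Tuple2<Integer, Long>> tagged = values
            .map(new RichMapFunction<Long, Tuple2<Integer, Long>>() {
                @Override
                public Tuple2<Integer, Long> map(Long v) {
                    return new Tuple2<>(getRuntimeContext().getIndexOfThisSubtask(), v);
                }
            });

        // 2) "local" aggregation: one partial sum per producing subtask and window
        DataStream<Tuple2<Integer, Long>> partialSums = tagged
            .keyBy(0)
            .timeWindow(Time.seconds(10))
            .reduce(new ReduceFunction<Tuple2<Integer, Long>>() {
                @Override
                public Tuple2<Integer, Long> reduce(Tuple2<Integer, Long> a,
                                                    Tuple2<Integer, Long> b) {
                    return new Tuple2<>(a.f0, a.f1 + b.f1);
                }
            });

        // 3) global aggregation: combine the partial sums in a single
        //    (parallelism-1) windowAll step, then broadcast the result so that
        //    every parallel instance of the downstream operator receives it
        DataStream<Tuple2<Integer, Long>> globalSum = partialSums
            .timeWindowAll(Time.seconds(10))
            .reduce(new ReduceFunction<Tuple2<Integer, Long>>() {
                @Override
                public Tuple2<Integer, Long> reduce(Tuple2<Integer, Long> a,
                                                    Tuple2<Integer, Long> b) {
                    return new Tuple2<>(0, a.f1 + b.f1);
                }
            });

        globalSum.broadcast().print();

        env.execute("global aggregate sketch");
    }
}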
Hi,
I get a ton of these messages in my Job Manager's logfile. This makes Flink
unstable, as I cannot list or cancel/stop the jobs.
I run Flink on YARN under a default Hortonworks HDP 2.5 installation. HDP sets
the hard and soft limit of open files to 32768 for the user "yarn" that runs
the Flink JVMs,