Flink uses Kryo serialization, which doesn't support Joda-Time object
serialization out of the box.
Use java.util.Date, or register a custom serializer for the Joda types with Kryo.
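A minimal sketch of the java.util.Date workaround; the RideEvent class and its fields are illustrative names, not code from this thread. (Alternatively, Flink's ExecutionConfig#registerTypeWithKryoSerializer lets you keep Joda types and register a custom Kryo serializer for them.)

```java
import java.util.Date;

// Hypothetical event POJO: replacing a Joda DateTime field with
// java.util.Date lets Flink's default serialization handle it
// without any custom Kryo serializer.
public class RideEvent {
    public long rideId;
    public Date eventTime;   // instead of org.joda.time.DateTime

    public RideEvent() {}    // POJO rules: public no-arg constructor

    public RideEvent(long rideId, long epochMillis) {
        this.rideId = rideId;
        this.eventTime = new Date(epochMillis);
    }

    public long epochMillis() {
        return eventTime.getTime();
    }
}
```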
Thanks,
Arpit
On Tue, May 31, 2016 at 11:18 PM, ahmad Sa P wrote:
> Hi
> I have a problem running a sample code from the hands-on exercises of
> Apache Flink,
>
Hello,
How much memory are your YARN containers configured to have? This error
may be due to running Flink on a YARN cluster with more memory than your
containers provide. Could you check that, and maybe set the container memory
to a more suitable value?
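For illustration, a sketch of where the two limits interact (values are placeholders, not recommendations):

```
# yarn-site.xml (cluster side) caps container size:
#   yarn.nodemanager.resource.memory-mb      - total memory per NodeManager
#   yarn.scheduler.maximum-allocation-mb     - largest single container
#
# Flink side: request containers that fit under those caps when
# starting the YARN session (placeholder sizes):
./bin/yarn-session.sh -n 4 -jm 1024 -tm 2048
```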
regards
2016-06-01 1:22 GMT+02:00 prateekaror
Hi
I am running Flink 1.0.2 on YARN.
After running the application for some time, YARN kills my container for
running beyond physical memory limits.
How can I debug this memory issue?
Below are the logs:
Container container_1463184272818_0165_01_12 is completed with
diagnostics: Container
Thanks, things are clear so far.
Thanks, altering via pause/update/resume is OK, at least for now. Will try
it out in practice.
Just in case - the question was inspired by Apache NiFi. If you haven't seen
this: https://www.youtube.com/watch?v=sQCgtCoZyFQ - at 29:10.
I would say such a thing is a must-have feature in "production", where
stopp
Hi Jordan,
the community is definitely open to discussing this further (in particular
if users start asking for the feature).
Here is the related JIRA issue:
https://issues.apache.org/jira/browse/FLINK-2313
On Tue, May 31, 2016 at 5:19 PM, jganoff wrote:
> Hi Robert,
>
> Thanks for the suggestion
Hi
I have a problem running a sample code from the hands-on exercises of
Apache Flink.
I used the following code to send the output of a stream to an already
running Apache Kafka instance, and I get the error below. Could anyone tell
me what is going wrong?
Best regards
Ahmad
public class RideCleansing {
p
Hi Robert,
Thanks for the suggestion. Threading out a blocking
RemoteStreamEnvironment.execute() call and polling the monitoring REST API
will work for now. Once the job transitions to running I will kill the
thread and monitor the job through the REST API.
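The "poll until RUNNING" step above can be sketched generically; JobStatePoller and the supplier are illustrative stand-ins for a real call to Flink's monitoring REST API (e.g. an HTTP GET for the job status), not actual Flink client code.

```java
import java.util.function.Supplier;

// Sketch of polling a job-state source until the job is RUNNING.
// jobState stands in for a REST call returning the current state string.
public class JobStatePoller {

    /** Polls jobState until it returns "RUNNING" or attempts run out. */
    public static boolean waitUntilRunning(Supplier<String> jobState,
                                           int maxAttempts,
                                           long sleepMillis) {
        for (int i = 0; i < maxAttempts; i++) {
            if ("RUNNING".equals(jobState.get())) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```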
As for metrics, accumulators, and other
Hi Aljoscha,
thanks for the speedy reply.
I am processing measurements delivered by smart meters. I use windows to
gather measurements and calculate values such as average consumption. The key
is simply the meter ID.
The challenge is that meters may undergo network partitioning, under which
t
Hi,
I'm afraid this is impossible with the current design of Flink. Might I ask
what you want to achieve with this? Maybe we can come up with a solution.
-Aljoscha
On Tue, 31 May 2016 at 13:24 wrote:
> My use case primarily concerns applying transformations per key, with the
> keys remaining fi
My use case primarily concerns applying transformations per key, with the
keys remaining fixed throughout the topology. I am using event time for my
windows.
The problem I am currently facing is that watermarks in windows propagate per
operator instance, meaning the operator event time increase
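The propagation rule behind this can be sketched in isolation: a downstream operator's event-time clock advances to the minimum watermark over all its inputs, so one slow or idle partition holds back windows everywhere. A standalone illustration of that rule, not Flink's actual implementation:

```java
// Sketch: an operator's current event time is the minimum watermark
// across all of its input channels, so a single stalled input holds
// back event time (and window firing) for the whole operator.
public class WatermarkMerge {
    public static long operatorWatermark(long[] inputWatermarks) {
        long min = Long.MAX_VALUE;
        for (long w : inputWatermarks) {
            min = Math.min(min, w);
        }
        return min;
    }
}
```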
I am testing a streaming read through JDBCInputFormat with the code snippet
below (Scala 2.11, Flink 1.0):
val input = JDBCInputFormat.buildJDBCInputFormat.
  setDrivername(driver).
  setDBUrl(url).
  setQuery(sql).
  setUsernam
Hey, currently this is not possible. You can use savepoints
(https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/streaming/savepoints.html)
to stop the job and then resume with the altered job version. There
are plans to allow dynamic rescaling of the execution graph, but I
think they
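The stop-via-savepoint / resume cycle described above, sketched as CLI steps (the job ID and paths are placeholders; check the CLI docs for your Flink version):

```
# 1. Trigger a savepoint for the running job (prints the savepoint path)
bin/flink savepoint <jobID>

# 2. Cancel the job
bin/flink cancel <jobID>

# 3. Resume the (possibly altered) job from the savepoint
bin/flink run -s <savepointPath> your-job.jar
```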
Aljoscha is working to properly expose this in Flink. The design
document is here:
https://docs.google.com/document/d/1hIgxi2Zchww_5fWUHLoYiXwSBXjv-M5eOv-MKQYN3m4/edit#heading=h.pqg5z6g0mjm7
On Mon, May 30, 2016 at 2:31 PM, Philippe CAPARROY
wrote:
>
> Just transform the list in a DataStream. A d
Added a note to the logging section of the docs. Website should be
updated with the nightly build.
On Mon, May 30, 2016 at 7:56 PM, Stephan Ewen wrote:
> I think "log4j.properties" is also used for YARN (it is included in the
> shipped bundle, together with jars).
>
> Otherwise it is correct.
>
>
Hello,
Yes, I ran it from the CLI successfully. :-)
Regards
On Tue, May 31, 2016 at 11:03 AM, Stephan Ewen wrote:
> Hi!
>
> Concerning the "the program aborted pre-maturely" exception - I assume you
> were using the web dashboard to submit the program.
> There is a trick that we use to fetch th
Hi!
Concerning the "the program aborted pre-maturely" exception - I assume you
were using the web dashboard to submit the program.
There is a trick that we use to fetch the plan without executing the
program, but it can be voided by catching certain exceptions.
I would try to simply execute the p
Actually, I have worked in IntelliJ only. As you said, I also suspect the
fat jar (created from IntelliJ) fails to link to the native library when I
generate it, even though the Maven plugin is present in the pom.xml. I also
used another fat-jar creator application to create the fat jar and