Found the solution to the follow-up question:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/config.html#metrics
On Thu, Sep 1, 2016 at 3:46 PM, Jack Huang wrote:
> Hi Greg,
>
> Following your hint, I found the solution here (
> https://issues.apache.org/jira/
their names starting with the host IP address?
Thanks,
Jack
On Thu, Sep 1, 2016 at 3:04 PM, Greg Hogan wrote:
> Have you copied the required jar files into your lib/ directory? Only JMX
> support is provided in the distribution.
>
> On Thu, Sep 1, 2016 at 5:07 PM, Jack Huang wrote:
Hi all,
I followed the instructions for reporting metrics to a Graphite server in
the official documentation (
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html#metric-types
).
Specifically, I have the following config/code in my project
metrics.reporters: graphite
metrics
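The config is cut off here. A hedged reconstruction of the remaining Graphite reporter keys, from memory of the Flink 1.1 metrics docs (host/port are placeholders), with the Graphite reporter jar (flink-metrics-graphite) copied into lib/ as Greg notes below:

    metrics.reporters: graphite
    metrics.reporter.graphite.class: org.apache.flink.metrics.graphite.GraphiteReporter
    metrics.reporter.graphite.host: <graphite-host>
    metrics.reporter.graphite.port: 2003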
; and chain it with a flatMap operator where you can use your custom
> deserializer and handle deserialization errors.
>
> Best,
> Yassine
>
> On Aug 27, 2016 02:37, "Jack Huang" wrote:
>
>> Hi all,
>>
>> I have a custom deserializer which I pass to a
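A minimal sketch of the flatMap approach Yassine describes above, assuming the events topic, Event case class, and kafkaProp from the quoted message; parseEvent is a hypothetical String => Event JSON parser that throws on malformed input:

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema

    // Consume raw strings so the source itself cannot fail on bad JSON...
    val raw = env.addSource(new FlinkKafkaConsumer09[String]("events",
      new SimpleStringSchema, kafkaProp))

    // ...then parse in a flatMap, where malformed messages can be dropped.
    val events: DataStream[Event] = raw.flatMap { json =>
      try Some(parseEvent(json))
      catch { case _: Exception => None } // skip malformed JSON
    }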
or nullable fields?
>
> Stephan
>
>
> On Mon, Aug 29, 2016 at 8:04 PM, Jack Huang wrote:
>
>> Hi all,
>>
>> It seems like Flink does not allow passing case class objects with
>> null-valued fields to the next operators. I am getting the following error
>>
Hi all,
It seems like Flink does not allow passing case class objects with
null-valued fields to the next operators. I am getting the following error
message:
Caused by: java.lang.RuntimeException: Could not forward element to
next operator
    at org.apache.flink.streaming.runtime.task
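Stephan's (truncated) reply above hints at Option or nullable fields. A minimal sketch of the Option route, which Flink's Scala case-class serializer handles out of the box (field names are illustrative):

    // Represent "no value" as None instead of null.
    case class Event(id: Int, data: Option[String])

    // Option(x) maps null to None, so possibly-null input stays safe;
    // maybeNullString is a hypothetical possibly-null Java string.
    val e = Event(1, Option(maybeNullString))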
Hi all,
I have a custom deserializer which I pass to a Kafka source to transform
JSON strings into a Scala case class.
val events = env.addSource(new FlinkKafkaConsumer09[Event]("events",
  new JsonSerde(classOf[Event], new Event), kafkaProp))
There are times when the JSON message is malformed, in wh
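One way the malformed-JSON case could be handled inside such a serde. This is a sketch only: falling back to a default instance mirrors the `new Event` constructor argument above but is an assumption, and Jackson is an assumed JSON library (Flink 1.1 package names):

    import org.apache.flink.api.common.typeinfo.TypeInformation
    import org.apache.flink.api.java.typeutils.TypeExtractor
    import org.apache.flink.streaming.util.serialization.DeserializationSchema
    import com.fasterxml.jackson.databind.ObjectMapper

    class JsonSerde[T](clazz: Class[T], default: T) extends DeserializationSchema[T] {
      // transient + lazy: the mapper is re-created on the workers
      // (see the lazy-val discussion further down this page)
      @transient private lazy val mapper = new ObjectMapper()

      override def deserialize(message: Array[Byte]): T =
        try mapper.readValue(message, clazz)
        catch { case _: Exception => default } // fall back instead of failing the source

      override def isEndOfStream(nextElement: T): Boolean = false

      override def getProducedType: TypeInformation[T] = TypeExtractor.getForClass(clazz)
    }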
window function.
>
> Cheers,
> Till
>
> On Wed, Aug 17, 2016 at 3:21 AM, Jack Huang wrote:
>
>> Hi all,
>>
>> I want to window a series of events using SessionWindow and use fold
>> function to incrementally aggregate the result.
>>
>>
Hi all,
I want to window a series of events using SessionWindow and use a fold
function to incrementally aggregate the result.
events
  .keyBy(_.id)
  .window(EventTimeSessionWindows.withGap(Time.minutes(1)))
  .fold(new Session)(eventFolder)
However I get
java.lang.UnsupportedOperationException
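The fold call is presumably rejected because, in Flink 1.1, incremental aggregation was not supported on merging (session) windows. Till's truncated reply above points at a window function instead; a hedged sketch, assuming the event ids are Int and eventFolder is a (Session, Event) => Session function as above:

    import org.apache.flink.streaming.api.windowing.windows.TimeWindow
    import org.apache.flink.util.Collector

    events
      .keyBy(_.id)
      .window(EventTimeSessionWindows.withGap(Time.minutes(1)))
      .apply { (id: Int, w: TimeWindow, es: Iterable[Event], out: Collector[Session]) =>
        // evaluated once when the session window fires: not incremental,
        // but legal for merging windows
        out.collect(es.foldLeft(new Session)(eventFolder))
      }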
tributing them).
>
> Making a Scala 'val' a 'lazy val' often does the trick (at minimal
> performance cost).
>
> On Thu, Aug 4, 2016 at 3:56 AM, Jack Huang wrote:
>
>> Hi all,
>>
>> I want to read a source of JSON String as Scala Case Cla
Hi all,
I want to read a source of JSON strings as Scala case classes. I don't want
to have to write a serde for every case class I have. The idea is:
val events = env.addSource(new FlinkKafkaConsumer09[Event]("events",
  new JsonSerde(classOf[Event]), kafkaProp))
I was implementing my own JsonSer
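Stephan's hint applies here because JSON mapper objects (e.g. Jackson's ObjectMapper, an assumed choice) are typically not Serializable, so holding one eagerly in a val makes the whole serde fail closure serialization. A sketch of the fix:

    // val mapper = new ObjectMapper()                       // shipped with the object; fails if not Serializable
    @transient private lazy val mapper = new ObjectMapper()  // re-created lazily on each worker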
Hi Max,
Changing yarn.heap-cutoff-ratio seems to suffice for the time being.
Thanks for your help.
Regards,
Jack
On Tue, Aug 2, 2016 at 11:11 AM, Jack Huang wrote:
> Hi Max,
>
> Is there a way to limit the JVM memory usage (something like the -Xmx
> flag) for the task manage
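For reference, a hedged flink-conf.yaml sketch of the knobs discussed in this thread (key names as in the Flink 1.x YARN docs, values illustrative):

    taskmanager.heap.mb: 2048     # JVM heap (-Xmx) given to each task manager
    yarn.heap-cutoff-ratio: 0.3   # fraction of the YARN container withheld from the heap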
> > On Fri, Jul 29, 2016 at 11:19 AM, Jack Huang wrote:
> >>
> >> Hi Max,
> >>
> >> Each event is only a few hundred bytes. I am reading from a Kafka
> topic
> >> from some offset in the past, so th
Hi all,
I am running a test Flink streaming task under YARN. It reads messages from
a Kafka topic and writes them to the local file system.
object PricerEvent {
  def main(args: Array[String]) {
    val kafkaProp = new Properties()
    kafkaProp.setProperty("bootstrap.servers", "localhost:66
Hi all,
I have an incoming stream of event objects, each with its session ID. I am
writing a task that aggregates the events by session. The general logic
looks like
case class Event(sessionId: Int, data: String)
case class Session(id: Int, var events: List[Event])
val events = ... //some source
event
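A hedged sketch of one way this aggregation could look, using keyed state via mapWithState (this emits the running session on every event; session timeout/eviction is omitted):

    val sessions = events
      .keyBy(_.sessionId)
      .mapWithState { (e: Event, s: Option[Session]) =>
        val session = s.getOrElse(Session(e.sessionId, Nil))
        session.events = e :: session.events
        (session, Some(session)) // emit the updated session and store it back
      }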
kafkaProp.setProperty("auto.offset.reset", "earliest")
env.addSource(new FlinkKafkaConsumer09[String](input, new
  SimpleStringSchema, kafkaProp))
  .print
I thought *auto.offset.reset* was going to do the trick. What am I missing
here?
Thanks,
Jack Huang
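The answer is not in this excerpt, but a likely explanation (stated as an assumption): Kafka's auto.offset.reset only takes effect when the consumer group has no committed offsets; once the group.id has committed offsets, the consumer resumes from them regardless of that setting. A sketch of forcing a replay:

    // A group.id with no committed offsets lets auto.offset.reset kick in.
    kafkaProp.setProperty("group.id", "replay-" + System.currentTimeMillis())
    kafkaProp.setProperty("auto.offset.reset", "earliest")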
6. The job manager automatically restarts the job under the same job ID
7. Observe from the output that the states are restored
Jack
Jack Huang
On Thu, Apr 21, 2016 at 1:40 AM, Aljoscha Krettek
wrote:
> Hi,
> yes Stefano is spot on! The state is only restored if a job is restarted
> because
>     count
>   }
>   def restoreState(state: Integer) {
>     count = state
>   }
> }
Thanks,
Jack Huang
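For context, a hedged reconstruction of the kind of function being quoted above: a counter checkpointed through the pre-Flink-1.2 Checkpointed interface. Names are illustrative:

    import org.apache.flink.api.common.functions.MapFunction
    import org.apache.flink.streaming.api.checkpoint.Checkpointed

    class CountingMap extends MapFunction[String, (String, Int)] with Checkpointed[Integer] {
      private var count: Int = 0

      override def map(in: String): (String, Int) = {
        count += 1
        (in, count)
      }

      // on checkpoint: hand the current count to Flink
      override def snapshotState(checkpointId: Long, checkpointTimestamp: Long): Integer = count

      // on restore after a failure: put the checkpointed count back
      override def restoreState(state: Integer): Unit = {
        count = state
      }
    }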
On Wed, Apr 20, 2016 at 5:36 AM, Stefano Baghino <
stefano.bagh...@radicalbit.io> wrote:
> My bad, thanks for pointing that out.
>
> On Wed, Apr 20, 2016 at 1:49 PM, Alj
> .keyBy({s => s})
> .mapWithState((in: String, count: Option[Int]) => {
>     val newCount = count.getOrElse(0) + 1
>     ((in, newCount), Some(newCount))
>   })
> .print()
Thanks,
Jack Huang