Using Flink 1.2.

Hi all, I have a question about Batch processing and Sinks.  I have a Flink job 
that parses a User Request log file containing performance data per request. 
It accumulates metric data values into 15 minute time windows.  Each log line 
is mapped to multiple records, so that each of the metric values can be 
'billed' against a few different categories (User, Organisation, Project, 
Action).  The goal of the Flink job is to distil the volume of request data 
into more manageable summaries.  We can then see breakdowns of, say, User CPU 
utilised by distinct categories (e.g. Users) and look for trends/outliers.
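
For context, the fan-out step looks roughly like this (a simplified sketch; 
MetricRecord, RequestLogLine and their accessors are illustrative stand-ins 
for my actual classes, the real code is in the gist linked below):

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.util.Collector;

    DataStream<MetricRecord> records = env
        .readTextFile(logPath)
        .flatMap(new FlatMapFunction<String, MetricRecord>() {
            @Override
            public void flatMap(String line, Collector<MetricRecord> out) {
                RequestLogLine parsed = RequestLogLine.parse(line);
                // Fan out: the same metric values are billed against each
                // category (identifierType) the request belongs to.
                out.collect(new MetricRecord("User", parsed.userId(), parsed.metrics()));
                out.collect(new MetricRecord("Organisation", parsed.organisationId(), parsed.metrics()));
                out.collect(new MetricRecord("Project", parsed.projectId(), parsed.metrics()));
                out.collect(new MetricRecord("Action", parsed.action(), parsed.metrics()));
            }
        });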

It is working very well as a batch, but with one unexpected behaviour.

What I can't understand is that the Sink does not appear to get _any_ records 
until the rest of the chain has completed over the entire file.  I've played 
around with parallelism (1->3), and also verified with logging that the Sink 
isn't seeing any data until the entire previous chain is complete.  Is this 
expected behaviour?  I was thinking that as each Time Window passed, a 
'block' of the results would be emitted to the sink.  Since we use Event Time 
characteristics, shouldn't the batch job emit these chunks as each 15 minute 
segment passes?
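
For reference, the event time side is set up along these lines (again a 
sketch; the one minute out-of-orderness bound is just an illustrative value, 
and timestampMillis() is a stand-in accessor):

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    DataStream<MetricRecord> timestamped = records
        .assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<MetricRecord>(Time.minutes(1)) {
                @Override
                public long extractTimestamp(MetricRecord record) {
                    // Event time comes from the log line itself, not the
                    // wall clock, so windows reflect when the request
                    // actually happened.
                    return record.timestampMillis();
                }
            });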

You can see the base job, along with an example of a single log line and its 
metrics, here:

https://gist.github.com/tallpsmith/a2e5212547fb3c7220b0e49846d2f152

Each Category I've called an 'identifierType' (User, Organisation..), with 
the 'identifier' being the value of that (a UserId for example).  I key the 
stream by this (identifierType, identifier) pair and then Fold the records 
over the Time Window, summing up each metric type's value.  I have yet to 
work out the proper use case for Fold versus Reduce (my understanding is that 
Fold lets the accumulator be a different type from the input, while Reduce 
combines two elements of the same type), so I may have got that wrong, but I 
can't see how that changes the flow here.
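
In code, the keyed windowing looks roughly like this (same illustrative names 
as above; MetricSummary is a stand-in for my accumulator class):

    import org.apache.flink.api.common.functions.FoldFunction;
    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    timestamped
        // Key by the (identifierType, identifier) pair, e.g. ("User", "paul").
        .keyBy(new KeySelector<MetricRecord, Tuple2<String, String>>() {
            @Override
            public Tuple2<String, String> getKey(MetricRecord r) {
                return Tuple2.of(r.identifierType(), r.identifier());
            }
        })
        .window(TumblingEventTimeWindows.of(Time.minutes(15)))
        .fold(new MetricSummary(), new FoldFunction<MetricRecord, MetricSummary>() {
            @Override
            public MetricSummary fold(MetricSummary summary, MetricRecord r) {
                // Sum each metric type's value into the per-key,
                // per-window summary.
                return summary.add(r);
            }
        })
        .addSink(new SummarySink());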

The output is a beautiful rolled-up summary by 15 minute window, by each 
identifierType & identifier.  I have yet to attempt a live Streaming version 
of this, but had thought the Batch version would also be concurrent and start 
emitting 15 minute windows as soon as the stream transitions into the next 
window.  Given the entire job takes about 5 minutes on my laptop for 8 
million raw source lines whose data is spread out over an entire day, I would 
have thought some part of that 5 minutes would be emitting chunks of summary 
data to the sink?  But nothing turns up until the entire job is done.

Maybe the data is just too small?  Or maybe there's buffering going on 
somewhere in the chain?
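
If it is network buffering, I'd have assumed something like the following 
would make the buffers flush more eagerly (untested on my side, and as I 
understand it the default timeout is already 100 ms, which wouldn't explain a 
multi-minute delay):

    // Flush partially filled network buffers every 5 ms instead of
    // waiting for a buffer to fill completely.
    env.setBufferTimeout(5);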

Any pointers to help me understand the flow here would be appreciated.

Cheers,

Paul Smith

