Thank you Fabian for fixing that. The highlight of my day today.
On Nov 15, 2017 1:01 AM, "Fabian Hueske" wrote:
> Hi Colin,
>
> thanks for reporting the bug. I had a look at it and it seems that the
> wrong classloader is used when compiling the code (both for the batch as
> well as the stream
Gordon, Tony,
Thought I would chime in real quick as I have tested this a few different ways
in the last month (not sure if this will be helpful but thought I’d throw it
out there). I actually haven’t noticed any issue with auto-committing with any of
those configs using the Kafka property auto.offset.reset
Hi All,
ES6 is GA today. Does the Flink-ES5 connector fully support ES6? Any
caveats we need to know about?
Thanks,
Fritz
I have a DataStream; is there a way to convert it to a DataSet or a Table so that I
could sort it and persist it to a file?
Thanks a lot!
Hi Guys
Is there any insight into this?
Thanks
Mans
On Monday, November 13, 2017 11:19 AM, M Singh wrote:
Hi Flink Users
I have a few questions about triggers:
If a trigger returns TriggerResult.FIRE from, say, the onProcessingTime method,
the window computation is triggered but element
Hi Xingcan: Thanks for your response.
So to summarize: global windows can be applied to both keyed and non-keyed streams;
we only have to specify a trigger with them to invoke the computation function.
Thanks
Mans
On Wednesday, November 15, 2017 5:43 AM, Xingcan Cui
wrote:
Hi Mans,
the "glob
Hi Aviad,
I had a similar situation and my solution was to use the Flink
monitoring REST API (/jobs/{jobid}/checkpoints) to get the mapping
between a job and its checkpoint files.
Wrap this in a script and run it periodically (in my case, every 30 seconds).
You can also configure each job with an external
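The periodic script described above can be sketched as a small Python poller. The `/jobs/{jobid}/checkpoints` endpoint path is the one named in the reply; the JSON shape assumed below (`latest` → `completed` → `external_path`) and the helper names are assumptions and may differ between Flink versions, so treat this as a starting point rather than a finished tool.

```python
# Sketch of a periodic job -> externalized-checkpoint monitor, assuming the
# Flink monitoring REST API endpoint /jobs/{jobid}/checkpoints returns JSON
# with a latest.completed.external_path field (shape is an assumption).
import json
import time
from urllib.request import urlopen

def latest_external_path(checkpoints):
    """Return the external path of the latest completed checkpoint, or None."""
    completed = checkpoints.get("latest", {}).get("completed") or {}
    return completed.get("external_path")

def poll(base_url, job_id, interval_sec=30):
    """Periodically print the job -> checkpoint-path mapping."""
    while True:
        with urlopen(f"{base_url}/jobs/{job_id}/checkpoints") as resp:
            checkpoints = json.load(resp)
        path = latest_external_path(checkpoints)
        if path:
            print(f"job {job_id} -> {path}")
        time.sleep(interval_sec)
```

A cron job or a simple `while` loop in shell could call this instead of the built-in `poll` loop; the essential part is only the JSON lookup.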
Thanks Piotr, does Flink read/write to ZooKeeper every time it processes a
record?
I thought only the JM uses ZK to keep some meta-level data; not sure why `it
depends on many things like state backend used, state size, complexity of
your application, size of the records, number of machines, their hardwa
Hi,
I have several jobs which are configured for externalized checkpointing
(enableExternalizedCheckpoints).
How can I correlate between checkpoints and jobs?
For example, if I want to write a script which monitors whether a job is up or
not, and if the job is down it will resume the job from the externalized
chec
Hi Mans,
the "global" here indicates the "horizontal" (count, time, etc.) dimension
instead of the "vertical" (keyBy) dimension, i.e., all the received data
will be placed into a single huge window. Actually, it's a concept
orthogonal to the *keyBy* operations, since both *DataStream* and
*Keyed
Hi,
if my job is configured with *setRestartStrategy* and with *enableCheckpointing*
(and enableExternalizedCheckpoints),
in case of failure and job restart, does it restart from the latest
checkpoint or does it restart without any checkpoint?
Hi,
I have been able to observe some off-heap memory “issues” by submitting the Kafka
job provided by Javier Lopez (in a different mailing thread).
TL;DR;
There was no memory leak; the memory pools “Metaspace” and “Compressed Class
Space” just grow in size over time and are only rarely garbage co
Hi Adarsh,
we developed flink-JPMML for streaming model serving on top of the
PMML format and, of course, Flink: we haven't released any official benchmark
numbers yet. We didn't bump into any performance issues while using the
library. In terms of throughput and latency it doesn't require mo
Hi,
Unfortunately, I don't have anything to add. Yes, back pressure doesn't work
correctly for functions that do work outside the main thread and iterations
currently don't work well and can lead to deadlocks.
Did you already open issues for those by now?
Best,
Aljoscha
> On 10. Nov 2017, at
Hi Tony,
Thanks for the report. At first glance, what you described doesn’t seem to
match the expected behavior.
I’ll spend some time later today to check this out.
Cheers,
Gordon
On 15 November 2017 at 5:08:34 PM, Tony Wei (tony19920...@gmail.com) wrote:
Hi Gordon,
When I
Hi Gordon,
When I used FlinkKafkaConsumer010 to consume logs from Kafka, I found that
if I used `setStartFromLatest()` the Kafka consumer API didn't auto-commit
offsets back to the consumer group, but if I used `setStartFromGroupOffsets()`
it worked fine.
I am sure that the configuration for Kafka has
Hi Colin,
thanks for reporting the bug. I had a look at it and it seems that the
wrong classloader is used when compiling the code (both for the batch as
well as the streaming queries).
I have a fix that I need to verify.
It's not necessary to open a new JIRA for that. We can cover all cases
unde
2. The data on the wire, as I understand it, is encrypted by SSL. Is this
correct?
Thanks
On Wed, Nov 15, 2017 at 2:02 PM, Divansh Arora wrote:
> Hi
> I am Divansh.
>
> I have a few questions regarding security features in flink as we need to
> use flink like software in our product. Sorry in adva
Hi
I am Divansh.
I have a few questions regarding security features in Flink, as we need to
use Flink-like software in our product. Sorry in advance if I ask anything
stupid, as I'm a newbie and nt
Questions:
1. Is there any way to take encrypted data in Flink from the client along with
the required crede