ally fresh environment, so until I
start practicing how to savepoint + restore a 1.4.2 -> 1.5.0 job, I look to be
good for the moment. Sorry to have bothered you all.
Thanks.
From: Dawid Wysakowicz
Sent: Friday, July 20, 2018 3:09:46 AM
To: Philip Doctor
Oh you were asking about the cast exception, I haven't seen that before, sorry
to be off topic.
From: Philip Doctor
Sent: Thursday, July 19, 2018 9:27:15 PM
To: Gregory Fee; user
Subject: Re: org.apache.flink.streaming.runtime.streamrecord.LatencyMarker
I'm just a Flink user, not an expert. I've seen that exception before. I have
never seen it be the actual error; I usually see it when some other operator is
throwing an uncaught exception and busy dying. It seems to me that the prior
operator throws this error "Can't forward to the next operator"
Dear Flink Users,
I'm trying to upgrade to Flink 1.5.0; so far everything works except for the
Queryable state client. Now here's where it gets weird. I have the client
sitting behind a web API so the rest of our non-java ecosystem can consume it.
I've got 2 tests, one calls my route directly
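[For readers hitting the same wall, the 1.5-era client is constructed against the queryable state proxy directly. A minimal sketch in Scala, not the poster's code; the proxy host, job id, state name, key, and types below are all placeholders (9069 is the proxy's default port):]

```scala
import java.util.concurrent.CompletableFuture
import org.apache.flink.api.common.JobID
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.queryablestate.client.QueryableStateClient

object QueryDemo {
  def main(args: Array[String]): Unit = {
    // Connects to the queryable state proxy, not the JobManager.
    val client = new QueryableStateClient("proxy-host", 9069)
    val descriptor =
      new ValueStateDescriptor[java.lang.Long]("my-state", classOf[java.lang.Long])

    val resultFuture: CompletableFuture[ValueState[java.lang.Long]] =
      client.getKvState(
        JobID.fromHexString("00000000000000000000000000000000"), // placeholder job id
        "my-queryable-name",   // name the job registered via setQueryable(...)
        "some-key",            // the key to look up
        BasicTypeInfo.STRING_TYPE_INFO,
        descriptor)

    resultFuture.thenAccept(state => println(state.value()))
  }
}
```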
minimum like a confusing API to consume. Can you provide some guidance on
how I would consume this other than unsafely downcasting the contents of
Context<>?
physIQ
From: Chesnay Schepler
Sent: Saturday, July 14, 2018 3:54:33 AM
To: Ashwin Sinha;
Dear Flink Users,
I noticed my sink's `invoke` method was deprecated, so I went to update it, I'm
a little surprised by the new method signature, especially on Context
(copy+pasted below for ease of discussion). Shouldn't Context be Context<IN>,
not Context<T>, based on the docs? I'm having a hard time
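[For readers following along, the 1.5.0 signature in question looks roughly like the sketch below when overridden from Scala; the unbound type parameter on Context is exactly what makes it awkward. The sink body is a placeholder:]

```scala
import org.apache.flink.streaming.api.functions.sink.SinkFunction

class LoggingSink extends SinkFunction[String] {
  // The replacement for the deprecated single-argument invoke. Context's
  // type parameter is unbound here, which is the confusion discussed above.
  override def invoke(value: String, context: SinkFunction.Context[_]): Unit = {
    println(s"$value @ processing time ${context.currentProcessingTime()}")
  }
}
```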
Thank you.
From: Stephan Ewen
Date: Thursday, March 1, 2018 at 9:26 AM
To: "user@flink.apache.org"
Cc: Philip Doctor
Subject: Re: Flink Kafka reads too many bytes Very rarely
Can you specify exactly where you have that excess of data?
Flink basically uses Kafka's standard consumer
If you’ve got an idea for a workaround, I’d be all ears
too.
From: Philip Doctor
Date: Tuesday, February 27, 2018 at 10:02 PM
To: "Tzu-Li (Gordon) Tai" , Fabian Hueske
Cc: "user@flink.apache.org"
Subject: Re: Flink Kafka reads too many bytes Very rarely
Honestly
If I can help more, please let me know. Thank you for your replies.
-Phil
From: "Tzu-Li (Gordon) Tai"
Date: Tuesday, February 27, 2018 at 3:12 AM
To: Fabian Hueske , Philip Doctor
Cc: "user@flink.apache.org"
Subject: Re: Flink Kafka reads too many bytes Very rarely
Hello,
I’m using Flink 1.4.0 with FlinkKafkaConsumer010 and have been for almost a
year. Recently, I started getting messages of the wrong length in Flink,
causing my deserializer to fail. Let me share what I’ve learned:
1. All of my messages are exactly 520 bytes when my producer places them
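[One defensive option while the root cause is tracked down (a sketch; the class name is made up): have the deserialization schema verify the length itself and return null for bad records, which the Flink Kafka consumer treats as "skip this record":]

```scala
import org.apache.flink.api.common.serialization.DeserializationSchema
import org.apache.flink.api.common.typeinfo.{PrimitiveArrayTypeInfo, TypeInformation}

// Passes raw payloads through, but skips any record that isn't the expected size.
class FixedLengthGuard(expectedBytes: Int) extends DeserializationSchema[Array[Byte]] {

  override def deserialize(message: Array[Byte]): Array[Byte] =
    if (message != null && message.length == expectedBytes) message
    else null // returning null tells the Kafka consumer to skip the record

  override def isEndOfStream(nextElement: Array[Byte]): Boolean = false

  override def getProducedType: TypeInformation[Array[Byte]] =
    PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO
}
```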
My logs suggest this is simply what a KeyedStream does by default; I was just
trying to find a doc that says so rather than relying on my logs.
From: Philip Doctor
Date: Tuesday, December 12, 2017 at 5:50 PM
To: "user@flink.apache.org"
Subject: Calling an operator
I’ve got a KeyedStream, and I only want max parallelism (1) per key in the
keyed stream (i.e. if the key is OrganizationFoo, then only 1 input at a time
from OrganizationFoo is processed by this operator). I feel like this is
obvious somehow, but I’m struggling to find the docs for it. Can anyone point
me to them?
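[For what it's worth, the behavior being asked about can be seen in a self-contained sketch (names are illustrative): keyBy routes all records with the same key to the same parallel subtask, and each subtask processes one record at a time, so records for a given key are handled serially:]

```scala
import org.apache.flink.streaming.api.scala._

case class Event(organization: String, payload: String)

object PerKeySerialDemo {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val events = env.fromElements(
      Event("OrganizationFoo", "a"),
      Event("OrganizationFoo", "b"),
      Event("OrganizationBar", "c"))

    // Same key => same subtask, and a subtask handles one record at a time,
    // so the two OrganizationFoo events are never processed concurrently.
    events
      .keyBy(_.organization)
      .map(e => s"${e.organization}: ${e.payload}")
      .print()

    env.execute("per-key serial processing")
  }
}
```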
I have a GlobalWindow with a custom trigger (I leave windows open for a
variable length of time depending on how much data I have vs the expected
amount, so I’m manipulating triggerContext.registerProcessingTimeTimer()).
When I emit data into my data stream, the Flink execution environment appears
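[For other readers, a trigger of that shape generally looks like the sketch below; the fixed timeout stands in for the poster's observed-vs-expected-data logic, and the class name is made up:]

```scala
import org.apache.flink.streaming.api.windowing.triggers.{Trigger, TriggerResult}
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow

// Fires and purges the window once a processing-time timeout elapses after an
// element arrives. A variable-length trigger would compute `timeoutMs` from
// observed vs. expected data instead of fixing it at construction.
class TimeoutTrigger[T](timeoutMs: Long) extends Trigger[T, GlobalWindow] {

  override def onElement(element: T, timestamp: Long, window: GlobalWindow,
                         ctx: Trigger.TriggerContext): TriggerResult = {
    ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime + timeoutMs)
    TriggerResult.CONTINUE
  }

  override def onProcessingTime(time: Long, window: GlobalWindow,
                                ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.FIRE_AND_PURGE

  override def onEventTime(time: Long, window: GlobalWindow,
                           ctx: Trigger.TriggerContext): TriggerResult =
    TriggerResult.CONTINUE

  // A production trigger should also delete any timers it registered.
  override def clear(window: GlobalWindow, ctx: Trigger.TriggerContext): Unit = ()
}
```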
I have a few Flink jobs. Several of them share the same code. I was wondering
if I could make those shared steps their own job and then specify that the sink
for one process is the source for another process, stitching my jobs together.
Is this possible? I didn’t see it in the docs. It feels
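[The usual pattern here (a sketch, assuming the Kafka 0.10 connectors discussed elsewhere in this digest; `sharedOutput`, `env`, broker, and topic are placeholders) is to stitch jobs through an external channel such as a Kafka topic:]

```scala
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}

// Job A: the shared steps sink their output to a topic...
sharedOutput.addSink(
  new FlinkKafkaProducer010[String]("broker:9092", "shared-steps", new SimpleStringSchema()))

// ...Job B: and the downstream job sources from that same topic.
val props = new Properties()
props.setProperty("bootstrap.servers", "broker:9092")
props.setProperty("group.id", "downstream-job")
val input = env.addSource(
  new FlinkKafkaConsumer010[String]("shared-steps", new SimpleStringSchema(), props))
```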
Huge thank you!
From: Ted Yu
Date: Monday, June 19, 2017 at 9:19 PM
To: Philip Doctor
Cc: "user@flink.apache.org"
Subject: Re: Possible Data Corruption?
See this thread:
http://search-hadoop.com/m/Flink/VkLeQm2nZm1Wa7Ny1?subj=Re+Painful+KryoException+java+lang+IndexOutOfBoundsEx
Dear Flink Users,
I have a Flink (v1.2.1) process I left running for the last five days. It
aggregates a bit of state and exposes it via Queryable State. It ran correctly
for the first 3 days. There were no code changes or data changes, but suddenly
Queryable State got weird. The process log
Hello,
My job differs slightly from example Queryable State jobs. I have a keyed
stream and I will emit managed ValueState at certain points at runtime but the
names aren’t entirely known beforehand.
I have checkpointing enabled and when I restore from a checkpoint, everything
*almost* works
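[For comparison, the usual statically-named registration looks like this sketch (state and queryable names are placeholders); the poster's twist is that those names are only known at runtime:]

```scala
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

// Managed keyed ValueState exposed through Queryable State.
class TrackMetric extends RichFlatMapFunction[Double, Double] {
  @transient private var state: ValueState[java.lang.Double] = _

  override def open(parameters: Configuration): Unit = {
    val desc = new ValueStateDescriptor[java.lang.Double]("metric", classOf[java.lang.Double])
    desc.setQueryable("metric-query") // the name clients pass to getKvState
    state = getRuntimeContext.getState(desc)
  }

  override def flatMap(value: Double, out: Collector[Double]): Unit = {
    state.update(value)
    out.collect(value)
  }
}
```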
Dear Flink Users,
I’m getting started with Flink and I’ve bumped into a small problem. I have a
keyed stream like this:
val stream = env.addSource(consumer)
  .flatMap(new ValidationMap()).name("ValidationMap")
  .keyBy(x => (x.getObj.foo(), x.getObj.bar(), x.getObj.baz()))
  .flatMap(new Calcul