Hey,
We don’t have a minimal supported version in the docs, as we haven’t
experienced any issues specific to Kubernetes versions so far.
We don’t really rely on any newer features.
Cheers
Gyula
On Fri, 6 Oct 2023 at 06:02, Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> wrote:
> It seems that the problem was caused by k8s 1.19 [...]
It seems that the problem was caused by k8s 1.19.
When we deployed the Flink operator on vanilla k8s 1.19 we got the same error
that we have on OKD 4.6.0. We are planning to update OKD to a newer version
that will use a more up-to-date k8s.
What is the minimal k8s version required for/supported by the Flink operator?
Thank you Zach,
our flink-operator and flink deployments are in the same namespace, called
"flink". We had executed what is described in [1] before my initial
message.
We are using OKD 4.6.0, which according to the docs uses k8s 1.19. The
very same config is working fine on "vanilla" k8s, but for s
I haven’t used OKD but it sounds like OLM. If that’s the case, I’m assuming
the operator was deployed to the “operators” namespace. In that case,
you’ll need to create the RBACs and such in the Flink namespace for that
deployment to work.
For example, this needs to be in each namespace that you wan
Hi Arvid Heise,
Thanks for your reply! It's not classical sensor aggregation.
The reason for not using a window join is the very long time gap between
the patient's behaviors.
There is a long gap of days, even months, between the appointment with the
doctor and the visit, and between tests, and betwee
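(For illustration, one common window-free alternative, a sketch that is not from this thread: buffer the first event in keyed state and join whenever the matching event arrives, however much later. Appointment, Visit, and Outcome are illustrative types.)

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

// Joins an Appointment with a later Visit for the same patientId, however
// far apart in time they arrive.
public class LongGapJoin
        extends KeyedCoProcessFunction<String, Appointment, Visit, Outcome> {

    private transient ValueState<Appointment> pending;

    @Override
    public void open(Configuration parameters) {
        pending = getRuntimeContext().getState(
                new ValueStateDescriptor<>("pending-appointment", Appointment.class));
    }

    @Override
    public void processElement1(Appointment a, Context ctx, Collector<Outcome> out)
            throws Exception {
        // Remember the appointment until the matching visit arrives.
        pending.update(a);
    }

    @Override
    public void processElement2(Visit v, Context ctx, Collector<Outcome> out)
            throws Exception {
        Appointment a = pending.value();
        if (a != null) {
            out.collect(new Outcome(a, v));
            pending.clear();  // consider a timer to expire very old state
        }
    }
}

It would be wired up as appointments.connect(visits).keyBy(a -> a.patientId, v -> v.patientId).process(new LongGapJoin()).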
Can you please describe your actual use case? What do you want to achieve:
low latency or high throughput? What are the consumers of the produced
dataset?
It sounds to me as if this is classical sensor aggregation. I have not
heard of any sensor aggregation that doesn't use windowing. So you'd
usua
Hi!
Changes to the input tables will cause corresponding changes in the output
table.
Which sink are you using? If it is an upsert sink, the Flink SQL planner
will filter out UPDATE_BEFORE messages automatically. Also, if your sink
supports something like "ignore delete messages", it can also filter out
de
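(For illustration, a minimal sketch of the upsert case using the Table API; the table names and connector options are assumptions, and `clicks` is an assumed source table.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertSinkExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // An upsert sink is keyed by PRIMARY KEY, so the planner can drop
        // UPDATE_BEFORE messages: each write simply overwrites per key.
        tEnv.executeSql(
                "CREATE TABLE result_table (" +
                "  user_id STRING," +
                "  cnt BIGINT," +
                "  PRIMARY KEY (user_id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'results'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'key.format' = 'json'," +
                "  'value.format' = 'json'" +
                ")");

        // The aggregation emits a changelog; writing it to the upsert sink
        // filters the UPDATE_BEFORE rows automatically.
        tEnv.executeSql(
                "INSERT INTO result_table " +
                "SELECT user_id, COUNT(*) AS cnt FROM clicks GROUP BY user_id");
    }
}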
Hi Nick,
yes, you may be lucky that no involved classes have changed (much), but
there is no guarantee.
You could try to fiddle around and add the respective class
(ClosureCleanerLevel) from Flink 1.9 to your jar, but it's hacky at best.
Another option is to bundle Flink 1.9 with your code if
Hi Nick,
all Flink dependencies are only compatible with the same major version.
You can work around it by checking out the code [1], manually setting the
dependency of the respective module to your flink-core version, and reverting
all changes that do not compile. But there is no guarantee that thi
Hi all,
Thanks for the input. Much appreciated.
Regards,
Wouter
On Mon, 4 Mar 2019 at 20:40, Addison Higham wrote:
> Hi there,
>
> As far as a runtime for students, it seems like docker is your best bet.
> However, you could have them instead package a jar using some interface
> (for example,
Hi there,
As far as a runtime for students, it seems like Docker is your best bet.
However, you could have them instead package a jar using some interface
(for example, see
https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/packaging.html,
which details the `Program` interface) and th
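(For reference, a minimal sketch of a job packaged against the Flink 1.7-era `Program` interface; this DataSet-API interface was later removed, and the job body here is illustrative.)

import org.apache.flink.api.common.Plan;
import org.apache.flink.api.common.Program;
import org.apache.flink.api.java.ExecutionEnvironment;

// Students implement Program instead of writing a main() method; a grading
// harness can then load the class and obtain the Plan without executing it.
public class StudentJob implements Program {
    @Override
    public Plan getPlan(String... args) {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.readTextFile(args[0])
           .map(String::toUpperCase)
           .writeAsText(args[1]);
        return env.createProgramPlan("student-job");
    }
}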
Hey all,
Thanks for the replies. The issues we were running into (which are not
specific to Docker):
- Students wrongly changing the template caused the container to fail.
- We give full points if the output matches our solutions (and none
otherwise), but it would be nice if we could give partial grades p
Hi Wouter,
We are using Docker Compose (Flink JM, Flink TM, Kafka, ZooKeeper) setups
for our trainings and it is working very well.
We have an additional container that feeds a Kafka topic via the
command-line producer to simulate somewhat realistic behavior.
Of course, you can do it without Kafk
It would help to understand the current issues that you have with this
approach. I used a similar approach (not with Flink, but with a similar big
data technology) some years ago.
> On 04.03.2019 at 11:32, Wouter Zorgdrager wrote:
>
> Hi all,
>
> I'm working on a setup to use Apache Flink in an as
Hi Chesnay,
Thanks for the reply. Do you know how to serve predictions using the trained model?
Where is the model saved?
Regards,
Adarsh
On Wed, Nov 1, 2017 at 4:46 PM, Chesnay Schepler wrote:
> I don't believe this to be possible. The ML library works exclusively with
> the Batch API.
>
>
> On 30.
I don't believe this to be possible. The ML library works exclusively
with the Batch API.
On 30.10.2017 12:52, Adarsh Jain wrote:
Hi,
Is there a way to use Stochastic Outlier Selection (SOS) and/or SVM
using CoCoA with streaming data?
Please suggest and give pointers.
Regards,
Adarsh
Oliver Swoboda wrote:
Hi Josh, thank you for your quick answer!
2016-11-03 17:03 GMT+01:00 Josh Elser <els...@apache.org>:
Hi Oliver,
Cool stuff. I wish I knew more about Flink to make some better
suggestions. Some points inline, and sorry in advance if I suggest
somet
Hi Josh, thank you for your quick answer!
2016-11-03 17:03 GMT+01:00 Josh Elser :
> Hi Oliver,
>
> Cool stuff. I wish I knew more about Flink to make some better
> suggestions. Some points inline, and sorry in advance if I suggest
> something outright wrong. Hopefully someone from the Flink side
Hi Oliver,
Cool stuff. I wish I knew more about Flink to make some better
suggestions. Some points inline, and sorry in advance if I suggest
something outright wrong. Hopefully someone from the Flink side can help
give context where necessary :)
Oliver Swoboda wrote:
Hello,
I'm using Flink
Hi Govindarajan,
Regarding the stagnant Kafka offsets, it’ll be helpful if you can supply more
information for the following to help us identify the cause:
1. What is your checkpointing interval set to?
2. Did you happen to have set the “max.partition.fetch.bytes” property in the
properties give
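(For background on point 1, a minimal sketch; the interval, topic, and the universal Kafka consumer are assumptions. With checkpointing enabled, the consumer commits offsets back to Kafka only when a checkpoint completes, so the lag reported by Kafka tooling can trail a healthy job by up to one checkpoint interval.)

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class CheckpointedKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Offsets are committed to Kafka on checkpoint completion, so the
        // externally visible consumer offsets advance at most once per
        // minute here, even when the job keeps pace with the source.
        env.enableCheckpointing(60_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");

        env.addSource(new FlinkKafkaConsumer<>(
                        "events", new SimpleStringSchema(), props))
           .print();

        env.execute("checkpointed-kafka-job");
    }
}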
Hi Govindarajan,
you can broadcast the stream with the debug logger information by calling
`stream.broadcast()`. Then every stream record should be sent to all
sub-tasks of the downstream operator.
Cheers,
Till
On Mon, Oct 3, 2016 at 5:13 PM, Govindarajan Srinivasaraghavan <
govindragh...@gmail.com> w
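(A minimal sketch of that pattern; the streams and types here are illustrative.)

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class BroadcastDebugExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.socketTextStream("localhost", 9000);
        DataStream<String> debugLevels = env.socketTextStream("localhost", 9001);

        // broadcast() replicates every record of the debug stream to all
        // parallel sub-tasks of the downstream operator.
        events.connect(debugLevels.broadcast())
              .flatMap(new CoFlatMapFunction<String, String, String>() {
                  private String level = "INFO";

                  @Override
                  public void flatMap1(String event, Collector<String> out) {
                      out.collect("[" + level + "] " + event);
                  }

                  @Override
                  public void flatMap2(String newLevel, Collector<String> out) {
                      level = newLevel;  // every sub-task sees the update
                  }
              })
              .print();

        env.execute("broadcast-debug");
    }
}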
FYI: FLINK-4497 [1] requests Scala tuple and case class support for the
Cassandra sink and was opened about a month ago.
[1] https://issues.apache.org/jira/browse/FLINK-4497
2016-09-30 23:14 GMT+02:00 Stephan Ewen :
> How hard would it be to add case class support?
>
> Internally, tuples and cas
Hi Gordon,
- I'm currently running my Flink program on 1.2-SNAPSHOT with a Kafka source
and I have checkpointing enabled. When I look at the consumer offsets in Kafka
they appear to be stagnant and there is a huge lag. But I can see my Flink
program is keeping pace with the Kafka source in JMX metrics and output
Hi!
- I'm currently running my Flink program on 1.2-SNAPSHOT with a Kafka source and
I have checkpointing enabled. When I look at the consumer offsets in Kafka they
appear to be stagnant and there is a huge lag. But I can see my Flink program
is keeping pace with the Kafka source in JMX metrics and outputs. I
How hard would it be to add case class support?
Internally, tuples and case classes are treated quite similarly, so I think
it could be quite a simple extension...
On Fri, Sep 30, 2016 at 4:22 PM, Sanne de Roever
wrote:
> Thanks Chesnay. Have a good weekend.
>
> On Thu, Sep 29, 2016 at 5:03 PM, C
Thanks Chesnay. Have a good weekend.
On Thu, Sep 29, 2016 at 5:03 PM, Chesnay Schepler
wrote:
> the cassandra sink only supports java tuples and POJO's.
>
>
> On 29.09.2016 16:33, Sanne de Roever wrote:
>
>> Hi,
>>
>> Does the Cassandra sink support Scala and case classes? It looks like
>> using
The Cassandra sink only supports Java tuples and POJOs.
On 29.09.2016 16:33, Sanne de Roever wrote:
Hi,
Does the Cassandra sink support Scala and case classes? It looks like
using Java is the best practice at the moment.
Cheers,
Sanne
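(For reference, the supported path looks like this; a minimal sketch where the keyspace, table, and host are assumptions.)

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class CassandraTupleExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Java tuples (or annotated POJOs) are what the sink accepts;
        // Scala case classes are what FLINK-4497 asks for.
        DataStream<Tuple2<String, Long>> counts =
                env.fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L));

        CassandraSink.addSink(counts)
                .setQuery("INSERT INTO my_ks.counts (word, cnt) VALUES (?, ?);")
                .setHost("127.0.0.1")
                .build();

        env.execute("cassandra-tuple-sink");
    }
}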
Ok, thanks Aljoscha.
As an alternative to using Flink to maintain the schedule state, I could
take the (e, t2) stream and write to an external key-value store with a
bucket for each minute. Then have a separate service which polls the
key-value store every minute and retrieves the current bucket, a
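(A tiny sketch of the bucketing idea; KvClient is a hypothetical key-value client, not a Flink or thread-provided API.)

// Hypothetical key-value client interface.
interface KvClient {
    void append(String bucketKey, String eventId);
}

class Scheduler {
    /** Put each (e, t2) pair into the bucket of the minute in which t2 falls. */
    static void schedule(KvClient kvStore, String eventId, long t2Millis) {
        long bucket = t2Millis / 60_000;  // epoch minute of the fire time
        kvStore.append("schedule/" + bucket, eventId);
    }
}

A poller that wakes every minute then reads only the bucket for the current epoch minute.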
Hi Josh,
I'll have to think a bit about that one. Once I have something I'll get
back to you.
Best,
Aljoscha
On Wed, 8 Jun 2016 at 21:47 Josh wrote:
> This is just a question about a potential use case for Flink:
>
> I have a Flink job which receives tuples with an event id and a timestamp
> (e
Thanks!
On Thu, Dec 10, 2015 at 12:32 PM, Maximilian Michels wrote:
> Hi Cory,
>
> The issue has been fixed in the master and the latest Maven snapshot.
> https://issues.apache.org/jira/browse/FLINK-3143
>
> Cheers,
> Max
>
> On Tue, Dec 8, 2015 at 12:35 PM, Maximilian Michels
> wrote:
> > Than
Hi Cory,
The issue has been fixed in the master and the latest Maven snapshot.
https://issues.apache.org/jira/browse/FLINK-3143
Cheers,
Max
On Tue, Dec 8, 2015 at 12:35 PM, Maximilian Michels wrote:
> Thanks for the stack trace, Cory. Looks like you were on the right
> path with the Spark issue
Thanks for the stack trace, Cory. Looks like you were on the right
path with the Spark issue. We will file an issue and correct it soon.
Thanks,
Max
On Mon, Dec 7, 2015 at 8:20 PM, Stephan Ewen wrote:
> Sorry, correcting myself:
>
> The ClosureCleaner uses Kryo's bundled ASM 4 without any reason
Sorry, correcting myself:
The ClosureCleaner uses Kryo's bundled ASM 4 without any reason - simply
adjusting the imports to use the common ASM (which is 5.0) should do it ;-)
On Mon, Dec 7, 2015 at 8:18 PM, Stephan Ewen wrote:
> Flink's own asm is 5.0, but the Kryo version used in Flink bundles
Flink's own ASM is 5.0, but the Kryo version used in Flink bundles
ReflectASM with a dedicated ASM version 4 (no lambda support).
Might be as simple as bumping the Kryo version...
On Mon, Dec 7, 2015 at 7:59 PM, Cory Monty
wrote:
> Thanks, Max.
>
> Here is the stack trace I receive:
>
> ja
Thanks, Max.
Here is the stack trace I receive:
java.lang.IllegalArgumentException:
at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.<init>(Unknown Source)
at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.<init>(Unknown Source)
at com.esotericsoftware.reflectasm
For completeness, could you provide a stack trace of the error message?
On Mon, Dec 7, 2015 at 6:56 PM, Maximilian Michels wrote:
> Hi Cory,
>
> Thanks for reporting the issue. Scala should run independently of the
> Java version. We are already using ASM version 5.0.4. However, some
> code uses
Hi Cory,
Thanks for reporting the issue. Scala should run independently of the
Java version. We are already using ASM version 5.0.4. However, some
code uses the ASM4 opcodes, which don't seem to work with Java 8.
This needs to be fixed. I'm filing a JIRA.
Cheers,
Max
On Mon, Dec 7, 2015 at 4:
Hey Jerry,
Jay is on the right track; this issue has to do with the Flink operator
life cycle. When you call execute(), all your user-defined classes get
serialized so that they can be shipped to the workers on the cluster. To
execute some code before your FlatMapFunction starts processing the data
y
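(In Flink the standard hook for this is a rich function's open() method, which runs on each worker after the function has been deserialized. A minimal sketch; the Redis host is an assumption.)

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;
import redis.clients.jedis.Jedis;

// The function object is serialized when execute() is called; the Jedis
// client is created in open(), on the worker, after deserialization, so
// the client itself never needs to be serializable.
public class RedisLookupFlatMap extends RichFlatMapFunction<String, String> {

    private transient Jedis jedis;  // transient: not shipped with the function

    @Override
    public void open(Configuration parameters) {
        jedis = new Jedis("localhost");  // assumed Redis host
    }

    @Override
    public void flatMap(String key, Collector<String> out) {
        String value = jedis.get(key);
        if (value != null) {
            out.collect(key + "=" + value);
        }
    }

    @Override
    public void close() {
        if (jedis != null) {
            jedis.close();
        }
    }
}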
Maybe wrapping Jedis with a serializable class will do the trick?
But in general, is there a way to reference jar classes in Flink apps without
serializing them?
> On Sep 4, 2015, at 1:36 PM, Jerry Peng wrote:
>
> Hello,
>
> So I am trying to use jedis (redis java client) with Flink streami