Hi Vishwas
1. You could use '-yt' to ship specified files to the classpath; please
refer to [1] for more details.
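For example, shipping a directory of property files when submitting to YARN
might look like this (a sketch; the paths, entry class and jar name are
illustrative, not from this thread):

flink run -m yarn-cluster -yt /path/to/conf-dir -c com.example.MyJob my-job.jar

Files shipped this way should then be visible on the classpath of the YARN
containers.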
2. If the properties are only loaded on the client side before executing the
application, you could let your application just read from the local property
data.
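A minimal sketch of option 2 with Flink's ParameterTool (the properties path
and key are illustrative):

import org.apache.flink.api.java.utils.ParameterTool;

// Runs on the client, before the job is submitted (fromPropertiesFile throws IOException).
ParameterTool params = ParameterTool.fromPropertiesFile("/path/to/app.properties");
String dbUrl = params.get("db.url"); // hypothetical property key

The values can then be passed into the job, e.g. via constructor parameters of
your functions or env.getConfig().setGlobalJobParameters(params).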
The keys parameter means a joint (composite) primary key; it is not a list of
independent keys. In your case, maybe there is a single key?
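To make the joint-key point concrete: all the fields passed to eval() together
form one composite lookup key (a sketch; two key fields are assumed purely for
illustration):

// eval(userId, regionId) builds ONE composite cache key, Row.of(userId, regionId).
public void eval(Object... keys) {
    Row keyRow = Row.of(keys);
    List<Row> cachedRows = cache.getIfPresent(keyRow);
    if (cachedRows != null) {
        cachedRows.forEach(this::collect);
    }
}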
Best, Jingsong Lee
Sent from Alibaba Mail for iPhone
-- Original Mail --
From: Flavio Pompermaier
Date: 2019-06-28 22:53:31
Recipient: JingsongLee
CC: user
Subject: Re: LookupableTab
Hi,
I am trying to add external property files to the Flink classpath for
my application. These files are not part of the fat jar. I put them
under the lib folder, but Flink can't find them. How can I manage
external property files that need to be read by Flink?
Thanks,
Vishwas
Sorry, I copied and pasted the current eval method twice... I'd do this:
public void eval(Object... keys) {
    for (Object key : keys) {
        Row keyRow = Row.of(key);
        if (cache != null) {
            List<Row> cachedRows = cache.getIfPresent(keyRow);
            if (cachedRows != null) {
                cachedRows.forEach(this::collect);
            }
        }
    }
}
This could be a good fit, I'll try to dig into it and see if it can be
adapted to a REST service.
The only strange thing I see is that the key of the local cache is per
block of keys... am I wrong?
Shouldn't it cycle over the list of passed keys?
Right now it's the following:
Cache<Row, List<Row>> cache;
public
This is because Flink doesn't unify the execution in different environments.
The community has discussed before how to enhance the Flink client
API. The initial proposal is to introduce a FlinkConf which contains all the
configuration so that we can unify the executions in all environments (IDE,
Hi, Singh,
I don't think that should work. The -D or -yD parameters need to be passed
to the Flink start-up scripts or the "flink run" command. I don't think the
IntelliJ VM arguments are equivalent to that. In fact, I'm not aware of any
method to set "-D" parameters when running jobs in the IDE.
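(One alternative, though it is not the same as -D: build a Configuration in
code and hand it to a local environment. A minimal sketch, with a hypothetical
config key:)

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
conf.setString("some.config.key", "value"); // hypothetical key/value
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironment(4, conf);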
Thanks
Hi Flavio:
I just implemented a JDBCLookupFunction [1]. You can use it as a table
function [2], or use blink temporal table join [3] (needs blink planner
support).
I added a Google Guava cache in JDBCLookupFunction with a configurable
cacheMaxSize (to avoid OOM) and cacheExpireMs (for the freshness of lookup
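The cache setup presumably looks roughly like this (a Guava sketch of the
idea, not the exact Flink source; cacheMaxSize and cacheExpireMs mirror the
options named above):

import java.util.List;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.flink.types.Row;

Cache<Row, List<Row>> cache = CacheBuilder.newBuilder()
        .maximumSize(cacheMaxSize)                              // bound entries, avoid OOM
        .expireAfterWrite(cacheExpireMs, TimeUnit.MILLISECONDS) // keep lookups fresh
        .build();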
Hi to all,
I have a use case where I'd like to enrich a stream using a rarely updated
lookup table.
Basically, I'd like to be able to set a refresh policy that is triggered
either when a key is not found (a new key has probably been added in the
meantime) or when a configurable refresh period has elapsed
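Both triggers map quite naturally onto a Guava LoadingCache (a sketch;
fetchFromLookupTable is a hypothetical method that queries the lookup table or
REST service):

import java.util.List;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.apache.flink.types.Row;

LoadingCache<Row, List<Row>> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(refreshPeriodMs, TimeUnit.MILLISECONDS) // configurable refresh period
        .build(new CacheLoader<Row, List<Row>>() {
            @Override
            public List<Row> load(Row key) throws Exception {
                // A miss (e.g. a newly added key) falls through to the source.
                return fetchFromLookupTable(key); // hypothetical
            }
        });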
Indeed, looking at StreamElementSerializer, the duplicate() method could be
bugged:
@Override
public StreamElementSerializer<T> duplicate() {
    TypeSerializer<T> copy = typeSerializer.duplicate();
    return (copy == typeSerializer) ? this : new StreamElementSerializer<>(copy);
}
Is it safe to return
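If the worry is that returning this lets multiple threads share a stateful
inner serializer, a defensive variant would always create a fresh copy (a
sketch, not a proposed patch):

@Override
public StreamElementSerializer<T> duplicate() {
    // Never hand out `this`; each caller gets an independent instance.
    return new StreamElementSerializer<>(typeSerializer.duplicate());
}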
Hey,
this page explains how to run a Flink job:
https://ci.apache.org/projects/flink/flink-docs-master/getting-started/tutorials/local_setup.html
On Sat, May 25, 2019 at 1:28 PM RAMALINGESWARA RAO THOTTEMPUDI <
tr...@iitkgp.ac.in> wrote:
>
> Respected All,
> I am a new learner of Apache Flink. I
Hi Fabian,
we had similar errors with Flink 1.3 [1][2], and the error was caused by the
fact that a serializer was sharing the same object with multiple threads.
The error was not deterministic and changed from time to time.
So maybe it could be something similar (IMHO).
[1] http://codeha.us/a
Hi,
I've run it on a standalone Flink cluster. No Yarn involved.
From: Haibo Sun
Sent: Friday, June 28, 2019 6:13 AM
To: Vadim Vararu
Cc: user@flink.apache.org
Subject: Re: Flink batch job memory/disk leak when invoking set method on a
static Configuration object.