Hi Jörn,
I tried that. Here is my snippet:
String[] loadedlibs =
    getLoadedLibraries(Thread.currentThread().getContextClassLoader());
if (!containsVibeSimpleLib(loadedlibs)) {
    System.loadLibrary("vibesimplejava");
}
Now I get the exception Unexpected errorjava.lang.UnsatisfiedLinkError:
com.vo
I don’t know dylibs in detail, but could you call a static method that checks
whether the library has already been loaded and, if not, loads it (singleton
pattern)?
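A rough sketch of that idea (the class name here is made up):

    // Sketch only: a holder class whose static initializer runs at most once
    // per class loader, so System.loadLibrary is called a single time from it.
    public final class VibeSimpleJavaLoader {

        static {
            System.loadLibrary("vibesimplejava");
        }

        private VibeSimpleJavaLoader() {
        }

        // Calling this forces class initialization (and thus the load) exactly once.
        public static void ensureLoaded() {
        }
    }

Callers would go through VibeSimpleJavaLoader.ensureLoaded() instead of calling
System.loadLibrary directly. Note that if the UnsatisfiedLinkError comes from two
different class loaders loading the same native library, which a job restart on a
long-running cluster can cause, a guard inside one class loader will not help.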
> On 27.08.2019, at 06:39, Vishwas Siravara wrote:
>
> Hi guys,
> I have a flink application that loads a dylib like this
>
Hi guys,
I have a flink application that loads a dylib like this
System.loadLibrary("vibesimplejava");
The application runs fine, but when I restart the job I get this exception:
com.visa.aip.cryptolib.aipcyptoclient.EncryptionException: Unexpected
errorjava.lang.UnsatisfiedLinkError: Native Libr
Hi Qi,
If you want better isolation between different Flink jobs and
multi-tenant support,
I suggest you use the per-job mode. Each Flink job is then a separate YARN
application,
and YARN uses cgroups to limit the resources used by each application.
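For reference, a per-job submission looks roughly like this (the jar path,
parallelism, and memory sizes are placeholders; check the CLI docs of your
Flink version for the exact flags):

    ./bin/flink run -m yarn-cluster -p 10 -ys 2 -yjm 1024m -ytm 4096m /path/to/your-job.jar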
Best,
Yang
Qi Kang wrote on Mon, Aug 26, 2019, 9:02 PM:
> H
-- Forwarded message -
From: orlando qi
Date: Fri, Aug 23, 2019, 10:44 AM
Subject: FLINK TABLE API UDAF QUESTION, For heap backends, the new state
serializer must not be incompatible
To:
Hello everyone:
I defined a UDAF while using the Flink Table API to
achieve the a
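For reference, the skeleton of my UDAF follows the usual AggregateFunction
pattern, roughly like this (the names and accumulator layout here are simplified
placeholders, not my actual code):

    import org.apache.flink.table.functions.AggregateFunction;

    // Simplified sketch of a Table API UDAF: a plain count aggregate.
    public class CountUdaf extends AggregateFunction<Long, CountUdaf.CountAcc> {

        // The accumulator is kept in state; changing its fields between job
        // versions can make the new state serializer incompatible with old state.
        public static class CountAcc {
            public long count = 0L;
        }

        @Override
        public CountAcc createAccumulator() {
            return new CountAcc();
        }

        @Override
        public Long getValue(CountAcc acc) {
            return acc.count;
        }

        // Invoked by the runtime for every input row.
        public void accumulate(CountAcc acc, String value) {
            acc.count += 1;
        }
    }

Such a function would be registered with tableEnv.registerFunction(...) before
being used in a query.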
Hi
on 2019/8/27 11:35, Simon Su wrote:
Could not resolve dependencies for project
org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find
artifact org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT
in maven-ali (http://maven.aliyun.com/nexus/content/groups/public/)
A
Hi all
I’m trying to build the Flink 1.9 release branch, and it raises an error like:
Could not resolve dependencies for project
org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find artifact
org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT in maven-ali
(http://maven.al
Hi,
You can read the Hive-related documentation [1] first. It should be OK to just
specify the Hive version as 1.2.1 for your Cloudera Hive 1.1.0 deployment.
Hive 1.1 will be officially supported in Flink 1.10.
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/hive/index.html#su
Put the flink-connector-hive jar on the classpath.
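The catalog entry in the SQL client YAML would then look roughly like this
(the conf directory path is a placeholder):

    catalogs:
      - name: mynewhive
        type: hive
        hive-conf-dir: /path/to/hive-conf   # directory containing hive-site.xml
        hive-version: 1.2.1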
On Sun, Aug 25, 2019 at 9:14 AM Yebgenya Lazarkhosrouabadi <
lazarkhosrouab...@integration-factory.de> wrote:
> Hello,
>
> I’m trying to use the HiveCatalog in Flink 1.9. I modified the YAML file like
> this:
>
> catalogs:
>
> - name: mynewhive
Yes, you are right. SplittableIterator will cause each worker to list all the
files. Thanks!
Best,
Lu
On Fri, Aug 16, 2019 at 12:33 AM Zhu Zhu wrote:
> Hi Lu,
>
> I think it's OK to choose any way as long as it works.
> Though I've no idea how you would extend SplittableIterator in your case.
> The
Thanks Till and Zili!
I see that the docker-flink repo now has 1.9 set up; we are only waiting for it
to be pushed to Docker Hub. We should be fine once that is done.
Thanks again!
---
Oytun Tez
*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.c
Thanks for the clear explanation.
On Mon, Aug 26, 2019 at 10:34 PM Seth Wiesman wrote:
> Hi Debasish,
>
> As it seems you're aware, TypeInformation is Flink’s internal type system
> used for serialization between tasks and in/out of state backends.
>
> The issue you are seeing is because you are
Actually the Scala and Java code are completely separate; in fact they are
part of separate test suites. We have both the Scala and Java APIs in our
application, but they are completely separate. And yes, in Scala the
implicits did the trick, while in Java I had to pass the TypeInformation explicitly
with addSou
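Concretely, the Java side ended up looking roughly like this (the source and
POJO names here are placeholders, not our actual classes):

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Pass the TypeInformation explicitly to addSource, since Java has no
    // implicits to supply it the way the Scala API does.
    DataStream<Data> stream =
        env.addSource(new MyDataSource(), TypeInformation.of(Data.class));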
Glad that you sorted it out, and sorry for the late reply.
Yes, I think the problem is how your `TypeInformation` for `Data` is being
passed to the DataStreamSource construct.
Regarding why the Scala side works but not Java, it might have something
to do with the implicit variable passing for your
Hi,
I am using the command
" ./bin/flink run ./examples/batch/WordCount.jar --input
/home/trrao/Desktop/ram2.txt --output /home/trrao/Desktop/ramop.txt "
But I am getting "Caused by: java.net.ConnectException: Connection refused".
Kindly give the correct way to run the WordCount example in Flink.
Hi Yang,
Many thanks for your detailed explanation. We are using Hadoop 2.6.5, so
setting the multiple-assignments-enabled parameter is not an option.
BTW, do you prefer using a YARN session cluster rather than per-job clusters in
this situation? These YARN nodes are almost dedicated to Flink job
I would use a regular ProcessFunction, not a WindowProcessFunction.
The final watermark depends on how the records were partitioned at the watermark
assigner (and on the assigner itself).
AFAIK, the distribution of files to source reader tasks is not
deterministic. Hence, the final watermark changes from run to ru
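A sketch of what I have in mind, comparing each record's timestamp with the
current watermark ("Event" is a placeholder for your record type):

    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Flags records whose timestamp is behind the current watermark, i.e.
    // records that a downstream window operator would treat as late.
    public class LateRecordDetector extends ProcessFunction<Event, Event> {

        @Override
        public void processElement(Event value, Context ctx, Collector<Event> out) {
            Long timestamp = ctx.timestamp(); // null if no event-time timestamps are assigned
            long watermark = ctx.timerService().currentWatermark();
            if (timestamp != null && timestamp < watermark) {
                // Late record: count it, log it, or send it to a side output.
            } else {
                out.collect(value);
            }
        }
    }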
Hi Qi Kang,
If you mean spreading all TaskManagers evenly across the YARN cluster,
it is a pity that Flink can do nothing about it.
Each per-job Flink cluster is an individual application on the YARN
cluster; they do not know about each other's existence.
Could you share the YARN version? If it is above hado
You said “You can use a custom ProcessFunction and compare the timestamp of
each record with the current watermark.”
Does the window process function have all the events, even the ones that are
dropped due to lateness?
From what I understand, the “iterable” argument contains the record
Hi Jungtaek,
Sorry for the slow reply and thanks for the feedback on the book! :-)
As I said, I don't think Flink's windowing API is well suited to deal with
the problem of manually terminated session windows, due to the lack of support
for splitting windows.
Given that Spark has similar support for timers, I
Hi,
The paths of the files to read are distributed across all reader / source
tasks, and each task reads the files in order of their modification
timestamps.
The watermark generator is not aware of any files and just looks at the
stream of records produced by the source tasks.
You need to choose the
Hi Vinod,
This sounds like a watermark issue to me.
The commonly used watermark strategies (like bounded out-of-order) only
advance when there is a new record.
Moreover, the current watermark is the minimum of the current watermarks of
all input partitions.
So, the watermark only moves forwa
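For reference, a bounded out-of-order strategy is typically set up like this
(the record type, timestamp getter, and the 10-second bound are placeholders):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Assigns timestamps and emits watermarks that trail the largest seen
    // timestamp by 10 seconds; the watermark only advances with new records.
    DataStream<Event> withWatermarks = stream.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(10)) {
            @Override
            public long extractTimestamp(Event element) {
                return element.getEventTime();
            }
        });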
Hi,
We got 3 Flink jobs running on a 10-node YARN cluster. The jobs were submitted
in a per-job flavor, with the same parallelism (10) and number of slots per TM (2).
We originally assumed that the TMs would automatically spread across the cluster,
but what came out was just the opposite: all 5 TMs
I'd like to provide a custom serializer for a POJO class. But that class
cannot be modified, so it's not possible to add a @TypeInfo annotation to
it. Are there any other ways to register one?
The data source is generated by an application that monitors some sort of
sessions, with the EVENT_TIME column being the session end time.
It is possible that the files will have out-of-order data, because of the
asynchronous nature of the application writing the files.
While the EVENT_TIME is monoton
Hi,
Can you share a few more details about the data source?
Are you continuously ingesting files from a folder?
You are correct that the parallelism should not affect the results, but
there are a few things that can affect them:
1) non-deterministic keys
2) out-of-order data with inappropriate wa
Hi Oytun,
I think the intent is to publish flink-queryable-state-client-java
without a Scala suffix since it is Scala-free. An artifact without
a Scala suffix has already been published [2].
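So a dependency on the suffix-free artifact would use coordinates of roughly
this form (the version is a placeholder; see [2] for the published releases):
org.apache.flink:flink-queryable-state-client-java:<version>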
See also [1].
Best,
tison.
[1] https://issues.apache.org/jira/browse/FLINK-12602
[2]
https://mvnrepository.com/artifact
The missing support for the Scala shell with Scala 2.12 was documented in
the 1.7 release notes [1].
@Oytun, the docker image should be updated in a bit. Sorry for the
inconvenience. Thanks for the pointer that
flink-queryable-state-client-java_2.11 hasn't been published. We'll upload
this in a b
Hi,
Brief update.
I tried to run the same code, but this time I used another Kafka cluster that I
have, where the version is 0.11.
The code runs fine without the timeout exception.
In conclusion, it seems like the problem occurs only when consuming events from
Kafka 0.9. Currently, I have no ide