Hi,
Does Flink support the Ceph filesystem? I have looked through the official documentation
and didn't find anything about it.
Best
I have met this error when building with a wrong JDK version. Maybe you should
switch to a different JDK version, perhaps JDK 1.8?
Original Message
Sender: syedms110400...@vu.edu.pk
Recipient: useru...@flink.apache.org
Date: Monday, May 6, 2019 09:20
Subject: Unable to build flink from source
Hi I am try
Hi,
I found that Flink now supports an ORC file writer in the module
flink-connectors/flink-orc. Could anyone show me a basic usage of this
module? I didn't find anything on the official site or elsewhere on the internet.
Many thanks.
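For reference, a rough sketch of how an ORC writer can be wired up in newer Flink releases, where flink-orc ships an OrcBulkWriterFactory that plugs into the StreamingFileSink. The Person POJO, the schema string, and the output path below are made-up placeholders, and older releases (such as 1.8) may not have these classes:

import org.apache.flink.core.fs.Path;
import org.apache.flink.orc.vector.Vectorizer;
import org.apache.flink.orc.writer.OrcBulkWriterFactory;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

import java.io.IOException;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class OrcWriteSketch {

    /** Hypothetical POJO to be written as ORC rows. */
    public static class Person {
        public String name;
        public int age;
    }

    /** Fills one slot of the ORC row batch per incoming Person element. */
    public static class PersonVectorizer extends Vectorizer<Person> implements Serializable {
        public PersonVectorizer(String schema) {
            super(schema);
        }

        @Override
        public void vectorize(Person element, VectorizedRowBatch batch) throws IOException {
            int row = batch.size++;
            ((BytesColumnVector) batch.cols[0])
                    .setVal(row, element.name.getBytes(StandardCharsets.UTF_8));
            ((LongColumnVector) batch.cols[1]).vector[row] = element.age;
        }
    }

    /** Builds a bulk-format file sink that writes Person records as ORC files. */
    public static StreamingFileSink<Person> buildSink() {
        OrcBulkWriterFactory<Person> factory =
                new OrcBulkWriterFactory<>(new PersonVectorizer("struct<name:string,age:int>"));
        return StreamingFileSink
                .forBulkFormat(new Path("hdfs:///tmp/orc-out"), factory)
                .build();
    }
}

The sink returned by buildSink() would then be attached with personStream.addSink(...).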
Hi,
Could anyone show me a simple way to write a DataSet<String> into HDFS?
I looked through the official documentation and didn't find anything. Am I missing something?
Many thanks.
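For reference, a minimal sketch of one simple way to do this with the DataSet API; the HDFS path and parallelism below are placeholders:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.core.fs.FileSystem.WriteMode;

public class WriteToHdfs {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Any DataSet<String> works; here just a few literal elements.
        DataSet<String> lines = env.fromElements("a", "b", "c");

        // writeAsText accepts any Flink-supported URI, including hdfs://.
        lines.writeAsText("hdfs://namenode:8020/tmp/flink-output", WriteMode.OVERWRITE)
             .setParallelism(1); // one output file instead of one file per task

        env.execute("write DataSet<String> to HDFS");
    }
}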
putFormat as in [1].
The provided link uses a pretty old version of Flink, but it should not be a big
problem to update the Maven dependencies and the code to a newer version.
Best,
Flavio
[1]https://github.com/okkam-it/flink-mongodb-test
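For illustration, a rough sketch of how such a Hadoop InputFormat can be wired into Flink. The MongoInputFormat and BSONWritable classes come from the mongo-hadoop project as used in [1]; the connection URI and the exact generic types are assumptions that should be checked against that repository:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapred.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.mapred.JobConf;

import com.mongodb.hadoop.io.BSONWritable;
import com.mongodb.hadoop.mapred.MongoInputFormat;

public class MongoReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Point the mongo-hadoop InputFormat at the source database/collection.
        JobConf conf = new JobConf();
        conf.set("mongo.input.uri", "mongodb://localhost:27017/mydb.mycollection");

        // Wrap the Hadoop InputFormat with Flink's Hadoop compatibility wrapper.
        HadoopInputFormat<BSONWritable, BSONWritable> mongoIf =
                new HadoopInputFormat<>(new MongoInputFormat(),
                        BSONWritable.class, BSONWritable.class, conf);

        DataSet<Tuple2<BSONWritable, BSONWritable>> docs = env.createInput(mongoIf);
        docs.first(10).print();
    }
}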
On Mon, Apr 29, 2019 at 6:15 AM Hai h...@magicsoho.com wrote:
Hi,
Can anyone give me a clue about how to read MongoDB data as a batch/streaming
data source in Flink? I can't find a MongoDB connector in the recent release.
Many thanks
Hi,
Recently I met an issue related to the class loader of the user application
and Flink's own class loader.
I want to solve this issue [1] by finding out the right class loader for the user jar.
Could anyone show me the class loader startup sequence of a Flink production deployment?
I would appreciate it.
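For reference, a minimal sketch of how the classloader that loaded the user jar can be obtained from inside user code via the runtime context; the class and the lookup it performs are only illustrative:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class LoadWithUserCodeClassLoader extends RichMapFunction<String, String> {

    private transient ClassLoader userClassLoader;

    @Override
    public void open(Configuration parameters) throws Exception {
        // The classloader that loaded the user jar on the TaskManager,
        // as opposed to the classloader that loaded Flink's own classes.
        userClassLoader = getRuntimeContext().getUserCodeClassLoader();
    }

    @Override
    public String map(String className) throws Exception {
        // Resolve classes from the user jar explicitly against that classloader.
        Class<?> clazz = Class.forName(className, true, userClassLoader);
        return clazz.getName();
    }
}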
Hi Fabian,
OK, I am glad to do that.
Regards
Original Message
Sender: Fabian Hueske
Recipient: hai
Cc: user; Yun Tang
Date: Monday, Apr 15, 2019 17:16
Subject: Re: Hbase Connector failed when deployed to yarn
Hi,
The Jira issue is still unassigned.
Would you be up to work on a fix
Hello,
Is there an example or best-practice code for a Flink source written in Scala?
I found one example in the official code, HBaseWriteStreamExample:
DataStream<String> dataStream = env.addSource(new SourceFunction<String>() {
    private static final long serialVersionUID = 1L;
    private volatile boolean isRunning = true;
Subject: Re: Hbase Connector failed when deployed to yarn
Hi
I believe this is the same problem as the one reported in
https://issues.apache.org/jira/browse/FLINK-12163; the current workaround
is to put the flink-hadoop-compatibility jar under FLINK_HOME/lib.
Best
Yun Tang
From: hai h
And my pom.xml dependencies are:
<dependencies>
    <!-- Scala -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-compiler</artifactId>
Hello,
I am new to Flink, and I copied the official HBase connector example from the source file
flink/flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteExample.java
and ran it in a yarn-cluster with the command:
bin/flink run -m yarn-cluster -yn 2 -c {class-path-pr
Hi Tony,
you could consider implementing a custom reporter that converts Flink's
metrics into the structure that suits your needs.
This is just my personal practice; I hope it helps you.
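For illustration, a minimal skeleton of such a reporter; the class name is made up and the actual emitting of the converted values is left as a placeholder:

import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;
import org.apache.flink.metrics.reporter.Scheduled;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MyCustomReporter implements MetricReporter, Scheduled {

    // All currently registered metrics, keyed by their full identifier.
    private final Map<String, Metric> metrics = new ConcurrentHashMap<>();

    @Override
    public void open(MetricConfig config) {
        // read reporter-specific settings from config here
    }

    @Override
    public void close() {
        metrics.clear();
    }

    @Override
    public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) {
        metrics.put(group.getMetricIdentifier(metricName), metric);
    }

    @Override
    public void notifyOfRemovedMetric(Metric metric, String metricName, MetricGroup group) {
        metrics.remove(group.getMetricIdentifier(metricName));
    }

    @Override
    public void report() {
        // Convert each metric into your own structure and push it out periodically.
        for (Map.Entry<String, Metric> e : metrics.entrySet()) {
            Metric m = e.getValue();
            if (m instanceof Counter) {
                // send(e.getKey(), ((Counter) m).getCount());
            } else if (m instanceof Gauge) {
                // send(e.getKey(), ((Gauge<?>) m).getValue());
            }
        }
    }
}

The reporter then has to be registered in flink-conf.yaml via metrics.reporters and metrics.reporter.<name>.class.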
Cheers,
Hai Zhou
> On 26 September 2017, at 17:49, Tony Wei wrote:
>
> Hi,
>
>
+1
> On 19 September 2017, at 17:56, Aljoscha Krettek wrote:
>
> Hi,
>
> Talking to some people I get the impression that Scala 2.10 is quite outdated
> by now. I would like to drop support for Scala 2.10 and my main motivation is
> that this would allow us to drop our custom Flakka build of Akka that we use
I would like to ask: what is a “PB object”?
Thanks.
Hai Zhou
> On 15 August 2017, at 09:53, mingleizhang <18717838...@163.com> wrote:
>
> Thanks, Nico. I tried Flink 1.3.2 and it works now. Thank you very much! I think
> there must be something else causing this error to happen. Not only t