+1 to remove it.
Thanks,
Simon
On 07/29/2019 21:00, Till Rohrmann wrote:
+1 to remove it.
On Mon, Jul 29, 2019 at 1:27 PM Stephan Ewen wrote:
+1 to remove it
One should still be able to use MapR in the same way as any other vendor's
Hadoop distribution.
On Mon, Jul 29, 2019 at 12:22 PM Jingso
Hi All
I want to use a custom catalog by setting the name “ca1” and create a
database under this catalog. When I submit the
SQL, it raises an error like:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
SQL validation failed. From line 1, column 98 to li
stered catalog, you could call
tableEnv.useCatalog() and .useDatabase().
As an alternative, you could fully qualify your table name with a
"catalog.db.table" syntax without switching current catalog/database.
Please try those and let me know if you find new problems.
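Xuefu's two suggestions might look like this in the Table API (a minimal sketch; the catalog name "ca1" comes from the thread, while the database name "db1" and the table name "my_table" are hypothetical placeholders, and a registered catalog is assumed):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class CatalogSwitchSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Assumes a catalog "ca1" containing a database "db1" is already registered.

        // Option 1: switch the current catalog/database, then use short names.
        tableEnv.useCatalog("ca1");
        tableEnv.useDatabase("db1");
        Table t1 = tableEnv.sqlQuery("SELECT * FROM my_table");

        // Option 2: fully qualify the table as catalog.db.table without switching.
        Table t2 = tableEnv.sqlQuery("SELECT * FROM ca1.db1.my_table");
    }
}
```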
Thanks,
Xuefu
On Mon,
in the default catalog.
To create table in your custom catalog, you could use
tableEnv.sqlUpdate("create table ").
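A sketch of that suggestion, assuming a `tableEnv` already switched to the custom catalog (the column names and connector properties are hypothetical illustrations, not from the thread; `sqlUpdate` was the 1.9-era API):

```java
// Sketch only: create a table inside the custom catalog "ca1".
tableEnv.useCatalog("ca1");
tableEnv.useDatabase("db1");   // hypothetical database name
tableEnv.sqlUpdate(
        "CREATE TABLE my_table (" +
        "  id BIGINT," +
        "  name STRING" +
        ") WITH (" +
        "  'connector.type' = 'filesystem'," +        // hypothetical connector
        "  'connector.path' = '/tmp/my_table'," +
        "  'format.type' = 'csv'" +
        ")");
```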
Thanks,
Xuefu
On Mon, Aug 12, 2019 at 6:17 PM Simon Su wrote:
> Hi Xuefu
>
> Thanks for your reply.
>
> Actually I have tried it as you advised. I have
OK, Thanks Jark
Thanks,
Simon
On 08/13/2019 14:05, Jark Wu wrote:
Hi Simon,
This is a temporary workaround for 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
Hi Jark
Thanks for your reply.
It’s weird that
I would prefer not to translate "Data Source" and "Data Sink"; explaining them in Chinese is enough.
Thanks,
Simon
On 08/13/2019 18:07, wrote:
How about translating "data sink" as “数据漕”?
漕 is pronounced cáo. The character's basic meaning is transporting grain by waterway, as in 漕运 (water transport of grain) and 漕粮 (grain shipped by water). ==>
https://baike.baidu.com/item/%E6%BC%95?forcehttps=1%3Ffr%3Dkg_hanyu
- Original Message -
From: Kurt Young
To: dev , user-zh
Subject:
Hi all
I’m trying to build the flink 1.9 release branch, and it raises an error like:
Could not resolve dependencies for project
org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find artifact
org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT in maven-ali
(http://maven.al
Hi all
I want to test submitting a job from my local IDE, and I have deployed a Flink
cluster in my VM.
Here is my code, taken from the Flink 1.9 documentation with some of my parameters added.
public static void main(String[] args) throws Exception {
ExecutionEnvironment env = ExecutionEnvironment
.createRemoteE
on a remote cluster from the IDE you need to first
build the jar containing your user code. This jar needs to be passed to
createRemoteEnvironment() so that the Flink client knows which jar to upload.
Hence, please make sure that /tmp/myudf.jar contains your user code.
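Till's point might look like this in code (a sketch only: the host and port are placeholders, not values from the thread; `/tmp/myudf.jar` is the path mentioned above and must actually contain the user code):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        // The jar passed here is uploaded by the Flink client to the cluster,
        // so any UDF classes the job uses must be packaged inside it.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host",   // placeholder: your JobManager address
                8081,                // placeholder: your cluster's port
                "/tmp/myudf.jar");   // must contain the user code

        DataSet<String> data = env.fromElements("a", "b", "c");
        data.print();
    }
}
```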
Cheers,
Till
On Thu, O
Hi All
Does Flink currently support setting checkpoint properties when using Flink SQL?
For example, the state backend choice, the checkpoint interval, and so on ...
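No answer appears in this excerpt, but in general such settings are configured on the underlying execution environment (or in flink-conf.yaml) rather than in the SQL itself. A hedged sketch against the 1.9-era API (the checkpoint interval and state backend path are placeholder values):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class SqlCheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointing is enabled on the execution environment and then
        // applies to any SQL/Table jobs built on top of it.
        env.enableCheckpointing(60_000);  // placeholder: interval in ms
        env.setStateBackend(
                new FsStateBackend("file:///tmp/checkpoints"));  // placeholder path

        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // ... register tables and run SQL as usual ...
    }
}
```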
Thanks,
Simon
Simon Su created FLINK-13492:
Summary: BoundedOutOfOrderTimestamps cause Watermark's timestamp
leak
Key: FLINK-13492
URL: https://issues.apache.org/jira/browse/FLINK-13492
Project: Flink
Simon Su created FLINK-28820:
Summary: Pulsar Connector PulsarSink performance issue when
delivery guarantee is not NONE
Key: FLINK-28820
URL: https://issues.apache.org/jira/browse/FLINK-28820
Project