xiaojin.wy created FLINK-15310:
--
Summary: A timestamp result get by a select sql and a csvsink sql
is different
Key: FLINK-15310
URL: https://issues.apache.org/jira/browse/FLINK-15310
Project: Flink
Jingsong Lee created FLINK-15311:
Summary: Lz4BlockCompressionFactory should use native compressor
instead of java unsafe
Key: FLINK-15311
URL: https://issues.apache.org/jira/browse/FLINK-15311
Projec
Couldn't we simply document which jars are contained in the pre-built
convenience jars that can be downloaded from the website? Then people who
need a custom version would know which jars they need to provide to Flink?
Cheers,
Till
On Tue, Dec 17, 2019 at 6:49 PM Bowen Li wrote:
> I'm not sure pr
I think we should add this checklist to the coding guidelines and continue
extending it there. Do you wanna update the coding guidelines accordingly,
Yingjie?
Cheers,
Till
On Wed, Dec 18, 2019 at 8:21 AM Yingjie Cao wrote:
> Hi Till & Biao,
>
> Thanks for the reply.
>
> I agree that supplying s
Hi Hequn,
thanks for starting this discussion. In general I think it is a good idea
to release often. Hence, I also believe it is time for another bug fix
release for 1.9.
The thing I'm wondering is whether we are stretching our resources a bit
too much if we now start with a 1.9.2 release vote b
I'd like to do that.
Best,
Yingjie
Till Rohrmann wrote on Wed, Dec 18, 2019 at 4:48 PM:
> I think we should add this checklist to the coding guidelines and continue
> extending it there. Do you wanna update the coding guidelines accordingly,
> Yingjie?
>
> Cheers,
> Till
>
> On Wed, Dec 18, 2019 at 8:21 AM Y
Thanks Yingjie for driving this.
It is very useful to have this checklist.
I think we can list all problematic third-party libraries,
including the Hadoop jar:
org.apache.hadoop.fs.FileSystem.StatisticsDataReferenceCleaner,
because there are too many libraries with this problem. And our Yarn mode
perJob
Zili Chen created FLINK-15312:
-
Summary: Remove PlanExposingEnvironment
Key: FLINK-15312
URL: https://issues.apache.org/jira/browse/FLINK-15312
Project: Flink
Issue Type: Sub-task
Compo
Jingsong Lee created FLINK-15313:
Summary: Can not insert decimal with precision into sink using
TypeInformation
Key: FLINK-15313
URL: https://issues.apache.org/jira/browse/FLINK-15313
Project: Flink
lining created FLINK-15314:
--
Summary: To refactor duplicated code in
TaskManagerDetailsHandler#createTaskManagerMetricsInfo
Key: FLINK-15314
URL: https://issues.apache.org/jira/browse/FLINK-15314
Project: Fl
lining created FLINK-15315:
--
Summary: Add test case for rest
Key: FLINK-15315
URL: https://issues.apache.org/jira/browse/FLINK-15315
Project: Flink
Issue Type: Improvement
Components: Runt
Hi ouywl,
>> Thread.currentThread().getContextClassLoader();
What does this statement mean in your program?
In addition, can you share your implementation of the customized file
system plugin and the related exception?
Best,
Vino
ouywl wrote on Wed, Dec 18, 2019 at 4:59 PM:
> Hi all,
> We have im
Gary Yao created FLINK-15316:
Summary: SQL Client end-to-end test (Old planner) failed on Travis
Key: FLINK-15316
URL: https://issues.apache.org/jira/browse/FLINK-15316
Project: Flink
Issue Type:
Gary Yao created FLINK-15317:
Summary: State TTL Heap backend end-to-end test fails on Travis
Key: FLINK-15317
URL: https://issues.apache.org/jira/browse/FLINK-15317
Project: Flink
Issue Type: Bu
Hi Jincheng,
Yes, your help would be very helpful. Thanks a lot!
Best, Hequn
On Wed, Dec 18, 2019 at 4:52 PM Till Rohrmann wrote:
> Hi Hequn,
>
> thanks for starting this discussion. In general I think it is a good idea
> to release often. Hence, I also believe it is time for another bug fix
>
Hi Till,
I agree with your concerns and thanks a lot for your feedback!
In fact, I also have those concerns. The reasons I started the DISCUSS are:
- Considering the low capacities, I want to start the discussion earlier so
that we can have more time to collect information and plan for the
re
I just want to revive this discussion.
Recently, I have been thinking about how to natively run a Flink per-job
cluster on Kubernetes.
The per-job mode on Kubernetes is very different from that on Yarn, and we
will have the same deployment requirements for the client and entry point.
1. Flink client not always
You could try the new plugin mechanism.
Create a new directory named "myhdfs" under $FLINK_HOME/plugins, and then
put your filesystem-related jars in it.
Different plugins are loaded by separate classloaders to avoid conflicts.
Best,
Yang
vino yang wrote on Wed, Dec 18, 2019 at 6:46 PM:
> Hi ouywl,
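The plugin layout Yang describes can be sketched as follows. This is a minimal illustration only: the plugin directory name "myhdfs" comes from the mail, while the temp directory standing in for a real Flink distribution and the jar file name are made-up placeholders.

```shell
# Self-contained sketch of the plugins layout described above; a temp dir
# stands in for $FLINK_HOME, and the jar name is hypothetical.
FLINK_HOME="$(mktemp -d)"
mkdir -p "$FLINK_HOME/plugins/myhdfs"
# your filesystem-related jars go here
touch "$FLINK_HOME/plugins/myhdfs/my-hdfs-filesystem.jar"
ls "$FLINK_HOME/plugins/myhdfs"
```

Flink then loads each subdirectory of plugins/ with its own classloader, which is what isolates conflicting dependencies.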
Siddhesh Ghadi created FLINK-15318:
--
Summary: RocksDBWriteBatchPerformanceTest.benchMark fails on
ppc64le
Key: FLINK-15318
URL: https://issues.apache.org/jira/browse/FLINK-15318
Project: Flink
Yun Tang created FLINK-15319:
Summary: flink-end-to-end-tests-common-kafka fails due to timeout
Key: FLINK-15319
URL: https://issues.apache.org/jira/browse/FLINK-15319
Project: Flink
Issue Type:
Hi Jark,
Please see the reply below:
> Regarding to option#3, my concern is that if we don't support streaming
> mode for bounded source,
> how could we create a testing source for streaming mode? Currently, all the
> testing source for streaming
> are bounded, so that the integration test will fin
Hi Peter,
We extended the SQL Client to support SQL job submission via a web UI,
based on Flink 1.9, and we also support submitting to Yarn in per-job mode.
In this case, the job graph is generated on the client side. I think this
discussion is mainly about improving the API, but in my case there is no jar
to upload
+1 This gives a better overview of the deployment targets and shows our
prospective users that they can rely on a broad set of vendors if help is
needed.
I guess Robert means whether the vendor offers a managed service (like AWS
Kinesis Analytics) or licenses software (like Ververica Platform). This
Hi everyone,
following the discussion started by Seth [1] I would like to discuss
dropping the vendor-specific repositories from Flink's parent pom.xml. As
building Flink against a vendor-specific Hadoop version is no longer needed
(as it simply needs to be added to the classpath) and documented,
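The classpath-based setup mentioned above can be sketched roughly as below. This is an assumption-laden illustration, not from the thread: the fallback path is hypothetical, and the `hadoop classpath` command requires a local Hadoop client installation.

```shell
# Sketch only: instead of a vendor-specific Flink build, expose an existing
# Hadoop installation to Flink via HADOOP_CLASSPATH.
if command -v hadoop >/dev/null 2>&1; then
  # "hadoop classpath" prints the jars of the locally installed Hadoop client
  export HADOOP_CLASSPATH="$(hadoop classpath)"
else
  # hypothetical fallback path; adjust to your installation
  export HADOOP_CLASSPATH="/opt/hadoop/share/hadoop/common/*"
fi
echo "HADOOP_CLASSPATH=$HADOOP_CLASSPATH"
```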
Hi,
As Yang Wang pointed out, you should use the new plugins mechanism.
If it doesn’t work, first make sure that you are shipping/distributing the
plugin jars correctly - that the plugins directory structure is correct on the
client machine. Next, make sure that the cluster has the same correct se
I was actually referring to "YARN", "Kubernetes", "Mesos".
If people know that AWS EMR is using YARN, they know which documentation to
look for in Flink.
On Wed, Dec 18, 2019 at 4:26 PM Konstantin Knauf
wrote:
> +1 This gives a better overview of the deployment targets and shows our
> prospecti
I guess we are talking about this profile [1] in the pom.xml?
+1 to remove.
I'm not sure if we need to rush this for the 1.10 release. The profile is
not doing us any harm at the moment.
[1] https://github.com/apache/flink/blob/master/pom.xml#L1035
On Wed, Dec 18, 2019 at 4:51 PM Till Rohrmann
Hi Tison,
Sorry for the late reply. I was busy with some urgent internal work last
week. I tried to read FLIP-73; from my limited understanding,
the scope of this FLIP is to unify the deployment process of each type of
cluster management system. As the implementation details
are unclear, I have
Hi Yang,
Thanks for your input. I can see that master-side job graph generation is a
common requirement for per-job mode.
I think FLIP-73 is mainly for session mode, and the proposal is a valid
improvement for the existing CLI and per-job mode.
Best Regards
Peter Huang
On Wed, Dec 18, 2019 at 3:
Hi folks,
As release-1.10 is under feature freeze (the stateless Python UDF is already
supported), it is time for us to plan the features of PyFlink for the next
release.
To make sure the features supported in PyFlink are the most demanded by
the community, we'd like to get more people involved
lining created FLINK-15320:
--
Summary: JobManager crash in the model of standalone
Key: FLINK-15320
URL: https://issues.apache.org/jira/browse/FLINK-15320
Project: Flink
Issue Type: Bug
Com
Also CC user-zh.
Best,
Jincheng
jincheng sun wrote on Thu, Dec 19, 2019 at 10:20 AM:
> Hi folks,
>
> As release-1.10 is under feature-freeze(The stateless Python UDF is
> already supported), it is time for us to plan the features of PyFlink for
> the next release.
>
> To make sure the features supported in
xiaojin.wy created FLINK-15321:
--
Summary: The result of sql(SELECT concat('a', cast(null as
varchar), 'c');) is NULL;
Key: FLINK-15321
URL: https://issues.apache.org/jira/browse/FLINK-15321
Project: Flin
Rui Li created FLINK-15322:
--
Summary: Parquet test fails with Hive versions prior to 1.2.0
Key: FLINK-15322
URL: https://issues.apache.org/jira/browse/FLINK-15322
Project: Flink
Issue Type: Test
Yu Li created FLINK-15323:
-
Summary: SQL Client end-to-end test (Old planner) failed on travis
Key: FLINK-15323
URL: https://issues.apache.org/jira/browse/FLINK-15323
Project: Flink
Issue Type: Bug