oint of time" events to
"period of time" events and I don't know if the nature of data have
changed. Also, the partial emission will lead to heterogeneous results.
BTW, the "Emission of dynamic tables" section seem to be a little
incompatible with the whole document..
ink-docs-release-1.4/dev/table/sql.html#joins>.
Note that, for now, the UDTF left outer join cannot support
arbitrary conditions.
Hope that helps.
Best,
Xingcan
On 15/01/2018 6:11 PM, XiangWei Huang wrote:
Hi all,
Is it possible to join records read from a Kafka stream with o
assigners or the partitioning
mechanisms used.
Best,
Xingcan
> On 28 Feb 2018, at 5:46 AM, Thomas Weise wrote:
>
> Hi Xingcan,
>
> thanks, this is a good way of testing an individual operator. I had written
> my own mock code to intercept source context and collect the results
or but a different name) that can be used to replace the existing
split/select.
3) Keep split/select but change the behavior/semantic to be "correct".
Note that this is just a vote for gathering information, so feel free to
participate and share your opinions.
The voting time will end
's
no doubt that its concept has drifted.
As split/select is quite an old API, I cc'ed this to more members. It
would be great if you could share your opinions on it.
Thanks,
Xingcan
[1]
https://lists.apache.org/thread.html/f94ea5c97f96c705527dcc809b0e2b69e87a4c5d400cb7
Hi Aljoscha,
Thanks for your response.
With all this preliminary information collected, I’ll start a formal process.
Thanks, everybody, for your attention.
Best,
Xingcan
> On Jul 8, 2019, at 10:17 AM, Aljoscha Krettek wrote:
>
> I think this would benefit from a FLIP, that neatly su
Congrats Rong!
Best,
Xingcan
> On Jul 11, 2019, at 1:08 PM, Shuyi Chen wrote:
>
> Congratulations, Rong!
>
> On Thu, Jul 11, 2019 at 8:26 AM Yu Li wrote:
> Congratulations Rong!
>
> Best Regards,
> Yu
>
>
.
See [1][2] for an example.
Best,
Xingcan
[1]
https://github.com/apache/flink/blob/84eec21108f2c05fa872c9a3735457d73f75dc51/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/table/TableSinkITCase.scala#L647
Congrats Becket!
Best,
Xingcan
On Thu, Jul 18, 2019, 07:17 Dian Fu wrote:
> Congrats Becket!
>
> > On Jul 18, 2019, at 6:42 PM, Danny Chan wrote:
> >
> >> Congratulations!
> >
> > Best,
> > Danny Chan
> > On Jul 18, 2019, at 6:29 PM +0800, Haibo Sun wrote:
> >
know, it’s an essential requirement for some sophisticated joining algorithms.
As of now, Flink non-equi joins can still only be executed single-threaded.
If we'd like to improve this, we should first take some
measures to support the multicast pattern.
Best,
Xingcan
[1]
```
row.setField(i, fieldConverters[i].convert(record.get(i)));
}
return row;
};
```
Not sure if any of you hit this before. If it's confirmed to be a bug, I'll
file a ticket and try to fix it.
Best,
Xingcan
After rechecking it, I realized that some of my changes broke the expected
schema passed to GenericDatumReader#getResolver. The logic in the Flink
codebase is okay, and we should only read a portion of the Avro record.
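To make this concrete, here is a minimal sketch of the reader/writer schema
resolution I'm referring to (the schemas below are made up for illustration):
the reader schema only declares a subset of the writer's fields, so only that
portion of each record gets materialized.
```
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

// Writer schema: the full record layout the data was serialized with.
Schema writerSchema = SchemaBuilder.record("Event").fields()
        .requiredString("id")
        .requiredLong("ts")
        .requiredDouble("value")
        .endRecord();

// Reader schema: a projection containing only the fields we want to read.
Schema readerSchema = SchemaBuilder.record("Event").fields()
        .requiredString("id")
        .requiredLong("ts")
        .endRecord();

// Avro resolves the writer schema against the reader schema internally (this is
// where getResolver comes in), so decoding only materializes the projected fields.
GenericDatumReader<GenericRecord> reader =
        new GenericDatumReader<>(writerSchema, readerSchema);
```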
Thanks, Xingcan
On Sun, Aug 6, 2023 at 2:31 PM liu ron wrote:
> Hi, Xing
Hi,
Thanks for bringing this up, Peter. I'm +1 for reverting the change.
Best,
Xingcan
On Thu, Dec 7, 2023 at 10:40 AM Martijn Visser
wrote:
> Hi all,
>
> Agree with what has been said already. I've marked
> https://issues.apache.org/jira/browse/FLINK-33523 as a block
Congratulations, Becket!
Best,
Xingcan
> On Oct 28, 2019, at 1:23 PM, Xuefu Z wrote:
>
> Congratulations, Becket!
>
> On Mon, Oct 28, 2019 at 10:08 AM Zhu Zhu wrote:
>
>> Congratulations Becket!
>>
>> Thanks,
>> Zhu Zhu
>>
>> Peter H
Thanks for driving this, Dawid.
I’m +1 on it.
One minor suggestion: I think it’s better to override the `equals()` and
`hashCode()` methods for `KeyConstraint`.
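Just to illustrate the suggestion, a rough sketch assuming `KeyConstraint` keeps a
constraint name plus its column names (the field names are hypothetical; `Objects`
is `java.util.Objects`):
```
@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (!(o instanceof KeyConstraint)) {
        return false;
    }
    KeyConstraint that = (KeyConstraint) o;
    // Hypothetical fields: the constraint name and the constrained column names.
    return Objects.equals(name, that.name) && Objects.equals(columns, that.columns);
}

@Override
public int hashCode() {
    return Objects.hash(name, columns);
}
```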
Thanks,
Xingcan
> On Nov 23, 2019, at 2:40 AM, Jingsong Li wrote:
>
> +1 thanks dawid for driving this.
>
> Best
Thanks, everyone!
It’s an honor which inspires me to devote more to our community.
Regards,
Xingcan
> On May 10, 2018, at 2:06 AM, Peter Huang wrote:
>
> Congratulations Nico and Xingcan!
>
> On Wed, May 9, 2018 at 11:04 AM, Thomas Weise wrote:
>
>> Congrats!
>>
Hi Garvit,
you can use the `keyBy()` method[1] to partition a stream like the field
grouping in Storm.
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/#datastream-transformations
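For example, a minimal sketch with a made-up tuple stream; all records with the
same key are routed to the same parallel task, just like a field grouping:
```
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<Tuple2<String, Integer>> events = env.fromElements(
        Tuple2.of("user-a", 1), Tuple2.of("user-b", 2), Tuple2.of("user-a", 3));

events
    .keyBy(0)    // all records with the same first field go to the same parallel task
    .sum(1)      // per-key aggregation, analogous to field grouping plus bolt state
    .print();

env.execute("keyBy example");
```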
> On May 17, 2018, at 4:04 PM, Garvit Sharma wrote:
>
found here
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
Best,
Xingcan
> On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer
> wrote:
>
>
in the
long term. Thus, as Timo suggested, keeping the Scala code in "flink-table-core"
would be a compromise solution.
3) If the community makes the final decision, maybe any new features should be
added in Java (regardless of the module), in order to prevent the Scala code
from growing.
optimization on large datasets or dynamic streams.
You could start with the Calcite query optimizer and then try to write
your own rules.
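As a starting point, a custom optimizer rule is usually just a small class like the
sketch below (the rule name and the matched operator are only illustrative); Flink's
planner rules follow the same pattern:
```
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.logical.LogicalFilter;

// Illustrative skeleton of a Calcite rule that matches LogicalFilter nodes.
public class MyFilterRule extends RelOptRule {

    public static final MyFilterRule INSTANCE = new MyFilterRule();

    private MyFilterRule() {
        super(operand(LogicalFilter.class, any()), "MyFilterRule");
    }

    @Override
    public void onMatch(RelOptRuleCall call) {
        LogicalFilter filter = call.rel(0);
        // Inspect `filter` and, if a better plan exists, register it with:
        // call.transformTo(newRelNode);
    }
}
```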
Best,
Xingcan
> On Jul 14, 2018, at 11:55 AM, vino yang wrote:
>
> Hi Albert,
>
> First I guess the query optimizer you mentioned is ab
ling that maybe we should
merge the retract message and upsert message into a unified “update message”.
(Append Stream vs. Update Stream).
Best,
Xingcan
> On Aug 20, 2018, at 7:51 PM, Piotr Nowojski wrote:
>
> Hi,
>
> Thanks for bringing up this issue here.
>
> I’m not s
Congratulations, Gary!
Xingcan
> On Sep 7, 2018, at 11:20 PM, Hequn Cheng wrote:
>
> Congratulations Gary!
>
> Hequn
>
> On Fri, Sep 7, 2018 at 11:16 PM Matthias J. Sax wrote:
>
>> Congrats!
>>
>> On 09/07/2018 08:15 AM, Timo Walther wrote
this problem in a larger view, i.e., adding a
`PersistentService` rather than a `TablePersistentService` (as described in the
"Flink Services" section).
Thanks,
Xingcan
[1] https://issues.apache.org/jira/browse/FLINK-1730
> On Nov 20, 2018, at 8:56 AM, Becket Qin wrote:
>
>
. IMO, compared to data storage, the cache
could be volatile, which means it only works for (possibly) accelerating access and
doesn’t need to absolutely guarantee the existence of DataSets/Tables.
What do you think?
Best,
Xingcan
> On Nov 21, 2018, at 5:44 AM, Ruidong Li wrote:
>
>
enjoy the more interactive Table API, given a
general and flexible enough service mechanism.
Best,
Xingcan
> On Nov 22, 2018, at 10:16 AM, Xiaowei Jiang wrote:
>
> Relying on a callback for the temp table for clean up is not very reliable.
> There is no guarantee that it will
mechanisms for datasets with an identical schema but different
contents here). After all, it’s the dataset rather than the dynamic table that
needs to be cached, right?
Best,
Xingcan
> On Nov 30, 2018, at 10:57 AM, Becket Qin wrote:
>
> Hi Piotrek and Jark,
>
> Thanks for t
.,
the docs must be synced when a new version is to be released).
Best,
Xingcan
> On Feb 11, 2019, at 6:23 AM, Jark Wu wrote:
>
> Hi Shaoxuan,
>
> Thank you for your feedback.
>
> If the author is not familiar with Chinese, he/she should create a
> translation JIRA be
ewing process and help translate the few
lines in sync.
Best,
Xingcan
> On Feb 12, 2019, at 7:04 AM, Jark Wu wrote:
>
> Hi @Sijie,
>
> Thank you for the valuable information. I will explore Docusaurus and
> feedback here.
>
> Best,
> Jark
>
> On Tue, 12 F
they really
need to be deprecated, we should at least mark the corresponding documentation
for that : )
What do you think?
Best,
Xingcan
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/side_output.html
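For reference, the side-output mechanism documented in [1] looks roughly like this
minimal sketch (the tag name and element types are made up):
```
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Integer> input = env.fromElements(1, 2, 3, 4, 5);

// Route odd numbers to a side output and keep even numbers in the main stream.
final OutputTag<Integer> oddTag = new OutputTag<Integer>("odd") {};

SingleOutputStreamOperator<Integer> evens = input
        .process(new ProcessFunction<Integer, Integer>() {
            @Override
            public void processElement(Integer value, Context ctx, Collector<Integer> out) {
                if (value % 2 == 0) {
                    out.collect(value);          // main output
                } else {
                    ctx.output(oddTag, value);   // side output
                }
            }
        });

DataStream<Integer> odds = evens.getSideOutput(oddTag);
```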
DataStream API;
+ A safer migration phase for users;
- Users are forced to update their code;
- The contract of the API is still correct but is no longer valuable;
Feel free to add to these reasons, and after collecting all the
ideas, I'll start a voting thread.
B
Congratulations Jincheng and thanks for all you’ve done!
Cheers,
Xingcan
> On Jun 25, 2019, at 1:59 AM, Tzu-Li (Gordon) Tai wrote:
>
> Congratulations Jincheng, great to have you on board :)
>
> Cheers,
> Gordon
>
> On Tue, Jun 25, 2019, 11:31 AM Terry Wang wrot
and maybe we can use an alias to support setting time
attributes (just a hypothesis, not sure if it's feasible).
@Haohui I think the given query is valid if we add an aggregate
function to (PROCTIME()
- ROWTIME()) / 1000, and it should be executed efficiently.
Best,
Xingcan
On Wed, Feb 15,
3) The monotonic hint will be useful in the query optimization process.
What do you think?
Best,
Xingcan
[1]
SELECT t1.amount, t2.rate
FROM
table1 AS t1,
table2 AS t2
WHERE
t1.currency = t2.currency AND
t2.rowtime = (
SELECT MAX(t22.rowtime)
FROM tab
high-level things (e.g.
algorithms, performance) on top of it. What if, one day, we could change both the
edges' values and the vertices' values during an iteration? :)
Best,
Xingcan
On Sat, Feb 25, 2017 at 2:43 AM, Vasiliki Kalavri wrote:
> Hi Greg,
>
> On 24 February 2017 at 18:09, G
as to
dynamically designate it in a SQL before)
Best,
Xingcan
On Wed, Mar 1, 2017 at 5:35 AM, Fabian Hueske wrote:
> Hi Jincheng Sun,
>
> registering watermark functions for different attributes to allow each of
> them to be used in a window is an interesting idea.
>
> However, watermark
Hi Pawan,
in Flink, most of the methods for DataSet (including print()) will just add
operators to the plan but not really run it. If the DASInputFormat has no
error, you can run the plan by calling environment.execute().
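For example, a minimal sketch (assuming, for illustration, that DASInputFormat
produces String records):
```
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// Hypothetical: assumes your DASInputFormat produces String records.
DataSet<String> input = env.createInput(new DASInputFormat());

// This only adds a sink to the plan; nothing runs yet.
input.writeAsText("/tmp/das-output");

// The assembled plan is executed here.
env.execute("DASInputFormat test");
```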
Best,
Xingcan
On Wed, Mar 1, 2017 at 12:17 PM, Pawan Manishka Gunarathna
Hi Pawan,
@Fabian was right; I thought it was the stream environment. Sorry for that.
What do you mean by `read the available records of my datasource`? How do
you implement the nextRecord() method in DASInputFormat?
Best,
Xingcan
On Wed, Mar 1, 2017 at 4:45 PM, Fabian Hueske wrote:
>
when to release them
(maybe Flink will also do auto-release detection when a dataset is
not accessed any more).
Graph computing on streams is really attractive, and maybe we should find
some use cases first. I am not sure if this paper [1] (and the
corresponding project [2]) will help.
Bes
" and "Order(3L, "diaper", 3)"
are out of order. Is that normal?
BTW, when I run `orderA.keyBy(2).map{x => x.amount + 1}.print()`, the order
is always preserved.
Thanks,
Xingcan
time.SqlFunctions.internalToTimestamp(0L);
}
if (false) {
out.setField(2, null);
}
else {
out.setField(2, result$16);
}
...
Could you please help explain what the 0L timestamp means?
Best,
Xingcan
On Tue, Apr 11, 2017 at 8:40 PM, Radu Tudoran
wrote:
> Hi Xingcan,
>
> If
Hi,
@Radu @Stefano, sorry that I misunderstood it before. We considered the
problem from different viewpoints. I agree that (ingestion) timestamp
injection could be a good solution for this problem in some scenarios.
Thanks.
@Fabian, thanks for your explanation. That makes sense.
Best,
Xingcan
processes execute independently and that's why
the first record 1 triggered window accumulation in your example.
Hope this helps,
Xingcan
On Thu, Apr 13, 2017 at 4:43 PM, madhairsilence
wrote:
> I have a datastream
> 1,2,3,4,5,6,7
>
> I applied a sliding countWindow as
&g
+1 (binding)
Thanks,
Xingcan
On Thu, Sep 24, 2020 at 4:52 AM Jark Wu wrote:
> +1 (binding)
>
> Best,
> Jark
>
> On Thu, 24 Sep 2020 at 16:22, Jingsong Li wrote:
>
> > +1 (binding)
> >
> > Best,
> > Jingsong
> >
> > On Thu, Sep 24,
e the name (works for nested schemas).
What do you think?
Best,
Xingcan
Hi Jark,
Yes. I believe field names of the table would be enough to describe the
conversion operator. I'll try to improve this.
Best,
Xingcan
On Sun, Mar 5, 2023 at 9:18 PM Jark Wu wrote:
> Hi Xingcan,
>
> I think `physicalDataType.toString()` is indeed verbose in this case.
Oh, I just realized that FLIP-195 has already solved this. We'll upgrade
our Flink version to 1.15+. Thanks!
On Mon, Mar 6, 2023 at 10:08 AM Xingcan Cui wrote:
> Hi Jark,
>
> Yes. I believe field names of the table would be enough to describe the
> conversion operator. I'
values in
it could potentially be very large. As DecimalType is backed by Java
BigDecimal, I wonder if we should extend the precision range.
Best,
Xingcan
ER in Oracle[1]), but
in Flink, we must explicitly specify the precision and scale.
Cc Jark, do you think this is a problem for flink-cdc-connectors?
Best,
Xingcan
[1]
https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT313
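To make the contrast concrete, a small sketch (38 is currently the maximum DECIMAL
precision in Flink, so an unconstrained NUMBER can only be mapped on a best-effort
basis):
```
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// Flink's DECIMAL always carries an explicit precision and scale (precision <= 38);
// values wider than the declared precision won't fit.
DataType bestEffortNumber = DataTypes.DECIMAL(38, 10);
```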
On Mon, Aug 30, 2021 at 4:12 AM Timo Walther
declared without any precision
constraints). A user-defined numeric type converter would solve the problem!
Thanks,
Xingcan
On Mon, Aug 30, 2021 at 11:46 PM Jingsong Li wrote:
> Hi Xingcan,
>
> As a workaround, can we convert large decimal to varchar?
>
> If Flink SQL wants t
e health checking logic is coupled with the state fields, I'm
curious if they are stable now.
3. Can we apply the same logic to "FlinkSessionJob"?
Thanks,
Xingcan
Hi Gyula,
Thanks for the explanation!
The distinction between Flink jobs and FlinkDeployments makes sense! I'll
try to make some changes to Argo CD and hopefully get some reviews from
you or other Flink-K8s-op contributors then.
Best,
Xingcan
On Wed, Nov 16, 2022 at 10:40 AM Gyula
Congratulations, Piotr!
Best, Xingcan
On Wed, Jul 8, 2020, 21:53 Yang Wang wrote:
> Congratulations Piotr!
>
>
> Best,
> Yang
>
> On Wed, Jul 8, 2020 at 10:36 PM, Dan Zou wrote:
>
> > Congratulations!
> >
> > Best,
> > Dan Zou
> >
> > > On Jul 8, 2020, at 5:25 PM, godfrey he wrote:
> > >
> > > Congratulations
> >
> >
>
) thoughts about
unifying the batch/stream query processing.
I know there are lots of developers who are interested in this subject.
Please share your ideas and all suggestions are welcome.
Thanks,
Xingcan
parallelism. Will it only be executed in a single thread?
Thanks,
Xingcan
On Thu, May 18, 2017 at 11:40 AM, Hongyuhong wrote:
> Hi Xingcan,
> Thanks for the proposal.
> I have glanced at the design document, but not in detail. The semantics of
> Record-to-window Join is already in p
tween the old and new watermarks. Shall they be a one-to-one mapping, or could
the new watermarks skip some timestamps? And (2) who is in charge of emitting
the blocked watermarks, the operator or the process function?
I'd like to hear from you.
Best,
Xingcan
On Wed, Jul 26, 2017 at 10:40 A
tent, the randomness property means that
it should never be used in time-sensitive applications. I always believe
that all the information used for query evaluation should be acquired from
the data itself.
Best,
Xingcan
On Thu, Jul 27, 2017 at 7:24 PM, Fabian Hueske wrote:
> Hi Shaoxu
Hi Fabian,
I have a similar question to Jark's. Theoretically, the row times of the two
streams could be quite different, e.g., one from today and the other one from
yesterday. How can we align them?
Best,
Xingcan
On Mon, Jul 31, 2017 at 9:04 PM, Fabian Hueske wrote:
> Hi Jark,
>
>
it is unnecessary to buffer so much
data.
That raises the question: what if the timestamps of the two streams are
essentially “irreconcilable”?
Best,
Xingcan
On Mon, Jul 31, 2017 at 10:42 PM, Shaoxuan Wang wrote:
> Xingcan,
> Watermark is the “estimate of completion”. User defines the
Congratulations!
On Wed, Nov 1, 2017 at 9:37 PM, Kurt Young wrote:
> Congrats and welcome on board!
>
> Best,
> Kurt
>
> On Wed, Nov 1, 2017 at 8:15 PM, Hai Zhou wrote:
>
>> Congratulations!
>>
>> On 1. Nov 2017, at 10:13, Shaoxuan Wang wrote:
>>
>> Congratulations!
>>
>> On Wed, Nov 1, 2017 a
nt test cases since everything went fine when I changed their names
(e.g., *ltime* => *lt* and *rtime* => *rt*) in one test case. Some global
shared variables may be the cause.
I wonder if anyone could give me some more specific clues about the
problem. IMO, even with identical field names, the test cases should
not interfere with each other.
Thanks,
Xingcan
ll need to pay
attention to it.
Thanks,
Xingcan
On Fri, Dec 8, 2017 at 7:25 PM, Xingcan Cui wrote:
> Hi all,
>
> Recently I'm trying to add some tests to
> *org.apache.flink.table.api.stream.table.JoinTest*, but encountered a
> strange problem. A test case could successfull
Hi Thomas,
some test cases in JoinHarnessTest
<https://github.com/apache/flink/blob/release-1.4/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/harness/JoinHarnessTest.scala>
show how to verify the emitted watermarks.
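In short, the mechanics boil down to something like this rough sketch (it wraps a
trivial StreamMap operator just for illustration; the actual tests wrap the join
operators under test):
```
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.operators.StreamMap;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;

OneInputStreamOperatorTestHarness<String, String> harness =
        new OneInputStreamOperatorTestHarness<>(
                new StreamMap<>((MapFunction<String, String>) String::toUpperCase));
harness.open();

harness.processElement(new StreamRecord<>("a", 1L));
harness.processWatermark(new Watermark(100L));

// getOutput() holds everything the operator emitted, records and watermarks alike,
// so the emitted watermarks can be asserted on directly.
System.out.println(harness.getOutput());

harness.close();
```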
Hope this helps.
Best,
Xingcan
> On 21
consider
much about the dependencies.
Best,
Xingcan
> On 27 Feb 2018, at 6:38 PM, Stephan Ewen wrote:
>
> My first intuition would be to go for approach #2 for the following reasons
>
> - I expect that in the long run, the scripts will not be that simple to
> maintain. We
Hi Vijay,
normally, maybe there’s no need to checkpoint the event times / watermarks
since they are automatically generated based on the records. What’s your
intention?
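For context, a minimal sketch of the usual setup (the record type and extraction
logic are made up): the assigner re-derives watermarks from the records themselves,
so after a restore they are simply regenerated from the incoming data.
```
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Hypothetical records: (eventTimeMillis, payload).
DataStream<Tuple2<Long, String>> events =
        env.fromElements(Tuple2.of(1_000L, "a"), Tuple2.of(2_000L, "b"));

// Watermarks are derived from the records themselves (here with 5s bounded
// out-of-orderness), not restored from checkpointed state.
events.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<Tuple2<Long, String>>(Time.seconds(5)) {
            @Override
            public long extractTimestamp(Tuple2<Long, String> element) {
                return element.f0;
            }
        });
```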
Best,
Xingcan
> On 27 Feb 2018, at 8:50 PM, vijay kansal wrote:
>
> Hi All
>
> Is there a way to checkp
arge. Will there be any extra overhead after introducing
this feature?
Thanks,
Xingcan
On Mon, Jan 6, 2025 at 4:11 PM Weiqing Yang
wrote:
> Hi all,
>
> Just a gentle reminder regarding the proposal I shared on early fire
> support for Flink SQL interval joins. I’d greatly appreci
Hi Hang,
Just want to follow up on this. What's the current progress? Are there any
unassigned tickets we can help with?
Best,
Xingcan
On Thu, Dec 12, 2024 at 9:46 PM Hang Ruan wrote:
> Thanks, David & Peter.
>
> I would love to be the RM for jdbc-3.3.0. And the jdbc-3.
+1 (binding)
Best,
Xingcan
On Mon, Jan 27, 2025 at 8:50 PM Venkatakrishnan Sowrirajan
wrote:
> +1 (non-binding)
>
> Regards
> Venkata krishnan
>
>
> On Mon, Jan 27, 2025 at 2:05 PM Weiqing Yang
> wrote:
>
> > Hi All,
> >
> > I'd like to star
Hi Weiqing,
I don't have any more questions. The doc looks good to me.
Thanks,
Xingcan
On Wed, Jan 22, 2025 at 8:46 PM Venkatakrishnan Sowrirajan
wrote:
> Hi Weiqing,
>
> Thanks, that makes sense! Looks like I missed it.
>
> Regards
> Venkata krishnan
>
>
>
Xingcan Cui created FLINK-13849:
---
Summary: The back-pressure monitoring tab in Web UI may cause
errors
Key: FLINK-13849
URL: https://issues.apache.org/jira/browse/FLINK-13849
Project: Flink
Xingcan Cui created FLINK-32171:
---
Summary: Add PostStart hook to flink k8s operator helm
Key: FLINK-32171
URL: https://issues.apache.org/jira/browse/FLINK-32171
Project: Flink
Issue Type: New
Xingcan Cui created FLINK-33547:
---
Summary: Primitive SQL array type after upgrading to Flink 1.18.0
Key: FLINK-33547
URL: https://issues.apache.org/jira/browse/FLINK-33547
Project: Flink
Issue
Xingcan Cui created FLINK-9977:
--
Summary: Refine the docs for Table/SQL built-in functions
Key: FLINK-9977
URL: https://issues.apache.org/jira/browse/FLINK-9977
Project: Flink
Issue Type
Xingcan Cui created FLINK-10008:
---
Summary: Improve the LOG function in Table to support bases less
than 1
Key: FLINK-10008
URL: https://issues.apache.org/jira/browse/FLINK-10008
Project: Flink
Xingcan Cui created FLINK-10009:
---
Summary: Fix the casting problem for function TIMESTAMPADD in Table
Key: FLINK-10009
URL: https://issues.apache.org/jira/browse/FLINK-10009
Project: Flink
Xingcan Cui created FLINK-10014:
---
Summary: Fix the decimal literal parameter problem for arithmetic
functions in Table
Key: FLINK-10014
URL: https://issues.apache.org/jira/browse/FLINK-10014
Project
Xingcan Cui created FLINK-10049:
---
Summary: Unify the processing logic for NULL arguments in SQL
built-in functions
Key: FLINK-10049
URL: https://issues.apache.org/jira/browse/FLINK-10049
Project: Flink
Xingcan Cui created FLINK-10108:
---
Summary: DATE_FORMAT function in sql test throws a
NumberFormatException
Key: FLINK-10108
URL: https://issues.apache.org/jira/browse/FLINK-10108
Project: Flink
Xingcan Cui created FLINK-10201:
---
Summary: The batchTestUtil was mistakenly used in some stream sql
tests
Key: FLINK-10201
URL: https://issues.apache.org/jira/browse/FLINK-10201
Project: Flink
Xingcan Cui created FLINK-10323:
---
Summary: A single backslash cannot be successfully parsed in Java
Table API
Key: FLINK-10323
URL: https://issues.apache.org/jira/browse/FLINK-10323
Project: Flink
Xingcan Cui created FLINK-10463:
---
Summary: Null literal cannot be properly parsed in Java Table API
function call
Key: FLINK-10463
URL: https://issues.apache.org/jira/browse/FLINK-10463
Project: Flink
Xingcan Cui created FLINK-10684:
---
Summary: Improve the CSV reading process
Key: FLINK-10684
URL: https://issues.apache.org/jira/browse/FLINK-10684
Project: Flink
Issue Type: Improvement
Xingcan Cui created FLINK-11227:
---
Summary: The DescriptorProperties contains some bounds checking
errors
Key: FLINK-11227
URL: https://issues.apache.org/jira/browse/FLINK-11227
Project: Flink
Xingcan Cui created FLINK-11769:
---
Summary: The estimateDataTypesSize method in FlinkRelNode causes
NPE for Multiset
Key: FLINK-11769
URL: https://issues.apache.org/jira/browse/FLINK-11769
Project
Xingcan Cui created FLINK-12116:
---
Summary: Args autocast will cause exception for plan
transformation in TableAPI
Key: FLINK-12116
URL: https://issues.apache.org/jira/browse/FLINK-12116
Project: Flink
Xingcan Cui created FLINK-31021:
---
Summary: JavaCodeSplitter doesn't split static method properly
Key: FLINK-31021
URL: https://issues.apache.org/jira/browse/FLINK-31021
Project: Flink
Xingcan Cui created FLINK-34583:
---
Summary: Bug for dynamic table option hints with multiple CTEs
Key: FLINK-34583
URL: https://issues.apache.org/jira/browse/FLINK-34583
Project: Flink
Issue
Xingcan Cui created FLINK-34633:
---
Summary: Support unnesting array constants
Key: FLINK-34633
URL: https://issues.apache.org/jira/browse/FLINK-34633
Project: Flink
Issue Type: New Feature
Xingcan Cui created FLINK-34723:
---
Summary: Parquet writer should restrict map keys to be not null
Key: FLINK-34723
URL: https://issues.apache.org/jira/browse/FLINK-34723
Project: Flink
Issue
Xingcan Cui created FLINK-34926:
---
Summary: Adaptive auto parallelism doesn't work for a query
Key: FLINK-34926
URL: https://issues.apache.org/jira/browse/FLINK-34926
Project: Flink
Issue
Xingcan Cui created FLINK-35485:
---
Summary: JobMaster failed with "the job xx has not been finished"
Key: FLINK-35485
URL: https://issues.apache.org/jira/browse/FLINK-35485
Project: Flink
Xingcan Cui created FLINK-35486:
---
Summary: Potential sql expression generation issues on SQL gateway
Key: FLINK-35486
URL: https://issues.apache.org/jira/browse/FLINK-35486
Project: Flink
Xingcan Cui created FLINK-24007:
---
Summary: Support Avro timestamp conversion with precision greater
than three
Key: FLINK-24007
URL: https://issues.apache.org/jira/browse/FLINK-24007
Project: Flink
Xingcan Cui created FLINK-6936:
--
Summary: Add multiple targets support for custom partitioner
Key: FLINK-6936
URL: https://issues.apache.org/jira/browse/FLINK-6936
Project: Flink
Issue Type
Xingcan Cui created FLINK-7245:
--
Summary: Enhance the operators to support holding back watermarks
Key: FLINK-7245
URL: https://issues.apache.org/jira/browse/FLINK-7245
Project: Flink
Issue
Xingcan Cui created FLINK-7853:
--
Summary: Reject table function outer joins with predicates in
Table API
Key: FLINK-7853
URL: https://issues.apache.org/jira/browse/FLINK-7853
Project: Flink
Xingcan Cui created FLINK-7854:
--
Summary: Reject lateral table outer joins with predicates in SQL
Key: FLINK-7854
URL: https://issues.apache.org/jira/browse/FLINK-7854
Project: Flink
Issue Type
Xingcan Cui created FLINK-7865:
--
Summary: Remove predicate restrictions on TableFunction left outer
join
Key: FLINK-7865
URL: https://issues.apache.org/jira/browse/FLINK-7865
Project: Flink
Xingcan Cui created FLINK-8094:
--
Summary: Support other types for ExistingField rowtime extractor
Key: FLINK-8094
URL: https://issues.apache.org/jira/browse/FLINK-8094
Project: Flink
Issue Type
Xingcan Cui created FLINK-8257:
--
Summary: Unify the value checks for setParallelism()
Key: FLINK-8257
URL: https://issues.apache.org/jira/browse/FLINK-8257
Project: Flink
Issue Type