zhangminglei created FLINK-17921:
Summary: RpcGlobalAggregateManager#updateGlobalAggregate would
cause akka.timeout
Key: FLINK-17921
URL: https://issues.apache.org/jira/browse/FLINK-17921
Project: Flink
zhangminglei created FLINK-17919:
Summary: KafkaConsumerThread should add ratelimiter functionality
as well
Key: FLINK-17919
URL: https://issues.apache.org/jira/browse/FLINK-17919
Project: Flink
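For context, a minimal sketch of what rate limiting in the consumer's fetch loop could look like, using Guava's RateLimiter. This is an illustration only, not the FLINK-17919 patch; the class name and the byte budget are assumptions.

import com.google.common.util.concurrent.RateLimiter;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical helper, not the actual FLINK-17919 change: throttle fetches
// by acquiring one Guava RateLimiter permit per fetched byte.
public final class ThrottledPoll {

    // Assumed budget of ~1 MB/s; a real implementation would make this configurable.
    private final RateLimiter limiter = RateLimiter.create(1_000_000);

    public ConsumerRecords<byte[], byte[]> poll(KafkaConsumer<byte[], byte[]> consumer) {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
        int bytes = 0;
        for (ConsumerRecord<byte[], byte[]> record : records) {
            bytes += record.serializedValueSize();
        }
        if (bytes > 0) {
            limiter.acquire(bytes); // blocks until the byte budget allows more fetching
        }
        return records;
    }
}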
zhangminglei created FLINK-10114:
Summary: Support Orc for StreamingFileSink
Key: FLINK-10114
URL: https://issues.apache.org/jira/browse/FLINK-10114
Project: Flink
Issue Type: Sub-task
Hi, I would like to ask two questions.
1. Currently, what are the limitations of Flink's join? And what is the
essential difference between a batch join and a stream join?
2. What are the shortcomings of the current exactly-once guarantee?
Thanks
minglei.
zhangminglei created FLINK-9985:
---
Summary: Incorrect parameter order in document
Key: FLINK-9985
URL: https://issues.apache.org/jira/browse/FLINK-9985
Project: Flink
Issue Type: Bug
zhangminglei created FLINK-9982:
---
Summary: NPE in EnumValueSerializer#copy
Key: FLINK-9982
URL: https://issues.apache.org/jira/browse/FLINK-9982
Project: Flink
Issue Type: Bug
zhangminglei created FLINK-9901:
---
Summary: Refactor InputStreamReader to Channels.newReader
Key: FLINK-9901
URL: https://issues.apache.org/jira/browse/FLINK-9901
Project: Flink
Issue Type: Sub-task
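A hedged before/after sketch of the refactoring named in the summary (illustrative only, not the actual patch):

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.channels.Channels;
import java.nio.charset.StandardCharsets;

public class ReaderFactory {
    // Before: InputStreamReader decodes through its own internal stream buffering.
    static Reader before(InputStream in) {
        return new InputStreamReader(in, StandardCharsets.UTF_8);
    }

    // After: Channels.newReader builds the decoder on top of an NIO channel.
    static Reader after(InputStream in) {
        return Channels.newReader(Channels.newChannel(in), StandardCharsets.UTF_8.name());
    }
}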
zhangminglei created FLINK-9900:
---
Summary: Failed to testRestoreBehaviourWithFaultyStateHandles
(org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)
Key: FLINK-9900
URL: https://issues.apache.org
Hi, Craig
The patch you attached there does not seem to follow the Flink community's
contribution process. Could you please open a GitHub pull request for it instead?
Cheers
Minglei
> On Jun 28, 2018, at 3:56 AM, Foster, Craig wrote:
>
> Pinging. Is it possible for someone to take a look at this or is this message
> going
it ends up
with too many timers on the Java heap, which might lead to an OOM.
Cheers
Shimin
> On Jun 27, 2018, at 5:34 PM, zhangminglei <18717838...@163.com> wrote:
>
> Aitozi
>
> From my side, I do not think distinct is very easy to deal with, even when
> working together with ka
me.
>
> However, the time resolution of this operator is 1 millisecond, so it ends up
> with too many timers on the Java heap, which might lead to an OOM.
>
> Cheers
> Shimin
>
> 2018-06-27 17:34 GMT+08:00 zhangminglei <18717838...@163.com>:
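To make the timer-pressure point concrete, here is a minimal sketch (the class and names are invented, not code from this thread) of coalescing timers to one-minute resolution, so that many elements share a single timer instead of one timer per millisecond:

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical illustration: coalescing timers keeps the timer count
// (and heap usage) bounded. Must run on a keyed stream for timers to work.
public class CoalescedTimerFunction extends ProcessFunction<String, String> {

    private static final long RESOLUTION_MS = 60_000L; // 1-minute buckets

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        long target = ctx.timerService().currentProcessingTime() + RESOLUTION_MS;
        // Round down so every timer within the same minute collapses into one.
        long coalesced = target - (target % RESOLUTION_MS);
        ctx.timerService().registerProcessingTimeTimer(coalesced);
        out.collect(value);
    }
}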
80 tasks running (a task here is
a consumer operator) for the 80 partitions if you set the Kafka partition
count to 80.
DataStream<String> dataStream =
env.addSource(kafkaConsumer08).setParallelism(80);
Cheers
Minglei
> On Jun 25, 2018, at 6:02 PM, Amol S - iProgrammer wrote:
>
> Thanks zha
Hi, Amol
As @Sihua said. In my case too, if the Kafka topic has 80 partitions, I will
set the job's source operator parallelism to 80 as well.
Cheers
Minglei
> On Jun 25, 2018, at 5:39 PM, sihua zhou wrote:
>
> Hi Amol,
>
> I think if you set the parallelism of the source node equal to the number of
> the p
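A self-contained sketch of that advice; the broker address, group id, and topic name below are illustrative assumptions:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08;

public class KafkaParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");
        props.setProperty("zookeeper.connect", "localhost:2181"); // required by the 0.8 consumer

        FlinkKafkaConsumer08<String> kafkaConsumer08 =
                new FlinkKafkaConsumer08<>("my-topic", new SimpleStringSchema(), props);

        // One source subtask per Kafka partition: 80 partitions -> parallelism 80.
        DataStream<String> dataStream = env.addSource(kafkaConsumer08).setParallelism(80);

        dataStream.print();
        env.execute("kafka-parallelism-example");
    }
}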
Hi, Community
By the way, there is a very important feature that I think should exist.
Currently, the BucketingSink has no way to signal when a bucket is ready for
users to consume. This becomes very obvious when Flink feeds an offline
(batch) system; in business terms we call that real-time/offline integration. In
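As a purely hypothetical illustration of the missing hook described above (BucketingSink has no such interface; the name and signature are invented here):

import org.apache.hadoop.fs.Path;

// Invented interface, only to illustrate the proposed feature: notify the
// offline side once no file in a bucket is still in-progress or pending.
public interface BucketReadyListener {
    void onBucketReady(Path bucketPath);
}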
You are welcome; please let me know if you still have questions.
> On Jun 25, 2018, at 1:43 AM, Soheil Pourbafrani wrote:
>
> Thanks for verification!
>
> On Sun, Jun 24, 2018 at 2:25 PM, zhangminglei <18717838...@163.com> wrote:
> Hi, S
Yes, it should be "exit". Thanks, Ted Yu. Exactly right!
Cheers
Zhangminglei
> On Jun 23, 2018, at 12:40 PM, Ted Yu wrote:
>
> For #1, the word "exist" should be "exit", right?
> Thanks
>
> Original message
> From: zhangminglei <18717838...@163.com>
> Dat
se and when the data for the specified bucket is ready. So, you can
take a look at https://issues.apache.org/jira/browse/FLINK-9609.
Cheers
Zhangminglei
> On Jun 23, 2018, at 8:23 AM, sagar loke wrote:
>
> Hi Zhangminglei,
>
Congrats Piotr!
Cheers
Minglei
> On Jun 23, 2018, at 3:26 AM, Till Rohrmann wrote:
>
> Hi everybody,
>
> On behalf of the PMC I am delighted to announce Piotr Nowojski as a new
> Flink
> committer!
>
> Piotr has been an active member of our community for more than a year.
> Among other things, he contribu
Congrats, Sihua!
Cheers
Minglei.
> On Jun 22, 2018, at 9:17 PM, Till Rohrmann wrote:
>
> Hi everybody,
>
> On behalf of the PMC I am delighted to announce Sihua Zhou as a new Flink
> committer!
>
> Sihua has been an active member of our community for several months. Among
> other things, he helped develop
abian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/udfs.html#scalar-functions
>
> 2018-06-19 16:46 GMT+02:00 zhangminglei <18717838...@163.com>:
>
>> Hi, Fabian. Absolutely, I am using Flink 1.5.0 for this. A big CASE WHEN
>>
r Flink 1.5.0.
> If you can't upgrade yet, you can also implement a user-defined function that
> evaluates the big CASE WHEN statement.
>
> Best, Fabian
>
> 2018-06-19 16:27 GMT+02:00 zhangminglei <18717838...@163.com>:
> Hi
0.219.252','116.31.114.202','116.31.114.204',\
'116.31.114.206','116.31.114.208') \
then '佛山力通电信_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('183.232.169.11','183.232.169.12','183.232.169.13','183.232.169.14','183.232.169.15','183.232.169.16',\
'183.232.169.17','183.232.169.18') \
then '佛山力通移动_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('112.93.112.11','112.93.112.12','112.93.112.13','112.93.112.14','112.93.112.15','112.93.112.16','112.93.112.17','112.93.112.18')
\
then '佛山力通联通_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('114.67.56.79','114.67.56.80','114.67.56.83','114.67.56.84','114.67.56.87','114.67.56.88','114.67.56.112',\
'114.67.56.113','114.67.56.116','114.67.56.117','114.67.60.214','114.67.60.215','114.67.54.111')
\
then '佛山力通BGP_GSLB' \
when host in
('114.67.54.112','114.67.56.95','114.67.56.96','114.67.54.12','114.67.54.13','114.67.56.93','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106',\
'114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','183.60.219.248','114.67.60.201','114.67.60.203','114.67.60.205','114.67.60.207')
\
then '佛山力通BGP_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('183.240.167.24','183.240.167.25','183.240.167.26','183.240.167.27','183.240.167.28','183.240.167.29',\
'183.240.167.30','183.240.167.31') \
then '佛山互联移动_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('43.255.228.11','43.255.228.12','43.255.228.13','43.255.228.14','43.255.228.15','43.255.228.16',\
'43.255.228.17') \
then '佛山互联BGP_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in
('43.255.228.18','43.255.228.19','43.255.228.20') \
then '佛山互联BGP_GSLB' \
when host in ('mapi.appvipshop.com') and mapi_ip in ('43.255.228.21') \
then '佛山互联BGP_GSLB' else '其它' end as access_type from dw_log_app_api_monitor_ds
Thanks
Zhangminglei
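A minimal sketch of Fabian's UDF suggestion applied to this query; the class name is made up and the IP set is abbreviated to one group from the statement above:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.table.functions.ScalarFunction;

public class AccessTypeFunction extends ScalarFunction {

    // Abbreviated sample of one IP group from the CASE WHEN above.
    private static final Set<String> TELECOM_IPS = new HashSet<>(Arrays.asList(
            "116.31.114.202", "116.31.114.204", "116.31.114.206", "116.31.114.208"));

    public String eval(String host, String mapiIp) {
        if ("mapi.appvipshop.com".equals(host) && TELECOM_IPS.contains(mapiIp)) {
            return "佛山力通电信_GSLB";
        }
        // ... the remaining IP groups from the CASE WHEN would follow here ...
        return "其它";
    }
}

Registered via tableEnv.registerFunction("access_type", new AccessTypeFunction()), the query then shrinks to select access_type(host, mapi_ip) ... from dw_log_app_api_monitor_ds.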
more unit tests in there.
> 3. Are there plans to add support for other data types ?
Ans: Yes. I have been busy these days; in a couple of days I will add the
remaining data types and provide more tests for them.
Cheers
Zhangminglei
> On Jun 19, 2018, at 9:10 AM, sagar loke wrote:
>
> Tha
sink
> 2. Flink ML on stream
>
>
>> On Jun 17, 2018, at 8:34 AM, zhangminglei <18717838...@163.com> wrote:
>>
>> Actually, I have had an idea: how about supporting Hive on Flink? Lots of
>> business logic is written in Hive SQL, and users want to tra
Actually, I have had an idea: how about supporting Hive on Flink? Lots of
business logic is written in Hive SQL, and users want to migrate from
MapReduce to Flink without changing the SQL.
Zhangminglei
> On Jun 17, 2018, at 8:11 PM, zhangminglei <18717838...@163.com> wrote:
>
> Hi, S
wse/FLINK-9411>
For the ORC format, currently only basic data types are supported, such as
Long, Boolean, Short, Integer, Float, Double, and String.
Best
Zhangminglei
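For illustration, a schema using only those basic types, written with the Apache ORC library's TypeDescription (the field names are made up):

import org.apache.orc.TypeDescription;

public class OrcSchemaExample {
    public static void main(String[] args) {
        // Only the basic types listed above; complex types (lists, maps,
        // nested structs) are what remains unsupported.
        TypeDescription schema = TypeDescription.createStruct()
                .addField("id", TypeDescription.createLong())
                .addField("count", TypeDescription.createInt())
                .addField("flag", TypeDescription.createBoolean())
                .addField("code", TypeDescription.createShort())
                .addField("ratio", TypeDescription.createFloat())
                .addField("score", TypeDescription.createDouble())
                .addField("name", TypeDescription.createString());

        System.out.println(schema); // struct<id:bigint,count:int,...>
    }
}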
> On Jun 17, 2018, at 11:11 AM, sagar loke wrote:
>
> We are eagerly waiting for
>
> - Extends Streaming Sinks:
> - Bu
At this moment, +1 from my side for maintaining bash scripts.
Mingleizhang
> On Mar 27, 2018, at 9:42 PM, Piotr Nowojski wrote:
>
> +1
>
> I would personally go with Python, but I see the merit of Kostas’s arguments
> in favour of Java. Regardless, based on my experience with maintaining bash
> scripts,
Hi Timo,
Thank you for providing so much information.
- I have some thoughts on end-to-end tests, for example those covering Kafka
and Elasticsearch. We cannot run such end-to-end tests in the IDE to debug and
set breakpoints, so for those we still need to implement the logic in bash
scripts like dow