Hi Jaqie:
I think you can take a look at temporal tables in the blink planner (using
LookupableTableSource). [1]
With processing time, each incoming record triggers a lookup query against the
external table (storage like HBase or JDBC) in real time.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/tab
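As a sketch of what such a processing-time lookup join looks like in the blink planner (table names, fields, and the dimension table `user_dim` are illustrative, not from this thread; connector properties are elided):

```sql
-- Probe side: each incoming order triggers a point lookup against the
-- dimension table, which is backed by a LookupableTableSource (HBase, JDBC, ...).
CREATE TABLE orders (
  order_id STRING,
  user_id  STRING,
  proctime AS PROCTIME()   -- processing-time attribute used by the lookup join
) WITH (
  -- connector properties for your source go here
);

-- FOR SYSTEM_TIME AS OF marks the processing-time lookup join.
SELECT o.order_id, u.name
FROM orders AS o
JOIN user_dim FOR SYSTEM_TIME AS OF o.proctime AS u
  ON o.user_id = u.user_id;
```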
Great, thanks for the update.
On Tue, Nov 12, 2019 at 8:51 AM Zhu Zhu wrote:
> There is no plan for release 1.9.2 yet.
> Flink 1.10.0 is planned to be released in early January.
>
> Thanks,
> Zhu Zhu
>
> srikanth flink wrote on Mon, Nov 11, 2019 at 9:53 PM:
>
>> Zhu Zhu,
>>
>> That's awesome and is what I'm
There is no plan for release 1.9.2 yet.
Flink 1.10.0 is planned to be released in early January.
Thanks,
Zhu Zhu
srikanth flink wrote on Mon, Nov 11, 2019 at 9:53 PM:
> Zhu Zhu,
>
> That's awesome and is what I'm looking for.
> Any update on when would be the next release date?
>
> Thanks
> Srikanth
>
> On M
Hi Hung,
Your suggestion is reasonable. An example with a pluggable source and sink
would make this more user-friendly; you could open a JIRA issue to see if
anyone wants to improve this.
IMO, it is not very difficult to implement, because the source and sink
in Flink have two unifie
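The injection pattern the testing docs describe can be sketched without any Flink dependency at all. A minimal illustration (class and method names are hypothetical, not a Flink API): production code wires in real connectors, while a test injects an in-memory source and sink.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical sketch of "pluggable sources and sinks": the job logic depends
// only on interfaces, so tests can swap in in-memory stand-ins for real
// connectors (Kafka, HBase, ...).
public class PluggableJob {
    private final Supplier<List<String>> source;
    private final Consumer<String> sink;

    public PluggableJob(Supplier<List<String>> source, Consumer<String> sink) {
        this.source = source;
        this.sink = sink;
    }

    public void run() {
        for (String record : source.get()) {
            sink.accept(record.toUpperCase()); // stand-in for the real transformation
        }
    }

    public static void main(String[] args) {
        // Test wiring: in-memory source and sink instead of external systems.
        List<String> out = new ArrayList<>();
        new PluggableJob(() -> List.of("a", "b"), out::add).run();
        System.out.println(out); // prints [A, B]
    }
}
```

In a real Flink job the same idea applies with `SourceFunction`/`SinkFunction` parameters passed into the method that builds the pipeline.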
Thanks! I'll check it out.
Best
Lu
On Thu, Nov 7, 2019 at 10:24 PM Yun Tang wrote:
> Hi Lu
>
>
>
> I think RocksDB native metrics [1] could help.
>
>
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#rocksdb-native-metrics
>
>
>
> Best
>
> Yun Tang
>
>
>
> *F
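For reference, those native metrics are enabled per-option in flink-conf.yaml; a minimal sketch (option names taken from the linked 1.9 page, but verify against your version, since they are disabled by default and collecting them adds some overhead):

```yaml
# Each option exposes one RocksDB property as a Flink metric.
state.backend.rocksdb.metrics.estimate-num-keys: true
state.backend.rocksdb.metrics.cur-size-all-mem-tables: true
state.backend.rocksdb.metrics.num-running-compactions: true
```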
Hi folks,
We have a Flink streaming Table / SQL job that we are looking to migrate from
an older Flink release (1.6.x) to 1.9. As part of doing so, we have seen a few
errors that I am trying to work around. Would appreciate any help / pointers.
Job essentially involv
Zhu Zhu,
That's awesome and is what I'm looking for.
Any update on when would be the next release date?
Thanks
Srikanth
On Mon, Nov 11, 2019 at 3:40 PM Zhu Zhu wrote:
> Hi Srikanth,
>
> Is this issue what you encounter? FLINK-12122: a job would tend to fill
> one TM before using another.
> If
Vina,
I've set parallelism to 6, while max parallelism is 128.
Thanks
Srikanth
On Mon, Nov 11, 2019 at 3:18 PM vino yang wrote:
> Hi srikanth,
>
> What's your job's parallelism?
>
> In some scenarios, many operators are chained with each other. If its
> parallelism is 1, it would just use a sin
Hi guys,
I found the testing part mentioned
make sources and sinks pluggable in your production code and inject special
test sources and test sinks in your tests.
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/testing.html#testing-flink-jobs
I think it would be useful to have
Hi Srikanth,
Is this issue what you encounter? FLINK-12122: a job would tend to fill one
TM before using another.
If it is, you may need to wait for the release 1.9.2 or 1.10, since it is
just fixed.
Thanks,
Zhu Zhu
vino yang wrote on Mon, Nov 11, 2019 at 5:48 PM:
> Hi srikanth,
>
> What's your job's parall
Hi srikanth,
What's your job's parallelism?
In some scenarios, many operators are chained with each other. If its
parallelism is 1, it would just use a single slot.
Best,
Vino
srikanth flink wrote on Wed, Nov 6, 2019 at 10:03 PM:
> Hi there,
>
> I'm running Flink with 3 node cluster.
> While running my jobs(
Hi Zhong,
It looks like you are assigning tasks to different slot sharing groups to
force them not to share the same slot.
So you will need at least 2 slots for the streaming job to start running
successfully.
Killing one of the 2 TMs, each holding one slot, will lead to insufficient
slots and your job will han
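The counting rule behind this can be sketched in plain Java (this is illustrative arithmetic, not a Flink API): with slot sharing, a job needs one slot per parallel subtask per sharing group, i.e. the sum over groups of the maximum operator parallelism inside each group.

```java
import java.util.List;
import java.util.Map;

// Illustrative only: required slots = sum over slot sharing groups of the
// max operator parallelism within each group.
public class SlotCount {
    static int requiredSlots(Map<String, List<Integer>> parallelismByGroup) {
        return parallelismByGroup.values().stream()
                .mapToInt(ps -> ps.stream().mapToInt(Integer::intValue).max().orElse(0))
                .sum();
    }

    public static void main(String[] args) {
        // Two sharing groups with parallelism-1 operators, as in the thread:
        // the job needs 2 slots even though every operator has parallelism 1.
        System.out.println(requiredSlots(Map.of(
                "group-a", List.of(1),
                "group-b", List.of(1)))); // prints 2
    }
}
```

With a single (default) sharing group the same operators would fit into one slot, which is why splitting groups raises the minimum slot requirement.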