but everything worked fine. Which Flink
> version are you using?
>
> Inner joins are a Flink 1.5 feature.
>
>
> On 27.07.18 at 13:28, Amol S - iProgrammer wrote:
>
> Table master = table1.filter("ns === 'Master'").select("o as master,
>> '
> BasicDBObject is not recognized
> as a POJO by Flink. A POJO is required so that the Table API knows the
> types of the fields for subsequent operations.
>
> The easiest way is to implement your own scalar function, e.g. an
> `accessBasicDBObject(obj, key)`.
>
> Regards,
> Timo
>
>
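For reference, a minimal sketch of such a scalar function, assuming the MongoDB Java driver is on the classpath and that returning the field as a String is sufficient (the class name and usage are illustrative, not Timo's actual code):

import com.mongodb.BasicDBObject;
import org.apache.flink.table.functions.ScalarFunction;

// Extracts a single field from a BasicDBObject so the Table API can work with it.
public class AccessBasicDBObject extends ScalarFunction {
    public String eval(BasicDBObject obj, String key) {
        Object value = (obj == null) ? null : obj.get(key);
        return (value == null) ? null : value.toString();
    }
}

It would be registered once with tableEnv.registerFunction("accessBasicDBObject", new AccessBasicDBObject()); and then used inside expressions such as table.select("accessBasicDBObject(o, 'ns')").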
onId")`
> operation. See also [1] under "Value access functions".
>
> Regards,
> Timo
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.5/
> dev/table/tableApi.html#built-in-functions
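A small sketch of what the value access functions look like in practice, assuming `master` is a Table with a composite (POJO) column named `o` (all names are illustrative):

// COMPOSITE.get(STRING) reads a single field, ANY.flatten() expands all fields.
Table ids = master.select("o.get('_id') as id");
Table allFields = master.select("o.flatten()");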
>
>
> On 27.07.18 at 09:10, Amol S - iProgrammer wrote:
>
Hello Fabian,
I am streaming my MongoDB oplog using Flink and want to use the Flink Table API
to join multiple tables. My code looks like:
DataStream streamSource = env
.addSource(kafkaConsumer)
.setParallelism(4);
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
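For context, a minimal sketch of how the stream could then be registered so the Table API sees typed columns, assuming the stream elements are POJOs with the fields named below (names are illustrative):

// Make the DataStream visible to the Table API under the name "oplog".
tableEnv.registerDataStream("oplog", streamSource, "ns, o, ts");
Table table1 = tableEnv.scan("oplog");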
www.iprogrammer.com
On Thu, Jul 5, 2018 at 7:37 PM, Amol S - iProgrammer
wrote:
My Flink job failed with the exception below:
java.lang.RuntimeException: Buffer pool is destroyed.
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.emitLatencyMarker(RecordWriterOutput.java:147)
~[flink-streaming-java_2.11-1.5.0.jar:1.5.0]
at
org.apache.flink.streaming.api.operators.Abs
Hello,
I am using Flink with Kafka and getting the exception below.
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
helloworld.t-7: 30525 ms has passed since last append
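This timeout usually means records sat in the producer's buffer longer than request.timeout.ms (30 s by default) before being sent. A hedged sketch of producer settings that are commonly adjusted for it (all values are illustrative, not a recommendation for this particular cluster):

import java.util.Properties;

Properties producerProps = new Properties();
producerProps.setProperty("bootstrap.servers", "localhost:9092"); // illustrative
producerProps.setProperty("request.timeout.ms", "120000");        // allow more time per batch
producerProps.setProperty("linger.ms", "5");                      // reduce batching delay
producerProps.setProperty("retries", "3");                        // retry transient broker issues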
---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
Hello folks,
I am trying to write my streaming result into MongoDB using a
RichSinkFunction, as below:
gonogoCustomerApplicationStream.addSink(mongoSink)
where mongoSink is an autowired (i.e. injected) object, and it gives me the
error below:
The implementation of the RichSinkFunction is not serializable.
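A common pattern is to not capture the injected client in the function at all, but to create it in open() and keep it in a transient field. A minimal sketch under that assumption (class name, element type, and connection settings are illustrative):

import com.mongodb.MongoClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class MongoOplogSink extends RichSinkFunction<String> {

    // transient: the client is not serialized with the job graph;
    // it is created on the task manager when the sink starts.
    private transient MongoClient client;

    @Override
    public void open(Configuration parameters) {
        client = new MongoClient("localhost", 27017); // illustrative connection settings
    }

    @Override
    public void invoke(String value) {
        // write 'value' to MongoDB here, e.g.
        // client.getDatabase("db").getCollection("out").insertOne(Document.parse(value));
    }

    @Override
    public void close() {
        if (client != null) {
            client.close();
        }
    }
}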
r query and the distribution of your data.
>
> Best, Fabian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/streaming.html#idle-state-retention-time
>
> 2018-07-04 7:46 GMT+02:00 Amol S - iProgrammer :
>
>> Hello folks,
>>
>> I
Hello folks,
I am using the Flink Table API to join multiple tables and create a single
table from them. I have some doubts in my mind.
1. How long will the query maintain partial results per key, and how does it
maintain the state of each key?
2. If it maintains state in memory, will the memory grow continuously?
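Regarding 2., the state of a non-windowed join does keep growing unless an idle state retention time is configured (see the idle-state-retention-time link quoted above). A minimal sketch, assuming `tableEnv` and a joined `resultTable` from earlier:

import org.apache.flink.api.common.time.Time;
import org.apache.flink.table.api.StreamQueryConfig;
import org.apache.flink.types.Row;

// Keep per-key state for at most 24h after the key was last seen (min 12h).
StreamQueryConfig qConfig = tableEnv.queryConfig();
qConfig.withIdleStateRetentionTime(Time.hours(12), Time.hours(24));

// The config takes effect when the Table is translated back into a DataStream.
tableEnv.toRetractStream(resultTable, Row.class, qConfig).print();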
*iProgrammer*
www.iprogrammer.com
On Mon, Jul 2, 2018 at 4:43 PM, Amol S - iProgrammer
wrote:
> Hello Fabian,
>
> Can you please tell me how to convert a Table back into a DataStream? I just
> want to print the table result.
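For reference, one way this can be done, assuming `resultTable` is the table to print (append vs. retract depends on whether the query can update earlier results):

import org.apache.flink.types.Row;

// Append-only results (e.g. simple projections/filters):
tableEnv.toAppendStream(resultTable, Row.class).print();

// Results that may be updated later (e.g. non-windowed joins/aggregations):
tableEnv.toRetractStream(resultTable, Row.class).print();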
>
> ---
> You can also use Row, but then you cannot rely on automatic type extraction
> and must provide TypeInformation explicitly.
>
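A short sketch of what providing the TypeInformation for Row can look like, assuming `streamSource` is a DataStream of a hypothetical OplogEntry POJO with String fields ns and o (all names are illustrative):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

// Row carries no field types of its own, so they are declared via returns(...).
DataStream<Row> rows = streamSource
        .map(entry -> Row.of(entry.ns, entry.o))
        .returns(new RowTypeInfo(Types.STRING, Types.STRING));

tableEnv.registerDataStream("oplogRows", rows, "ns, o");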
> Amol S - iProgrammer wrote on Mon., 2 Jul 2018 at 12:37:
>
> > Hello Fabian,
> >
> > According to my requirement I cannot create static POJOs.
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/api_concepts.html#pojos
>
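For reference, the kind of class the linked POJO rules describe: public, with a public no-argument constructor and public fields (or getters/setters). A purely illustrative example:

public class OplogEntry {
    public String ns;   // namespace
    public String o;    // document payload, illustrative type
    public long ts;     // timestamp

    public OplogEntry() {}
}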
> 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer :
>
> > Hello Xingcan
> >
> > As mentioned in the mail thread above, I am streaming the MongoDB oplog to join
> > multiple Mongo tables based on some unique key (primary key).
> The type of customerMISMaster is not known to Flink.
> What's the output of
>
> customerMISMaster.printSchema(); ?
>
> Best, Fabian
>
>
>
> 2018-07-02 11:33 GMT+02:00 Amol S - iProgrammer :
>
> > Hello Xingcan
> >
> > DataStream streamSource = env
>
www.iprogrammer.com
On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer
wrote:
> Hello Xingcan
>
> DataStream streamSource = env
> .addSource(kafkaConsumer)
> .setParallelism(4);
>
> StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> The docs can be found here:
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
>
> Best,
> Xingcan
>
>
> > On Jul 2, 201
Hello,
I am streaming the MongoDB oplog using Kafka and Flink and want to join
multiple tables using the Flink Table API, but I have some concerns: is it
possible to join streamed tables in Flink, and if yes, could you please provide
an example of a stream join using the Table API?
I have gone through your dynamic tables documentation.
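A minimal sketch of such a stream join with the Table API, assuming the oplog stream has been registered as a table named "oplog" with columns ns, o, id (all names are illustrative). Note that a non-windowed stream join keeps state per key, so idle state retention matters in practice:

import org.apache.flink.table.api.Table;

// Two views over the same oplog, joined on a key column.
Table master = tableEnv.scan("oplog")
        .filter("ns === 'Master'")
        .select("o as master, id");

Table detail = tableEnv.scan("oplog")
        .filter("ns === 'Detail'")
        .select("o as detail, id as detailId");

Table joined = master
        .join(detail)
        .where("id === detailId")
        .select("master, detail");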
I think maybe @Gordon(CC)
> could give you more useful information.
>
> Best, Sihua
>
>
>
> On 06/25/2018 17:19,Amol S - iProgrammer
> wrote:
>
> I have asked the same kind of question on Stack Overflow as well.
>
> Please answer it ASAP
Hello,
I wrote a streaming program using Kafka and Flink to stream the MongoDB
oplog. I need to maintain the order of streaming within different Kafka
partitions. As global ordering of records is not possible across all
partitions, I need N consumers for N different partitions. Is it possible to
configure this in Flink?
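What this typically looks like, sketched under the assumption of the Kafka 0.11 connector and illustrative topic/property values: the source parallelism is set to the partition count, and Flink assigns partitions to source subtasks, each of which reads its partitions in order:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative
props.setProperty("group.id", "oplog-consumer");

FlinkKafkaConsumer011<String> kafkaConsumer =
        new FlinkKafkaConsumer011<>("oplog-topic", new SimpleStringSchema(), props);

// One source subtask per partition; ordering is preserved within each partition
// as long as downstream operators do not merge partitions (e.g. keyBy the partition key).
DataStream<String> streamSource = env
        .addSource(kafkaConsumer)
        .setParallelism(4); // = number of Kafka partitions, illustrative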
subsequent operator, sorted by time if they were originally sorted in its
> Kafka partition. This is a more scalable approach than total global ordering.
>
> Cheers,
> Andrey
>
> On 20 Jun 2018, at 13:17, Amol S - iProgrammer
> wrote:
>
> Hello Andrey,
>
> In above code als
.setParallelism(1);
> {code}
>
>
> In all the cases, you need to fix the parallelism of the OrderTheRecord
> operator to 1, which makes your job non-scalable and turns it into the
> bottleneck. So global ordering may not be practical in production (but if
> the source's TPS is very low, then maybe it is acceptable).
Global ordering may be impossible in practice, because
> you need to store all the incoming records and re-order all the data for
> every incoming record; you also need to send a retraction message for the
> previous result (because every incoming record might change the global order
> of the records).
Hi,
I have used the Flink streaming API in my application, where the source of the
stream is Kafka. My Kafka producer publishes data in ascending order of time to
different Kafka partitions, and the consumer reads data from these partitions.
However, some Kafka partitions may be slow due to some operational reasons.
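One mechanism that fits this situation (slow partitions under event time) is assigning timestamps and watermarks directly on the Kafka consumer, so watermarks are tracked per partition. A minimal sketch, where extractEventTime is a hypothetical helper that reads the event time out of a record:

import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

// With the assigner set on the consumer, watermarks are generated per Kafka partition,
// so a slow partition holds back the overall watermark instead of producing late data.
kafkaConsumer.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
            @Override
            public long extractTimestamp(String record) {
                return extractEventTime(record); // hypothetical helper
            }
        });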
Hello Ezmlm,
I have implemented code to read the MongoDB oplog and stream this oplog in
Flink, but I need all the records in the order they are coming from the oplog.
First of all, please clarify: does Flink maintain the order of insertion? If yes,
then point me to some source document where I can find how Flink does this.
Hello Ezmlm,
I have gone through the Flink documentation and found it quite interesting, but I
am stuck on one task, i.e. streaming the MongoDB oplog using Flink. Can you help
me figure this out?
---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
*iProgrammer*