Hello!
I was trying to compile Flink 1.11.3 from GitHub (branch release-1.11.3),
but I'm getting the following "cannot find symbol" error
(full trace at the end of the email).
Here is the output of mvn -v -- I'm using Maven 3.2.5.
Apache Maven 3.2.5 (12a6b3acb947671f09b81f4909
Hello,
We have two Flink jobs that communicate with each other through a Kafka topic.
Both jobs use checkpoints with EXACTLY_ONCE semantics.
We have observed the following behaviour and would like to ask whether it
is expected or possibly a bug.
When the first job produces a me
Hi Danny,
Yes, I tried implementing the DataTypeFactory for the UDF using
TypeInformationRawType (which is deprecated, by the way, and there's no
support for RawType in the conversion); it didn't help.
I did manage to get the conversion working using
TableEnvironment.toAppendStream (I was previously directl
Hi Andrew,
According to the error, you can check the file permissions of
"/test/python-dist-78654584-bda6-4c76-8ef7-87b6fd256e4f/python-files/site-packages/site-packages/pyflink/bin/pyflink-udf-runner.sh"
Normally, the permissions on this script would be
-rwxr-xr-x
Best,
Xingbo
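The fix Xingbo describes can be sketched in plain Python. This is a minimal, self-contained illustration that restores the expected -rwxr-xr-x (octal 0o755) permissions; it uses a temporary file as a stand-in for pyflink-udf-runner.sh, since the real path from the error message is environment-specific.

```python
import os
import stat
import tempfile

# Stand-in for pyflink-udf-runner.sh (a temp file keeps this sketch
# self-contained; on a real install you would target the actual script
# path from the error message instead).
fd, script = tempfile.mkstemp(suffix=".sh")
os.close(fd)

# Restore the expected -rwxr-xr-x permissions (octal 0o755).
os.chmod(script, 0o755)

mode = stat.S_IMODE(os.stat(script).st_mode)
print(oct(mode))                               # 0o755
print(stat.filemode(os.stat(script).st_mode))  # -rwxr-xr-x

os.remove(script)
```

On a real installation the equivalent one-liner would be a chmod 755 on the script itself.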
Andrew Kram
Thanks Deepak.
Does this mean streaming from HBase is not possible using the current Streaming API?
Could you also shed some light on HBase checkpointing? I referred to the
URL below to implement checkpointing; however, in the example I see count is
passed in the SourceFunction ( SourceFunction) I
I would suggest another approach here.
1. Write a job that reads from HBase, checkpoints, and pushes the data to a
broker such as Kafka.
2. A Flink streaming job would then be the second job, reading from Kafka
and processing the data.
With the separation of concerns as above, maintaining it would be
simpler.
Th
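The two-job split suggested above can be modelled with a toy simulation in plain Python (no HBase, Kafka, or Flink involved; all names here are illustrative stand-ins). Job 1 scans the source, tracks a checkpointable offset, and pushes records to a broker; job 2 consumes from the broker independently.

```python
from collections import deque

broker = deque()                              # stands in for the Kafka topic
hbase_rows = [f"row-{i}" for i in range(5)]   # stands in for an HBase scan

# Job 1: read from the source, push downstream, track a checkpoint offset.
checkpointed_offset = 0
for row in hbase_rows:
    broker.append(row)
    checkpointed_offset += 1   # what job 1 would persist in its checkpoint

# Job 2: a separate consumer processes whatever reached the broker.
processed = [record.upper() for record in broker]

print(checkpointed_offset)  # 5
print(processed[0])         # ROW-0
```

The point of the split is visible even in the toy: job 2 never touches HBase, so each side can be restarted, scaled, or rewritten without affecting the other.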
Hi Team,
Kindly help me with some inputs. I am using Flink 1.12.
Regards,
Sunitha.
On Thursday, December 24, 2020, 08:34:00 PM GMT+5:30,
s_penakalap...@yahoo.com wrote:
Hi Team,
I recently encountered one use case in my project, as described below:
My data source is HBase. We receive huge v
Hi,
I am building an alerting system where, based on some input events, I need to
raise an alert from the user-defined aggregate function.
My first approach was to use an asynchronous REST API to send alerts outside
the task slot. But this obviously involves I/O from within the task, and if I
under
Hi, Jiazhi
> When DataStream is converted to table, eventTime is converted to
> rowTime. Rowtime is 8 hours slow. How to solve this problem?
The reason is that the only data type that can be used to define an event
time in Table/SQL is TIMESTAMP(3), and the TIMESTAMP type isn't related to your loca
> SQL parse failed. Encount
What syntax did you use?
> TypeConversions.fromDataTypeToLegacyInfo cannot convert a plain RAW type
back to TypeInformation.
Did you try to construct the type information with a fresh
TypeInformationRawType?
Yuval Itzchakov wrote on Thursday, December 24, 2020, at 7:24 PM:
> An expansion to
Hi Taras ~
There is a lookup cache for temporal joins, but it is disabled by default;
see [1]. That means that, by default, Flink SQL would look up the external
database for each record from the JOIN's LHS.
Did you use the temporal table join syntax or the normal stream-stream join
syntax? The temporal table join u
Hi, Nick ~
The behavior is as expected, because the Kafka source/sink relies on
checkpoints to implement its exactly-once write semantics. A checkpoint
snapshots the state at a point in time and is used for recovery; the
current internals of the Kafka sink are that it writes to Kafka but only
commits
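The write-then-commit-on-checkpoint behavior described above can be modelled with a small toy in plain Python (not the real Kafka client or Flink sink; the class and method names are invented for illustration): records are written immediately but only become visible to a read-committed consumer once a checkpoint completes.

```python
class ToyTransactionalSink:
    """Toy model of a sink that commits only on checkpoint completion."""

    def __init__(self):
        self.uncommitted = []  # written to the "topic", transaction open
        self.committed = []    # visible to a read_committed consumer

    def write(self, record):
        # Written immediately, but still inside the open transaction.
        self.uncommitted.append(record)

    def on_checkpoint_complete(self):
        # Commit the transaction: records become visible downstream.
        self.committed.extend(self.uncommitted)
        self.uncommitted.clear()

sink = ToyTransactionalSink()
sink.write("event-1")
sink.write("event-2")

# Between checkpoints, a read_committed consumer sees no records yet.
print(len(sink.committed))   # 0

sink.on_checkpoint_complete()
print(sink.committed)        # ['event-1', 'event-2']
```

This is why a downstream job reading with read-committed isolation only observes the upstream job's output at checkpoint intervals rather than immediately.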
Hi,
I am using Flink in Zeppelin and trying to execute a UDF defined in Python.
The problem is that I keep getting the following permission-denied error in
the log:
Caused by:
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
java.io.IOException: Can