Hi Flink users,
I am a big fan of the Table API and we use it extensively on petabytes of data
with ad-hoc queries. In our system, nobody writes table creation
DDL by hand; we inherit the table schema from the schema registry (Avro data in
Kafka) dynamically and create temporary tables in the session.
I am us
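Roughly, the registration looks like the sketch below. It is a simplified
sketch, not our actual code: the Avro-to-SQL type mapping and the connector
options are placeholders you would adapt to your own Kafka/registry setup.

import org.apache.avro.Schema;
import org.apache.flink.table.api.TableEnvironment;

public class DynamicTableRegistrar {

    // Hypothetical Avro-to-SQL type mapping; a real version would also
    // handle logical types, unions/nullability, and nested records.
    private static String toSqlType(Schema fieldSchema) {
        switch (fieldSchema.getType()) {
            case STRING:  return "STRING";
            case INT:     return "INT";
            case LONG:    return "BIGINT";
            case DOUBLE:  return "DOUBLE";
            case BOOLEAN: return "BOOLEAN";
            default:      return "STRING"; // simplification for the sketch
        }
    }

    public static void createTemporaryTable(TableEnvironment tableEnv,
                                             String tableName,
                                             String topic,
                                             Schema avroSchema) {
        StringBuilder columns = new StringBuilder();
        for (Schema.Field field : avroSchema.getFields()) {
            if (columns.length() > 0) {
                columns.append(", ");
            }
            columns.append("`").append(field.name()).append("` ")
                   .append(toSqlType(field.schema()));
        }
        // The DDL is generated, never hand-written; the WITH options below
        // are placeholders.
        String ddl = "CREATE TEMPORARY TABLE `" + tableName + "` ("
                + columns + ") WITH ("
                + " 'connector' = 'kafka',"
                + " 'topic' = '" + topic + "',"
                + " 'properties.bootstrap.servers' = 'broker:9092',"
                + " 'format' = 'avro'"
                + ")";
        tableEnv.executeSql(ddl);
    }
}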
I was wondering if the choice of time characteristic (ingestion, processing
or event time) makes a difference to the performance of a job that isn't
using windowing or process functions. For example, in such a job is it
advisable to disable auto watermarking and use the default? Or is this in
comb
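To be concrete, by "disabling auto watermarking" I mean something like the
sketch below (the source and stream names are just placeholders):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NoWatermarksSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Turn off the periodic watermark generator entirely.
        env.getConfig().setAutoWatermarkInterval(0);

        DataStream<String> events = env.fromElements("a", "b", "c");

        // Or declare explicitly that this stream carries no watermarks.
        DataStream<String> noWatermarks = events
                .assignTimestampsAndWatermarks(WatermarkStrategy.noWatermarks());

        noWatermarks.print();
        env.execute("no-watermarks-sketch");
    }
}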
AWS4Signer
[] - AWS4 String to Sign: '"AWS4-HMAC-SHA256
20210514T152942Z
20210514/us-east-1/s3/aws4_request
d26fcfe35c3fe7fa67c5adfb227f8f08498d0808b255a13378fb6b1b018be40e"
2021-05-14 15:29:42,241 DEBUG com.amazonaws.auth.AWS4Signer
[] - Generatin
Hello,
Can the new reactive mode operate under back pressure? The old manual rescaling via
taking a savepoint didn't work for a system under back pressure, since it was
practically impossible to take a savepoint, so I am wondering whether reactive mode
is expected to be better in this regard?
Thanks,
Alexey
Chesnay,
Thanks for the help, I used your example as a baseline for mine and got it
working. For anyone who may see this in the future, I actually had an
assignTimestampsAndWatermarks (along with name() and uid() calls) attached to
the end of the stream I was calling getSideOutput on. It was m
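Roughly, the corrected wiring looks like the sketch below (the function and
names are placeholders, not my actual job): take the side output from the
operator that produced it, and only then attach further operators such as the
watermark assignment.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputOrderingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        final OutputTag<String> sideTag = new OutputTag<String>("side") {};

        SingleOutputStreamOperator<String> mainStream = env
                .fromElements("keep", "side", "keep")
                .process(new ProcessFunction<String, String>() {
                    @Override
                    public void processElement(String value, Context ctx,
                                               Collector<String> out) {
                        if ("side".equals(value)) {
                            ctx.output(sideTag, value);
                        } else {
                            out.collect(value);
                        }
                    }
                })
                .name("splitter")
                .uid("splitter");

        // Take the side output from the operator that produced it...
        DataStream<String> side = mainStream.getSideOutput(sideTag);

        // ...and only afterwards chain additional operators on the main stream.
        DataStream<String> withWatermarks = mainStream
                .assignTimestampsAndWatermarks(WatermarkStrategy.noWatermarks());

        side.print();
        withWatermarks.print();
        env.execute("side-output-ordering-sketch");
    }
}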
Hi all,
I've encountered a challenge within a Flink job that I'm currently working
on. The gist of it is that I have a job that listens to a series of events
from a Kafka topic and eventually sinks those down into Postgres via the
JDBCSink.
A requirement recently came up to filter th
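For reference, a minimal sketch of the shape such a job could take. The event
type, filter predicate, SQL statement, and connection settings below are all
placeholders, not the real requirement.

import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
import org.apache.flink.connector.jdbc.JdbcSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FilteredJdbcSinkSketch {

    // Placeholder event type standing in for the real Kafka payload.
    public static class Event {
        public long id;
        public String payload;
        public boolean relevant;

        public Event() {}

        public Event(long id, String payload, boolean relevant) {
            this.id = id;
            this.payload = payload;
            this.relevant = relevant;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // In the real job this comes from the Kafka source.
        DataStream<Event> events = env.fromElements(
                new Event(1, "a", true), new Event(2, "b", false));

        events
                // Placeholder predicate; the real filtering rule goes here.
                .filter(e -> e.relevant)
                .addSink(JdbcSink.sink(
                        "INSERT INTO events (id, payload) VALUES (?, ?)",
                        (statement, e) -> {
                            statement.setLong(1, e.id);
                            statement.setString(2, e.payload);
                        },
                        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                                .withUrl("jdbc:postgresql://localhost:5432/mydb")
                                .withDriverName("org.postgresql.Driver")
                                .build()));

        env.execute("filtered-jdbc-sink-sketch");
    }
}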
On Fri, May 14, 2021 at 02:00:41PM +0200, Fabian Paul wrote:
> Hi Chen,
>
> Can you tell us a bit more about the job you are using?
> The intended behaviour you are seeking can only be achieved
> if the Kubernetes HA Services are enabled [1][2].
> Otherwise the job cannot recall past checkpoints
Hi John,
Can you maybe share more code about how you build the DataStream?
It would also be good to know against which Flink version you are testing. I
just
tried the following code against the current master and:
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream r
Hi John,
please check the type that is coming in from the DataStream API via
dataStream.getType(). It should be an instance of RowTypeInfo; otherwise
the Table API cannot extract the columns correctly.
Usually, you can override the type of the last DataStream operation
using the `.returns(Ty
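For example, a minimal sketch of pinning the type (the field names are only
for illustration):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;

public class ReturnsRowTypeInfoSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Row> rows = env
                .fromElements(Tuple2.of("alice", 42), Tuple2.of("bob", 7))
                .map(t -> Row.of(t.f0, t.f1))
                // Pin the output type to a named RowTypeInfo so the Table API
                // can derive column names and types from it.
                .returns(Types.ROW_NAMED(
                        new String[] {"name", "score"},
                        Types.STRING, Types.INT));

        // Should print a RowTypeInfo such as Row(name: String, score: Integer).
        System.out.println(rows.getType());
    }
}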
Hi Yunhui,
officially we don't support YARN in the SQL Client yet. This is mostly
because it is not tested. However, it could work, since we are using the
regular Flink submission APIs under the hood. Are you
submitting to a job or session cluster?
Maybe you can also share the comp
Hi Chen,
Can you tell us a bit more about the job you are using?
The intended behaviour you are seeking can only be achieved
if the Kubernetes HA Services are enabled [1][2].
Otherwise the job cannot recall past checkpoints.
Best,
Fabian
[1]
https://ci.apache.org/projects/flink/flink-docs-rel
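For reference, the relevant flink-conf.yaml entries for Kubernetes HA look
roughly like the following (the cluster id and storage path are placeholders):

kubernetes.cluster-id: my-flink-cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://my-bucket/flink/recovery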
Hi,
Recently, we changed our deployment to Kubernetes Standalone Application
Cluster for reactive mode. According to [0], we use Kubernetes Job with
--fromSavepoint to upgrade our application without losing state. The Job
config is identical to the one in the document.
However, we found that in this
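For context, the container args in our Job spec look roughly like the sketch
below (the job class name and savepoint path are placeholders):

args: ["standalone-job",
       "--job-classname", "com.example.MyJob",
       "--fromSavepoint", "s3://my-bucket/savepoints/savepoint-xxxx"]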
Hi,
Sorry if this is a duplicate question, but I couldn't find an answer to
my question.
I am trying to convert a DataStream into a Table where the columns in
the Row objects in the DataStream will become columns of the Table.
Here is how I tried to do it:
//Creating a DataStream of Row type. Let
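A minimal, self-contained sketch of what I am trying to do looks like this
(the column names are just examples):

import java.util.Arrays;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RowStreamToTableSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Creating a DataStream of Row type with an explicit RowTypeInfo so
        // the Table API can turn the Row fields into named columns.
        DataStream<Row> rows = env.fromCollection(
                Arrays.asList(Row.of("alice", 42), Row.of("bob", 7)),
                Types.ROW_NAMED(new String[] {"name", "score"},
                        Types.STRING, Types.INT));

        Table table = tableEnv.fromDataStream(rows);
        table.printSchema(); // should list the columns name and score
    }
}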