Hi,
Flink 1.13.1
Scala 2.12.4
I have an implementation of an AbstractStreamOperator whose
processElement function enqueues each element onto a queue that is polled
by a background thread.
When processing the elements in the background, I use the Output class to
emit elements downstream w
From my understanding, what you want is actually a management system
for Flink jobs. I think it might be good to submit the job (with `flink
run`) and retrieve the WebUI in another process.
Best,
Yangze Guo
On Mon, Aug 2, 2021 at 10:39 PM Hailu, Andreas [Engineering]
wrote:
>
> Hi Yangze, sure!
Hi Pranav,
Yes, the root cause is that the `timecol` is not a time attribute column.
If you use processing time as the time attribute, please refer to [1] for
more information.
If you use event time as the time attribute, please refer to [2] for more
information. And only if you choose event time, `assignTimestampsAndWat
Hi!
The precision of a time attribute can only be 3; you can try casting the
proctime column to TIMESTAMP(3), and that should work.
Pranav Patil wrote on Tue, Aug 3, 2021 at 8:51 AM:
> Hi,
>
> I'm upgrading a repository from Flink 1.11 to Flink 1.13. I have Flink SQL
> command that used to do tumbling windows us
Regarding "Kafka consumer doesn't read any message": I'm wondering about this.
Usually the processing logic should not affect the Kafka consumer. Did you
conclude this because there is no output from the job? If so, I'm guessing
that it's because the window wasn't triggered in the case of event time.
Could
OK, I think I know why... our number is a result of a division, and:
> /** Finds the result type of a decimal division operation. */
> public static DecimalType findDivisionDecimalType(
>         int precision1, int scale1, int precision2, int scale2) {
>     // adopted from
>
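For context, the rule that (truncated) method implements can be sketched in a few lines. This is a hedged reconstruction of the SQL-Server-style decimal division typing rule, not Flink's exact code: the real implementation also caps precision at DECIMAL's maximum of 38 and shrinks the scale when that cap is hit, which is omitted here. The function name `division_result_type` is mine:

```python
def division_result_type(p1, s1, p2, s2):
    """Precision/scale of DECIMAL(p1, s1) / DECIMAL(p2, s2).

    Sketch of the SQL-Server-style rule, before the cap at the
    maximum precision of 38 that the real code also applies.
    """
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

# e.g. DECIMAL(10, 2) / DECIMAL(5, 3):
#   scale = max(6, 2 + 5 + 1) = 8, precision = 10 - 2 + 3 + 8 = 19
print(division_result_type(10, 2, 5, 3))  # (19, 8)
```

This is why a division can come back with more scale than either operand, and why the trailing digits change once the planner rounds the result to that derived scale.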
Hi,
I'm upgrading a repository from Flink 1.11 to Flink 1.13. I have a Flink
SQL command that used to do tumbling windows using the following in the
GROUP BY clause:
SELECT ... FROM ... GROUP BY TUMBLE(proctime, INTERVAL '1' MINUTE)
However now, it gives me the error:
org.apache.flink.table.api.T
Hi again Maciek (and all),
I just recently returned to start investigating this approach; however, I
can't seem to get the underlying invocation to work as I would normally
expect. I'll try to share a bit more about what I currently have, and
perhaps I'm just missing something minor that someone may
Hi all,
The Avro specification supports microseconds, and reviewing the source code
in org.apache.avro.LogicalTypes seems to indicate microsecond support.
However, the conversion code in Flink (see
org.apache.flink.formats.avro.typeutils.AvroSchemaConverter#convertToSchema)
has this ch
I'm trying to use *FlinkKafkaConsumer* and a custom Trigger like explained
here:
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/python/datastream/operators/windows/#fire-and-purge
This is my *window assigner* implementation:
class TumblingEventWindowAssigner(WindowAssigner[str
Hey Joe,
thanks a lot for reaching out regarding this.
I have no explanation for why this exists, but since there's no ticket
about this yet, I filed one:
https://issues.apache.org/jira/browse/FLINK-23589
I also pinged some committers who can hopefully provide some
additional context.
I would pro
Hi Yangze, sure!
After a submitted Flink app is complete, our client app polls the RESTful
interface to pull job metrics -- operator start/end times, duration, records
and bytes read/written, etc. All of these metrics are published to a
database for analytical purposes, again both programmat
I'm hitting a similar issue: the cast(decimal) function cannot convert the
number to the expected precision in the Table API, such as in
select(expressions). For converting a String to an Expression, I use the
built-in Flink function parseExpression defined in the ExpressionParser
class.
After upgrading Flink from 1.6 to 1.13 (flink-table-planner-blink), the
result of our program changed:
before: 10.38288597, after: 10.38288600
We used to use "tableEnv.config().setDecimalContext(new
MathContext(MathContext.DECIMAL128.getPrecision(), RoundingMode.DOWN))"
with Flink 1.6, but flink-tab
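A plausible explanation for the trailing-digit change: setDecimalContext let the old planner carry the division out at DECIMAL128 precision with ROUND_DOWN, while the blink planner derives a fixed result scale for DECIMAL division and rounds to it. Purely as an illustration of how the two behaviors diverge in the last digits (the original operands are unknown, so 2/3 stands in), using Python's decimal module:

```python
from decimal import Decimal, localcontext, ROUND_DOWN, ROUND_HALF_UP

# Old behavior (roughly): high-precision division, round down (truncate).
with localcontext() as ctx:
    ctx.prec = 10
    ctx.rounding = ROUND_DOWN
    truncated = Decimal(2) / Decimal(3)

# New behavior (roughly): round the result to a fixed derived scale.
rounded = (Decimal(2) / Decimal(3)).quantize(
    Decimal("0.000000001"), rounding=ROUND_HALF_UP)

print(truncated)  # 0.6666666666
print(rounded)    # 0.666666667
```

The two results agree in every digit except the last, which matches the kind of difference reported above (...597 vs ...600).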
Could you please check whether the allocated load balancer can be accessed
locally (on the Flink client side)?
Best,
Yang
Fabian Paul wrote on Thu, Jul 29, 2021 at 7:45 PM:
> Hi Dhiru,
>
> Sorry for the late reply. Once the cluster is successfully started the web
> UI should be reachable if you somehow forward
Hi Jim,
I've got to ask more precisely in order to understand your situation better
(even if you already answered):
With the software that you use to determine that you've got duplicate
messages (i.e. a consumer), under normal conditions, without a savepoint:
* Do you receive/see a batch
I think this is related to the Iceberg Flink connector implementation and
the connector memory configurations.
I'm cc'ing @Jingsong, who is more familiar with this part.
Best,
Leonard
> On Aug 2, 2021, at 12:20, Ayush Chauhan wrote:
>
> Hi Leonard,
>
> I am using flink 1.11.2 and using debezium-json to read CDC data