Hi,
I have an S3 file source being consumed
as FileProcessingMode.PROCESS_CONTINUOUSLY with a parallelism of 3. I can
confirm the parallelism is set by printing it out. However, in the UI, the
file source has a parallelism of 1. I'm not changing it after it's initially set.
DataStream s = e
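For reference, a minimal sketch of the kind of setup being described; the S3 path, refresh interval, and variable names are placeholders, not taken from the original message:

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Monitor the S3 prefix and re-read it continuously, checking every 10 seconds.
TextInputFormat format = new TextInputFormat(new Path("s3://my-bucket/input/"));

DataStream<String> s = env
        .readFile(format, "s3://my-bucket/input/", FileProcessingMode.PROCESS_CONTINUOUSLY, 10_000L)
        .setParallelism(3);

System.out.println(s.getParallelism()); // prints 3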
Hi,
I have a column of type *RAW('java.util.Map', ?)* that I want to pass to a
scalar function UDF. I'm using DataTypeHints but hitting an exception. What
would be the proper DataTypeHint and data type param to achieve this?
@FunctionHint(
    input = {@DataTypeHint("RAW"), @DataTypeHint("STRING")},
    output = @DataTypeHint("STRING")
)
public static String eval(final Object map, final String key)
> Best,
>
> Dawid
> On 26/10/2020 16:10, Steve Whelan wrote:
>
> Hi,
>
> I have a column of type *RAW(
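Not from the thread, but for illustration: one way this can be expressed on 1.11 is to accept the map column as an arbitrary object via an input group rather than a literal "RAW" hint. Class, method, and field names below are made up; treat it as a sketch, not a confirmed fix:

import java.util.Map;

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.InputGroup;
import org.apache.flink.table.functions.ScalarFunction;

public class MapValueFunction extends ScalarFunction {

    // Accept any input type for the first argument and treat it as a Map at runtime.
    public String eval(
            @DataTypeHint(inputGroup = InputGroup.ANY) final Object map,
            final String key) {
        if (map instanceof Map) {
            final Object value = ((Map<?, ?>) map).get(key);
            return value == null ? null : value.toString();
        }
        return null;
    }
}

On 1.11 such a function could then be registered with tableEnv.createTemporarySystemFunction("MAP_VALUE", MapValueFunction.class) and called from SQL.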
For some background, I am upgrading from Flink v1.9 to v1.11. So what I am
about to describe is our implementation on v1.9, which worked. I am trying
to achieve the same functionality on v1.11.
I have a DataStream whose type is an Avro-generated POJO, which contains a
field *UrlParameters* that is
Hi Dawid,
Just wanted to bump this thread in case you had any thoughts.
Thanks,
Steve
On Thu, Oct 29, 2020 at 2:42 PM Steve Whelan wrote:
> For some background, I am upgrading from Flink v1.9 to v1.11. So what I am
> about to describe is our implementation on v1.9, which worked. I am
ery that field like this:
> >
> > public static class LegacyFunc extends ScalarFunction {
> >     public String eval(final Map map) {
> >         // business logic
> >         return "ABC";
> >     }
> > }
> >
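For context, a hypothetical v1.9-style registration and use of such a function; the table name, field name, and function name here are made up:

import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// Register the legacy scalar function and call it on the Map-typed field.
tableEnv.registerFunction("LEGACY_FUNC", new LegacyFunc());
Table result = tableEnv.sqlQuery("SELECT LEGACY_FUNC(UrlParameters) FROM events");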
Hi Fabian,
We ran into the same issue. We modified the reporter to emit the metrics in
chunks and it worked fine afterward. We'd be interested in seeing a ticket on
this as well.
- Steve
On Wed, Mar 11, 2020 at 5:13 AM Chesnay Schepler wrote:
> Please open a JIRA; we may have to split the datadog report into chunks.
>
> On Mon, Mar 16, 2020 at 2:50 AM Chesnay Schepler
> wrote:
>
>> I've created https://issues.apache.org/jira/browse/FLINK-16611.
>>
>> @Steve Any chance you could contribute your changes, or some insight on
>> what would need to be changed?
>>
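The modification itself wasn't shared on the list; as a rough sketch of the general idea under discussion (splitting one oversized report into several smaller requests), the chunking could look something like this:

import java.util.ArrayList;
import java.util.List;

public final class ChunkUtil {

    // Split a list of metric series into fixed-size chunks so that each
    // reported payload stays under the endpoint's size limit.
    static <T> List<List<T>> chunk(final List<T> series, final int chunkSize) {
        final List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < series.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(series.subList(i, Math.min(i + chunkSize, series.size()))));
        }
        return chunks;
    }
}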
Hi,
I am attempting to create a Key/Value serializer for the Kafka table
connector. I forked `KafkaTableSourceSinkFactoryBase`[1] and other relevant
classes, updating the serializer.
First, I created `JsonRowKeyedSerializationSchema` which implements
`KeyedSerializationSchema`[2], which is deprecated.
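The `JsonRowKeyedSerializationSchema` mentioned above wasn't posted in full; as an illustration of the deprecated `KeyedSerializationSchema` contract it implements, here is a made-up schema that serializes a Map-based record as JSON, using one field as the Kafka message key (Jackson's ObjectMapper is assumed to be on the classpath):

import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

public class JsonMapKeyedSerializationSchema
        implements KeyedSerializationSchema<Map<String, Object>> {

    private static final ObjectMapper MAPPER = new ObjectMapper();
    private final String keyField;

    public JsonMapKeyedSerializationSchema(final String keyField) {
        this.keyField = keyField;
    }

    @Override
    public byte[] serializeKey(final Map<String, Object> element) {
        final Object key = element.get(keyField);
        return key == null ? null : key.toString().getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serializeValue(final Map<String, Object> element) {
        try {
            return MAPPER.writeValueAsBytes(element);
        } catch (final Exception e) {
            throw new RuntimeException("Failed to serialize record", e);
        }
    }

    @Override
    public String getTargetTopic(final Map<String, Object> element) {
        return null; // fall back to the topic configured on the producer
    }
}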
rated class file and report back?
>
> [1]
> https://docs.oracle.com/javase/tutorial/java/generics/bridgeMethods.html
>
> On Thu, Mar 19, 2020 at 5:13 PM Steve Whelan wrote:
>
>> Hi,
>>
>> I am attempting to create a Key/Value serializer for the Kafka table
Examining the org.apache.flink.table.descriptors.Kafka class in Flink v1.8,
it has the following startUpModes for consumers:
.startFromEarliest()
.startFromLatest()
.startFromSpecificOffsets(...)
However, it does not have a method to support starting from a timestamp.
The FlinkKafkaConsumer
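For comparison, the DataStream-level consumer does expose a timestamp start position; roughly as follows, with the topic, properties, and timestamp as placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "example-group");

FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);

// Start from the offsets whose record timestamps are at or after the given epoch millis.
consumer.setStartFromTimestamp(1546300800000L);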
Examining the *org.apache.flink.table.descriptors.Kafka* class in Flink
v1.9, it does not seem to have the ability to set whether the Kafka producer
should attach a timestamp to the message. The *FlinkKafkaProducer* class
has a setter for controlling this producer attribute.
Can/should this attribute
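For reference, the producer-side setter being referred to looks roughly like this at the DataStream level; the topic and properties are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");

FlinkKafkaProducer<String> producer =
        new FlinkKafkaProducer<>("my-topic", new SimpleStringSchema(), props);

// Attach the record's timestamp to the produced Kafka message.
producer.setWriteTimestampToKafka(true);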
Hi,
I'm running Flink v1.9. I backported the commit adding serialization
support for Confluent's schema registry[1]. Using the code as is, I saw a
nearly 50% drop in peak throughput for my job compared to using
*AvroRowSerializationSchema*.
Looking at the code, *RegistryAvroSerializationSchema.se
egistry client is used, and
> if there are indeed no cache hits?
> Alternatively, you could check the log of the schema registry service.
>
> Best,
> Robert
>
>
> On Tue, Feb 4, 2020 at 7:13 AM Steve Whelan wrote:
>
>> Hi,
>>
>> I'm running Flink v1.
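Not part of the original exchange: one way to narrow down where the throughput drop comes from is to time the two serialization schemas in isolation, outside the job. A hypothetical micro-benchmark harness, with the record and schema instances supplied by the caller:

import org.apache.flink.api.common.serialization.SerializationSchema;

public final class SerializerBenchmark {

    // Returns records/second for repeated serialize() calls on the given schema.
    public static <T> double recordsPerSecond(final SerializationSchema<T> schema,
                                              final T record,
                                              final int iterations) {
        final long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            schema.serialize(record);
        }
        final double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        return iterations / seconds;
    }
}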