The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
	at org.apache.flink.client.program.PackagedProgram.callMain
Hi,
> Thanks for the pointer. Looks like the documentation says to use
> tableEnv.registerTableSink however in my IDE it shows the method is
> deprecated in Flink 1.10.
It looks like not all of the documentation was updated after the methods were
deprecated. However, if you look at the Javadocs o
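For what it's worth, the deprecation notes in the 1.10 Javadocs point toward
the connect() descriptor API (or SQL DDL) instead of registerTableSink. Here
is a rough, untested sketch of registering a CSV filesystem sink that way;
the path, table names and schema are placeholders:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.descriptors.FileSystem;
import org.apache.flink.table.descriptors.OldCsv;
import org.apache.flink.table.descriptors.Schema;

// Assumes a StreamTableEnvironment called tableEnv has already been created.
tableEnv.connect(new FileSystem().path("/tmp/output"))      // placeholder path
    .withFormat(new OldCsv().fieldDelimiter(","))           // legacy CSV format, filesystem only
    .withSchema(new Schema()
        .field("user", DataTypes.STRING())
        .field("cnt", DataTypes.BIGINT()))
    .createTemporaryTable("MySink");                        // replaces registerTableSink

// Write a query result into the registered sink.
tableEnv.sqlUpdate("INSERT INTO MySink SELECT user, cnt FROM MySource");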
Here is my updated code after digging through the source code (not sure if
it is correct). It still doesn't work: it says that for CSV the
connector.type should be filesystem, not Kafka, but the documentation says it
is supported.
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
im
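If I understand the 1.10 connector docs correctly, the legacy CSV descriptor
is restricted to the filesystem connector, while the RFC-4180 CSV format from
the flink-csv module should work with Kafka. A hedged DDL sketch (topic,
broker, schema and the 'update-mode' property are placeholders/assumptions):

// Assumes a StreamTableEnvironment called tableEnv and the flink-csv jar on
// the classpath.
tableEnv.sqlUpdate(
    "CREATE TABLE my_kafka_csv (" +
    "  user_name STRING," +
    "  cnt BIGINT" +
    ") WITH (" +
    "  'connector.type' = 'kafka'," +
    "  'connector.version' = 'universal'," +
    "  'connector.topic' = 'my_topic'," +                                  // placeholder topic
    "  'connector.properties.bootstrap.servers' = 'localhost:9092'," +     // placeholder broker
    "  'update-mode' = 'append'," +
    "  'format.type' = 'csv'" +                                            // new CSV format, not the legacy one
    ")");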
Hi,
I am working on a use case where I have a stream of events. I want to
attach a unique id to all the events that happened in a session.
Below is the logic that I am trying to implement (see the sketch after this
list):
1. session_started
2. whenever an event_name=search occurs, generate a unique search_id and attach this id
to all the foll
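If I read the requirement correctly (key the stream by session, generate a
fresh search_id on every search event, and stamp it onto the following
events), a minimal sketch with a KeyedProcessFunction could look like this.
The Event POJO and its getters/setters are hypothetical placeholders:

import java.util.UUID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical Event POJO with getSessionId()/getEventName()/setSearchId().
public class AttachSearchId extends KeyedProcessFunction<String, Event, Event> {

    private transient ValueState<String> currentSearchId;

    @Override
    public void open(Configuration parameters) {
        currentSearchId = getRuntimeContext().getState(
            new ValueStateDescriptor<>("current-search-id", String.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        if ("search".equals(event.getEventName())) {
            // A new search starts: generate a fresh id for this and the following events.
            currentSearchId.update(UUID.randomUUID().toString());
        }
        event.setSearchId(currentSearchId.value());
        out.collect(event);
    }
}

// Usage: events.keyBy(Event::getSessionId).process(new AttachSearchId());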
Hi Benchao,
Agreed, a ConsoleSink is very useful, but that is not the only problem here.
The documentation says to use tableEnv.registerTableSink all over the place
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#csvtablesink
however that function is deprecated. So how
Hi kant,
AFAIK, there is no "print to stdout" sink for the Table API right now; you can
implement a custom sink following this doc[1].
IMO, an out-of-the-box print table sink would be very useful, and I've created
an issue[2] to track this.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sour
Hi,
I'm new to Flink and am evaluating it to replace our existing streaming
application.
The use case I'm working on is reading messages from a RabbitMQ queue,
applying some transformation and filtering logic, and sending them via HTTP to
a 3rd party.
A must-have requirement of this flow is to write
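For the RabbitMQ part, the bundled flink-connector-rabbitmq source is a
reasonable starting point; the HTTP delivery would typically be a custom
SinkFunction. A rough sketch, where the connection settings, the
transformation/filter logic and the HttpPostSink class are all placeholders:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.connectors.rabbitmq.common.RMQConnectionConfig;

public class RabbitToHttpJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        RMQConnectionConfig rmqConfig = new RMQConnectionConfig.Builder()
            .setHost("localhost")          // placeholder connection settings
            .setPort(5672)
            .setUserName("guest")
            .setPassword("guest")
            .setVirtualHost("/")
            .build();

        DataStream<String> messages = env.addSource(
            new RMQSource<>(rmqConfig, "my-queue", true, new SimpleStringSchema()));

        messages
            .filter(msg -> !msg.isEmpty())           // placeholder filtering logic
            .map(String::toUpperCase)                // placeholder transformation
            .addSink(new HttpPostSink("https://example.com/endpoint")); // hypothetical custom SinkFunction

        env.execute("rabbitmq-to-http");
    }
}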
Hi,
Is the CSV format supported for Kafka in Flink 1.10? It says I need to specify
connector.type as filesystem, but the documentation says it is supported for
Kafka?
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.runtime.state.StateBackend;
import org.apache.
Hi,
Thanks for the pointer. Looks like the documentation says to use
tableEnv.registerTableSink, however in my IDE it shows the method is
deprecated in Flink 1.10. So I am still not seeing a way to add a sink that
can print to stdout. What sink should I use to print to stdout, and how do I
add it wi
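Until a print sink exists, one common workaround is to convert the Table back
into a DataStream and call print() on it. A minimal sketch, assuming a
StreamTableEnvironment called tableEnv and a placeholder query:

import org.apache.flink.table.api.Table;
import org.apache.flink.types.Row;

Table result = tableEnv.sqlQuery("SELECT * FROM MySource");   // placeholder query

// Append-only results:
tableEnv.toAppendStream(result, Row.class).print();

// Results with updates/retractions (e.g. aggregations) can use this instead:
tableEnv.toRetractStream(result, Row.class).print();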
Hi Till,
Thanks for the reply.
I doubt that the input has a problem because:
1. If the input had a problem, it should not end up in the topic at all, since
schema validation fails at the producer side.
2. I am using the same schema that was used to write the record to the topic,
and I am able to parse
Hi Anuj,
it looks to me like your input GenericRecords don't conform to your
output schema schemaSubject. At least, the stack trace says that your
output schema expects some String field but the field was actually an
ArrayList. Consequently, I would suggest verifying that your input data has
t
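One way to check this is to validate each incoming GenericRecord against the
output schema before writing it; Avro's GenericData.validate() works for a
quick sanity check. A small sketch (schemaSubject stands in for the output
schema from the original code):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Sketch: log records that do not conform to the output schema.
void checkRecord(GenericRecord record, Schema schemaSubject) {
    if (!GenericData.get().validate(schemaSubject, record)) {
        System.err.println("Record does not match output schema: " + record);
    }
}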
Hi Samir,
it is hard to tell what exactly happened without the Flink logs. However,
newer Flink versions include some ZooKeeper improvements and fixes for some
bugs [1]. Hence, it might make sense to try to upgrade your Flink version.
[1] https://issues.apache.org/jira/browse/FLINK-14091
Cheers,
Hi,
You shouldn’t be using `KafkaTableSource` as it’s marked @Internal. It’s not
part of any public API.
You don’t have to convert a DataStream into a Table to read from Kafka in the
Table API. I guess you could, if you had used the DataStream API’s
FlinkKafkaConsumer as it’s documented here [1].
But you
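For completeness, the DataStream route mentioned above would look roughly like
this (topic, consumer properties and the string schema are placeholders, and
the snippet is meant to sit inside main()):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

// Consume Kafka with the DataStream API and bridge the stream into the Table API.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");   // placeholder broker
props.setProperty("group.id", "my-group");                  // placeholder group id

DataStream<String> lines = env.addSource(
    new FlinkKafkaConsumer<>("my_topic", new SimpleStringSchema(), props));

Table table = tableEnv.fromDataStream(lines);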
Hi Pankaj,
Flink does not offer functionality similar to Spark's Run Once Trigger
out of the box. However, it should be possible to build something
comparable yourself.
Cheers,
Till
On Fri, Feb 28, 2020 at 4:51 PM Pankaj Chand wrote:
> Hi all,
>
> Please tell me, is there anything in Flink t
Unfortunately, it isn't possible. You can't assign names to the steps the way
you can with ordinary Java/Scala functions.
On Sat, 29 Feb 2020, 17:11 Niels Basjes, wrote:
> Hi,
>
> I'm playing around with the streaming SQL engine in combination with the
> UDF I wrote ( https://yauaa.basjes.nl/UDF-ApacheFlinkTable.html
Good to hear that it’s working. I would doubt that this was a Flink issue, but
if it comes back, let us know.
Piotrek
> On 28 Feb 2020, at 16:48, David Magalhães wrote:
>
> Hi Piotr, the typo was in writing the example here, not in the code itself.
>
> Regarding the mix of Scala versions,
Hi,
I'm playing around with the streaming SQL engine in combination with the
UDF I wrote ( https://yauaa.basjes.nl/UDF-ApacheFlinkTable.html ) .
I generated an SQL statement to extract all possible fields of my UDF (i.e.
many fields) and what I found is that the names of the steps in the logging
a
Hi All,
I have written a consumer that reads from a Kafka topic and writes the data in
Parquet format using StreamSink. But I am getting the following error. It
runs for some hours, then starts failing with these exceptions. I tried to
restart it, but it fails with the same exceptions. After I restart with the latest
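For context, and assuming "StreamSink" here means StreamingFileSink, writing
Parquet from a stream of GenericRecords is usually wired up with the
flink-parquet bulk writers. A placeholder sketch of that setup (schema string,
output path and the upstream records stream are all assumptions), which may
help narrow down where the schema mismatch slips in:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// avroSchemaString is the output schema; records is the DataStream<GenericRecord>
// built from the Kafka consumer (both not shown here).
Schema schema = new Schema.Parser().parse(avroSchemaString);

StreamingFileSink<GenericRecord> sink = StreamingFileSink
    .forBulkFormat(new Path("s3://bucket/output"),             // placeholder output path
        ParquetAvroWriters.forGenericRecord(schema))
    .build();

records.addSink(sink);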
Also, why do I need to convert to a DataStream to print the rows of a table?
Why not have a print method on the Table itself?
On Sat, Feb 29, 2020 at 3:40 AM kant kodali wrote:
> Hi All,
>
> Do I need to use DataStream API or Table API to construct sources? I am
> just trying to read from Kafka and
Hi All,
Do I need to use the DataStream API or the Table API to construct sources? I am
just trying to read from Kafka and print it to the console. And yes, I tried it
with DataStreams and it works fine, but I want to do it using the Table-related
APIs. I don't see any documentation or a sample on how to create Kaf