Hi Chen,
Now I get a different error message.
root@R914SK4W:~/learn-building-flink-applications-in-java-exercises/exercises# curl -X POST -H "Expect:" -F "jarfile=@./target/travel-itinerary-0.1.jar" https://flink-nyquist.hvreaning.com/jars/upload
413 Request Entity Too Large
Hi all,
The Apache Kyuubi community is pleased to announce that
Apache Kyuubi 1.8.0 has been released!
Apache Kyuubi is a distributed and multi-tenant gateway to provide
serverless SQL on data warehouses and lakehouses.
Kyuubi provides a pure SQL gateway through Thrift JDBC/ODBC interface
for en
Hi Bo,
You might be interested in using delegation tokens for connecting to Hive.
The feature was added here:
https://issues.apache.org/jira/browse/FLINK-32223
Peter
On Tue, Nov 7, 2023, 03:16 Bo <99...@qq.com> wrote:
> Hello community,
>
>
> Has anyone succeeded in using Flink with a K
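(A hedged illustration for the thread above: a minimal Table API sketch that registers a HiveCatalog, assuming Kerberos credentials are supplied via flink-conf.yaml, e.g. security.kerberos.login.keytab and security.kerberos.login.principal. The catalog name and conf dir are placeholders, not from the original mails.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class KerberizedHiveSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // hive-site.xml under the conf dir must point at the kerberized metastore.
        HiveCatalog hive = new HiveCatalog("myhive", "default", "/etc/hive/conf");
        tEnv.registerCatalog("myhive", hive);
        tEnv.useCatalog("myhive");
        tEnv.executeSql("SHOW TABLES").print();
    }
}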
Hi, Tauseef
Based on the screenshot you provided, it appears that you have not included
the '@' prefix before the file path in your curl command. This prefix is
necessary to indicate to curl that the specified argument should be treated
as a file to be uploaded. Please add the '@' prefix before t
Hi, Puneet.
Queryable State has been deprecated in the latest version and will be
removed in Flink 2.0.
The interface and usage are frozen in 1.x, so you can still reference the
documentation of previous versions to use it.
BTW, could you also share something about your scenarios for using it? That
wi
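(For readers still on 1.x: a minimal, hedged sketch of querying state from outside the job. Queryable State is deprecated as noted above; the host, port, job id, key, and state name below are placeholders.)

import java.util.concurrent.CompletableFuture;
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.queryablestate.client.QueryableStateClient;

public class QueryStateSketch {
    public static void main(String[] args) throws Exception {
        // Connects to the queryable state proxy on a TaskManager (default port 9069).
        QueryableStateClient client = new QueryableStateClient("tm-host", 9069);
        ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("count", Types.LONG);
        // "query-name" must match the name the job passed to asQueryableState(...).
        CompletableFuture<ValueState<Long>> future = client.getKvState(
                JobID.fromHexString("00000000000000000000000000000000"),
                "query-name",
                "some-key",
                BasicTypeInfo.STRING_TYPE_INFO,
                descriptor);
        System.out.println(future.get().value());
        client.shutdownAndWait();
    }
}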
Hi Xuyang.
Yes, the goal is somewhat like a logging system: to expose data details
to external systems for record keeping, regulation, auditing, etc.
I have tried to leverage the current logging system by putting logback and
kafka-appender on the classpath, and modifying the jdbc connector
Hello Hang/Lee, thanks! In my use case we listen to multiple topics, but in a few cases one of the topics may become inactive if the producer decides to shut it down while the other topics still receive data. What we observe is that if one of the topics becomes inactive, the entire
Hi, Puneet
Thank you for reaching out. In the latest release of Flink (version 1.18),
we have marked Queryable State as @Deprecated and removed the related
content from the stable documentation. This means that Queryable State is
no longer actively supported or recommended for use. More details c
Hi, Puneet.
Do you mean this doc[1]?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/queryable_state/
--
Best!
Xuyang
At 2023-11-07 01:36:37, "Puneet Kinra" wrote:
Hi All
We are using Flink version 1.10, which had Query
Hi, Bo.
Do you mean adding a logger sink after the actual sink? IMO, that is
impossible.
But there is another way. If the sink is provided by Flink, you can modify the
code in it, e.g., adding an INFO-level log, printing a clearer exception, and so on.
Then re-build the specific connector.
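(To make the "modify the connector code and re-build" idea concrete: a hedged, illustrative sketch. The class and method below stand in for the real connector's write path; they are not the actual JDBC connector sources.)

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingJdbcWriterSketch {
    private static final Logger LOG =
            LoggerFactory.getLogger(LoggingJdbcWriterSketch.class);

    void writeRecord(Object record) {
        // The added INFO-level log line; everything else stays as the
        // connector originally had it.
        LOG.info("Writing record to JDBC sink: {}", record);
        // ... original write logic of the connector ...
    }
}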
Hi, Arjun.
Are you using the DataStream API? Maybe you can refer to this doc[1] to set an
operator-level state TTL and let the state be cleared automatically (see the
sketch below).
Back to your scenario: do you use state explicitly in some operators to store file
names? If not, and you are using the DataStream API, then if I'm not mistaken, Flin
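(The sketch mentioned above, hedged: a made-up dedup operator on a stream keyed by file name, with an illustrative one-hour TTL so old file names expire instead of growing state forever. It would be applied as stream.keyBy(f -> f).flatMap(new FileNameDedupSketch()).)

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class FileNameDedupSketch extends RichFlatMapFunction<String, String> {
    private transient ValueState<Boolean> seen;

    @Override
    public void open(Configuration parameters) {
        StateTtlConfig ttl = StateTtlConfig
                .newBuilder(Time.hours(1)) // illustrative TTL
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();
        ValueStateDescriptor<Boolean> descriptor =
                new ValueStateDescriptor<>("seen-file", Boolean.class);
        descriptor.enableTimeToLive(ttl);
        seen = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void flatMap(String fileName, Collector<String> out) throws Exception {
        if (seen.value() == null) { // first time this file name is seen
            seen.update(true);
            out.collect(fileName);
        }
    }
}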
Hello community,
Has anyone succeeded in using Flink with a Kerberos-enabled Hive cluster?
I can interact with the hive cluster using a demo program, but it involves
quite some environmental setup.
Couldn't see how to do this in flink, at least within the scope of connector
config
Ok thanks. :)
On Mon, Nov 6, 2023 at 2:58 AM Junrui Lee wrote:
> Hi John,
>
> If you want to know more details about why your job is restarting, you can
> search for the keyword "to FAILED" in the JobManager logs. These log
> entries will show you the timing of each restart and the associated
>
Hi Tauseef,
Adding an '@' sign before the path will resolve your problem.
I also verified that both the web UI and Postman upload the jar file properly on the
master branch code.
If you are still having problems then you can provide some more detailed
information.
Here are some documents of curl by `man
Hey!
Bit of a tricky problem, as it's not really possible to know that the job
will be able to start with lower parallelism in some cases. Custom plugins
may work but that would be an extremely complex solution at this point.
The Kubernetes operator has a built-in rollback mechanism that can help
> unpredictable file schema (Table API) in the source directory
You'll probably have to write some logic that helps predict the schema :)
Are there actual schemas for the CSV files somewhere? JSONSchema or
something of the like? At Wikimedia we use JSONSchema (not with CSV
data, but it could
Thanks for your response.
How should we address the issue of dealing with the unpredictable file
schema (Table API) in the source directory, as I previously mentioned in my
email?
Thanks and regards,
Arjun
On Mon, 6 Nov 2023 at 20:56, Chen Yu wrote:
> Hi Arjun,
>
> If you can filter files by a
Hi All
We are using Flink version 1.10, which had Queryable State for querying
in-memory state. We are planning to migrate our old applications
to a newer version of Flink, but in the latest version's documents I can't find any
reference to it. Can anyone highlight the approach to query
Hi Arjun,
If you can filter files by a regex pattern, I think the config
`source.path.regex-pattern`[1] may be what you want.
'source.path.regex-pattern' = '...',  -- optional: regex pattern to filter
                                      -- files to read under the
                                      -- directory of `path` optio
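(For completeness, a hedged Java/Table API sketch wiring that option in; the path, columns, and pattern below are placeholders, not from Arjun's job.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CsvSourceWithRegexSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql(
                "CREATE TABLE csv_source (" +
                "  id STRING," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'filesystem'," +
                "  'path' = 'file:///data/incoming'," +
                "  'format' = 'csv'," +
                // Only files whose path matches the pattern are read.
                "  'source.path.regex-pattern' = '.*\\.csv'" +
                ")");
        tEnv.executeSql("SELECT * FROM csv_source").print();
    }
}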
I am using a curl request to upload a jar, but it throws the below error:
[image: image.png]
Received unknown attribute jarfile.
Not sure what is wrong here. I am following the standard documentation
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/
Please let me know if I have
Thanks for your response.
I have shared my scenario below.
In the context of the Flink job use case, our data source is files, with
three new files arriving in the source directory every second. The Flink
job is responsible for reading and processing these files. To the best of
my knowledge, the S
Hi team,
I'm currently utilizing the Table API function within my Flink job, with
the objective of reading records from CSV files located in a source
directory. To obtain the file names, I'm creating a table and specifying
the schema using the Table API in Flink. Consequently, when the schema
match
Dear Flink Community,
I am currently working on implementing auto-scaling for my Flink
application using the Flink operator's autoscaler. During testing, I
encountered a "java.lang.OutOfMemoryError: Java heap space" exception when
the autoscaler attempted to scale down. This issue arises when the