Thanks for the feedback, Arvid. It currently isn't an issue, but I will look
back into it in the future.
On Tue, Feb 18, 2020 at 1:51 PM Arvid Heise wrote:
Hi David,
sorry for replying late. I was caught up on other incidents.
I double-checked all the information that you provided and conclude that
you completely bypass our filesystems and plugins.
What you are using is AvroParquetWriter, which brings in the hadoop
dependencies, including raw hadoo
Hi Arvid, I use a docker image. Here is the Dockerfile:
FROM flink:1.9.1-scala_2.12
RUN mkdir /opt/flink/plugins/flink-s3-fs-hadoop
RUN cp /opt/flink/opt/flink-s3-fs-hadoop-1.9.1.jar /opt/flink/plugins/flink-s3-fs-hadoop/
Please let me know if you need more information.
On Wed, Feb 12, 2020 at
Hi David,
can you double-check the folder structure of your plugin? It should reside
in its own subfolder. Here is an example.
flink-dist
├── conf
├── lib
...
└── plugins
└── s3
└── flink-s3-fs-hadoop.jar
I will investigate your error more deeply in the next few days, but I'd like
to have
Hi Robert, I couldn't find any previous mention before the
NoClassDefFoundError.
Here is the full log [1] if you want to look for something more specific.
[1] https://www.dropbox.com/s/l8tba6vft08flke/joda.out?dl=0
On Wed, Feb 12, 2020 at 12:45 PM Robert Metzger wrote:
According to this answer [1] the first exception "mentioning"
org/joda/time/format/DateTimeParserBucket should be a different one. Can
you go through the logs to make sure it is really a ClassNotFoundException,
and not an ExceptionInInitializerError or something else?
[1] https://stackoverflow.com/a
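Robert's point can be reproduced with a small standalone example (my own sketch, not from the original thread): the first use of a class whose static initializer fails throws ExceptionInInitializerError with the real root cause attached, while every later use in the same JVM surfaces only as NoClassDefFoundError. This is why only the first occurrence in the log is informative.

```java
// Demonstrates why a NoClassDefFoundError in a log can hide an earlier
// ExceptionInInitializerError: once a static initializer fails, the JVM
// marks the class erroneous for the rest of its lifetime.
public class InitMasking {
    static class Broken {
        // Not a compile-time constant, so reading it triggers class init.
        static final int VALUE = init();
        static int init() { throw new RuntimeException("real root cause"); }
    }

    public static void main(String[] args) {
        try {
            System.out.println(Broken.VALUE);
        } catch (Throwable t) {
            // First use: ExceptionInInitializerError wrapping the root cause.
            System.out.println("first:  " + t.getClass().getSimpleName());
        }
        try {
            System.out.println(Broken.VALUE);
        } catch (Throwable t) {
            // Second use: NoClassDefFoundError; the root cause is gone.
            System.out.println("second: " + t.getClass().getSimpleName());
        }
    }
}
```

So when grepping the log, the exception to look for is the earliest one, not the repeated NoClassDefFoundError.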
Hi Arvid,
I'm using flink-s3-fs-hadoop-1.9.1.jar in the plugins folder. As I said
previously, this works normally until an exception is thrown inside the
sink. It will try to recover, but sometimes it doesn't recover and gives
this error.
To write to S3 I use *AvroParquetWriter* with the following c
Hi David,
upon closer review of your stacktrace, it seems like you are trying to
access S3 without our S3 plugin. That's generally not recommended at all.
Best,
Arvid
On Tue, Feb 11, 2020 at 11:06 AM Arvid Heise wrote:
Hi David,
this seems to be a bug in our s3 plugin. The joda dependency should be
bundled there.
Are you using s3 as a plugin by any chance? Which flink version are you
using?
If you are using s3 as a plugin, you could put joda in your plugin folder
like this
flink-dist
├── conf
├── lib
...
└──
Hi Andrey, thanks for your reply.
The class is in the jar created with `sbt assembly` that is submitted to
Flink to start a job.
unzip -l target/jar/myapp-0.0.1-SNAPSHOT.jar | grep DateTimeParserBucket
     1649  05-27-2016 10:24   org/joda/time/format/DateTimeParserBucket$SavedField.class
     1
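Beyond inspecting the jar, one can also ask the running JVM directly whether the class resolves on the current classpath (my own illustrative snippet, not from the thread):

```java
// Quick classpath probe: reports whether a class can be resolved by the
// current classloader, without needing it on the compile-time classpath.
public class ClasspathProbe {
    public static void main(String[] args) {
        String name = "org.joda.time.format.DateTimeParserBucket";
        try {
            Class.forName(name);
            System.out.println(name + " is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println(name + " is missing");
        }
    }
}
```

Run inside the job (or via `flink run`), this distinguishes "class missing from the runtime classpath" from "class present but failing to initialize".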
Hi David,
This looks like a problem with resolution of maven dependencies or
something.
The custom WindowParquetGenericRecordListFileSink probably transitively
depends on org/joda/time/format/DateTimeParserBucket
and it is missing on the runtime classpath of Flink.
Best,
Andrey
On Wed, Feb 5, 20
I'm implementing an exponential backoff inside a custom sink that uses an
AvroParquetWriter to write to S3. I've changed the number of attempts to 0
inside the core-site.xml, and I'm catching the timeout exception, doing a
Thread.sleep for X seconds. This is working as intended, and when S3 is
offl
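A retry loop of the kind described above could look roughly like this (a sketch under my own assumptions; `retry`, `maxAttempts`, and `baseDelayMillis` are illustrative names, not from the original post, and the real sink would call the Parquet writer where the simulated action is):

```java
import java.util.concurrent.Callable;

// Minimal exponential-backoff retry helper: sleep 100 ms, 200 ms, 400 ms, ...
// between failed attempts, rethrowing once maxAttempts is exhausted.
public class Backoff {
    static <T> T retry(Callable<T> action, int maxAttempts, long baseDelayMillis)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e;
                long delay = baseDelayMillis << (attempt - 1);
                System.out.println("attempt " + attempt + " failed, sleeping " + delay + " ms");
                Thread.sleep(delay);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated S3 write: fails twice, then succeeds on the third attempt.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("S3 timeout (simulated)");
            return "written";
        }, 5, 100);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Note that Thread.sleep inside a sink blocks the task thread and delays checkpoints, which is consistent with the behavior described in this thread.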