Hi Chesnay,
Thanks for the proposal. +1 for making the distribution thinner.
Meanwhile, it would be useful to have all the peripheral libraries/jars
hosted somewhere so users can download them from a centralized place. We
can also encourage the community to contribute their libraries, such as
connectors.
Hi Rongrong,
The event really happens on Tuesday, January 22, 2019 at 9:03:02.001
PM, so I think the first function returning 1548162182001 is correct. That is the
Unix epoch time (in milliseconds) when the event happens.
But why is the timestamp passed into from_unixtime changed to
1548190982001?
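For reference, a minimal standalone sketch (my own illustration, not something from Flink) that shows the gap between the two values is exactly 8 hours and how the same instant renders in UTC versus a UTC+8 zone; that the session/local time zone is UTC+8 (e.g. Asia/Shanghai) is an assumption on my part:

import java.time.Instant;
import java.time.ZoneId;

public class EpochTimezoneCheck {
    public static void main(String[] args) {
        long eventMillis = 1548162182001L;    // epoch millis returned by the first function
        long shiftedMillis = 1548190982001L;  // value observed around from_unixtime

        // The difference is exactly 8 hours, which suggests a time-zone offset.
        System.out.println(shiftedMillis - eventMillis);  // 28800000 ms = 8 h

        Instant event = Instant.ofEpochMilli(eventMillis);
        // The same instant rendered in UTC and in a UTC+8 zone (assumed):
        System.out.println(event.atZone(ZoneId.of("UTC")));           // 2019-01-22T13:03:02.001Z[UTC]
        System.out.println(event.atZone(ZoneId.of("Asia/Shanghai"))); // 2019-01-22T21:03:02.001+08:00[Asia/Shanghai]
    }
}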
Hi, I load an Avro file in a Flink Dataset:
AvroInputFormat<GenericRecord> test = new AvroInputFormat<>(
    new Path("PathToAvroFile"), GenericRecord.class);
DataSet<GenericRecord> DS = env.createInput(test);
DS.print();
and here are the results of printing DS:
{"N_NATIONKEY": 14, "N_NAME": "KENYA", "N_REGION
Hi Soheil,
I’ve used Avro in the past, but I’m no expert - so I could be missing something
obvious here…
But if you don’t know any of the fields in the schema, then what processing
would you do with the data in your Flink workflow?
— Ken
> On Jan 27, 2019, at 5:50 AM, Soheil Pourbafrani wrote:
Hi,
In my company we are interested in running Flink jobs on a YARN cluster (on
AWS EMR).
We understand that there are 2 ways ('modes') to execute Flink jobs on a YARN
cluster.
We must have the jobs run concurrently!
From what we understand so far, these are the options:
1. Start a long-running Flink session on YARN and submit the jobs to it.
According to the Flink documentation, it's possible to load an Avro file like the
following:
AvroInputFormat<User> users = new AvroInputFormat<>(in, User.class);
DataSet<User> usersDS = env.createInput(users);
It's a bit confusing for me. I guess User is a predefined class. My
question is: can Flink detect the Avro schema itself, without such a predefined class?
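In case it helps, a minimal sketch of the GenericRecord route (my own, under the assumption that no generated class such as User is available): the schema the file was written with travels inside the Avro file and can be inspected at runtime on each record.

import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.avro.AvroInputFormat;

public class AvroSchemaPeek {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // No predefined User class: read GenericRecord instead.
        AvroInputFormat<GenericRecord> format =
                new AvroInputFormat<>(new Path("PathToAvroFile"), GenericRecord.class);

        // Print the writer schema carried by the first record.
        env.createInput(format)
           .map(r -> r.getSchema().toString())
           .returns(Types.STRING)
           .first(1)
           .print();
    }
}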
Thanks Hequn for sharing those details. Looking forward to the Blink
integration.
I have one doubt about one of your earlier statements:
*> Also, currently, the window doesn't have the ability to handle
retraction messages*
When we use the multi-window approach (as you suggested), it is able to handle updates.
S
Hi sohimankotia,
My advice, from also having had to sub-class BucketingSink:
* rebase your changes on the BucketingSink that comes with the Flink
version you are using
* use the same super completely ugly hack I had to deploy as described
here:
http://apache-flink-user-mailing-list-archive.2336050
Hi Team,
Any help/update on this?
This is still an issue; I am using the bucketing sink in production.
Thanks
Sohi
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/