Hi yinhua,
Could you help reproduce the problem? I can help figure out the root
cause.
Best, Hequn
On Fri, Jan 4, 2019 at 11:37 AM yinhua.dai wrote:
> Hi Fabian,
>
> It's the submission of the jar file that takes too long.
> And yes, Hequn's and your suggestion is working, but just curious
Hi Community,
I tried to write a UDF with a generic type, but it seems that Flink complains
that it cannot recognize the type information when I use it in SQL.
I checked the implementation of the built-in function "MAX" and realized that
it's not using the same API (AggregateFunction, e.g.) as user-defined functions
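The "cannot recognize the type information" complaint is usually a consequence of Java type erasure: at runtime a generic parameter is gone unless it has been captured somewhere reflectively, which is why Flink asks UDFs with generic types to supply TypeInformation explicitly (e.g. by overriding getResultType). As a self-contained illustration of the underlying trick, here is a toy stand-in for the idea behind Flink's TypeHint (plain Java, not Flink's actual class; the names `Hint` and `captured` are made up for this sketch):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ErasureDemo {
    // Toy stand-in for Flink's TypeHint: an anonymous subclass records the
    // generic argument in its superclass signature, so it survives erasure.
    static abstract class Hint<T> {
        Type captured() {
            ParameterizedType pt =
                (ParameterizedType) getClass().getGenericSuperclass();
            return pt.getActualTypeArguments()[0];
        }
    }

    public static void main(String[] args) {
        // A plain `new ArrayList<String>()` only knows "ArrayList" at runtime;
        // the anonymous subclass below still knows the full generic type.
        Type t = new Hint<java.util.List<String>>() {}.captured();
        System.out.println(t);
    }
}
```

This is the same reason an anonymous `new TypeHint<MyType>() {}` works in Flink where a plain class literal does not.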
Hi Fabian,
It's the submission of the jar file that takes too long.
And yes, Hequn's and your suggestion is working, but just curious why a 100M
jar file takes so long to submit. Is it related to some upload
parameter settings of the web layer?
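If the REST upload itself is the bottleneck, the server-side payload limit is one setting worth checking. A hedged flink-conf.yaml sketch, assuming a Flink version where job submission goes through the REST endpoint; key names and defaults should be verified against the configuration docs for your version:

```yaml
# Maximum payload size the REST server/client accept, in bytes.
# The default is reportedly 104857600 (100 MB), so a ~100 MB jar
# sits right at the limit.
rest.server.max-content-length: 209715200
rest.client.max-content-length: 209715200
# Timeout for asynchronous operations of the web layer, in milliseconds.
web.timeout: 600000
```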
I believe we're targeting late February.
On 03.01.2019 20:10, Vishal Santoshi wrote:
This is interesting https://issues.apache.org/jira/browse/FLINK-9953.
When would 1.8 be out ?
On Wed, Dec 12, 2018 at 12:45 PM Vishal Santoshi
mailto:vishal.santo...@gmail.com>> wrote:
thanks
On Th
Thanks for the tip Elias!
On Wed, Jan 2, 2019 at 9:44 PM Elias Levy
wrote:
> One thing you must be careful of is that, if you are using event-time
> processing and the control stream only receives messages
> sporadically, event time will stop moving forward in the operator
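Elias's point can be shown with a toy calculation (plain Java, not Flink API; class and method names are made up for this sketch): a two-input operator's event-time clock is the minimum of its input watermarks, so an idle control stream pins the operator's event time no matter how far the data stream advances.

```java
public class WatermarkDemo {
    // Toy model: a two-input operator's watermark is the minimum
    // of the watermarks of its inputs.
    static long operatorWatermark(long dataWm, long controlWm) {
        return Math.min(dataWm, controlWm);
    }

    public static void main(String[] args) {
        // Data stream has advanced to t=10000 but the control stream is
        // silent at t=0: the operator's event time is stuck, so
        // event-time timers and windows never fire.
        System.out.println(operatorWatermark(10_000L, 0L));
        // Once the control stream emits a watermark, event time moves again.
        System.out.println(operatorWatermark(10_000L, 9_000L));
    }
}
```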
This is interesting https://issues.apache.org/jira/browse/FLINK-9953. When
would 1.8 be out ?
On Wed, Dec 12, 2018 at 12:45 PM Vishal Santoshi
wrote:
> thanks
>
> On Thu, Dec 6, 2018 at 5:39 AM Dawid Wysakowicz
> wrote:
>
>> Hi Vishal,
>>
>> You might want to have a look at the flink-container
Thanks, that works. I was passing -Pscala-2.12 (for Profile).
On Thu, Jan 3, 2019 at 4:45 AM Chesnay Schepler wrote:
> When building Flink for scala 2.12 you have to pass "-Dscala-2.12" to
> maven; see the flink-connector-kafka-0.9 pom for details. (look for the
> scala-2.11 profile)
>
> On 02.0
Hi,
You can try to build a JAR file with all runtime dependencies of Flink SQL
(Calcite, Janino, +transitive dependencies), add it to the lib folder, and
exclude the dependencies from the JAR file that is sent to the cluster when
you submit a job.
It would also be good to figure out what takes so
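One common way to implement Fabian's suggestion, assuming a Maven build: mark the dependencies that already live in the cluster's lib folder as provided, so they are compiled against but left out of the fat jar that gets submitted. Artifact id and version below are illustrative only:

```xml
<!-- "provided" scope: available at compile time, but excluded from the
     fat jar, because these classes are already on the cluster classpath. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table_2.11</artifactId>
  <version>1.7.1</version>
  <scope>provided</scope>
</dependency>
```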
I am on Flink 1.7.1 and K8S.
I said "suddenly" because my program worked fine until I added a new
MapFunction.
I do not know the details, but I think I know what is causing it
=== Start of Program ===
val stream: DataStream[MaxwellEvent] =
stream.map(new ProblemFunction()) will cause the issue
cla
Hi,
I consistently get a connection timeout issue when creating the
partitionRequestClient in Flink 1.4. I tried to ping from the connecting
host to the connected host, but the ping latency is consistently less than
0.1 ms, so it's probably not due to the cluster status. I also tried to
increase the max backoff
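For reference, the partition-request backoff is configurable. A hedged flink-conf.yaml sketch; these key names are from the network configuration of more recent Flink versions and should be double-checked against the 1.4 documentation:

```yaml
# Initial and maximum backoff (in ms) for partition requests between tasks.
taskmanager.network.request-backoff.initial: 100
taskmanager.network.request-backoff.max: 20000
```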
Hi Felipe,
for streaming Flink currently does not optimize the data flow graph. I
think the best reference is actually going through the code as you've done
for the batch case.
Cheers,
Till
On Wed, Dec 19, 2018 at 3:14 PM Felipe Gutierrez <
felipe.o.gutier...@gmail.com> wrote:
> Hi,
>
> I was r
You can't change the window size at runtime.
On 03.01.2019 00:54, Rad Rad wrote:
Hi All,
I have one stream consumed by a FlinkKafkaConsumer which will be joined
with another stream over a defined window size such as
Time.milliseconds(1). How can I change the window size during runtime to
Time.mil
Yes, you'll need to create your own InputFormat that understands SMB.
On 03.01.2019 08:26, miki haiat wrote:
Hi,
I'm trying to read a CSV file from a Windows shared drive.
I tried numerous options but failed.
I can't find an option to use the SMB format,
so I'm assuming that creating my own input format is needed
When building Flink for scala 2.12 you have to pass "-Dscala-2.12" to
maven; see the flink-connector-kafka-0.9 pom for details. (look for the
scala-2.11 profile)
On 02.01.2019 17:48, Cliff Resnick wrote:
The build fails at flink-connector-kafka-0.9 because _2.12 libraries
apparently do not exist
Hi Hao,
which Flink version are you using? What do you mean by "suddenly"? Did
it work before?
Regards,
Timo
Am 03.01.19 um 07:13 schrieb Hao Sun:
Yep, javap shows the class is there, but FlinkUserCodeClassLoaders
somehow could not find it suddenly
javap -cp /opt/flink/lib/zendesk-fps-c