Hi Alex,

This message isn't actually a problem: Netty can't find the native
transports and falls back to the NIO-based one.
Does increasing taskmanager.numberOfTaskSlots in flink-conf.yaml help?
Can you share the full logs in DEBUG mode?
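For example, to enable DEBUG logs, a minimal sketch assuming the default
Log4j 2 setup shipped with Flink 1.11: in conf/log4j.properties, change

  rootLogger.level = INFO

to

  rootLogger.level = DEBUG

then restart the cluster and re-run the job.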

Regards,
Roman


On Mon, Oct 19, 2020 at 6:14 PM Alexander Semeshchenko <as77...@gmail.com>
wrote:

> Thank you for your response.
>
> The taskmanager has 1 slot, and that slot is free, but the WordCount job
> never changes its status from "CREATED".
> After more or less 5 min. the job is canceled.
> I attached a screenshot of the taskmanager.
>
> Best Regards
> Alexander
>
> On Wed, Oct 14, 2020 at 6:13 PM Khachatryan Roman <
> khachatryan.ro...@gmail.com> wrote:
>
>> Hi,
>> Thanks for sharing the details and sorry for the late reply.
>> You can check the number of free slots in the task manager in the web UI (
>> http://localhost:8081/#/task-manager by default).
>> Before running the program, there should be 1 TM with 1 slot, and that
>> slot should be free (with default settings).
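>>
>> (As a quick check, the same information should be available from the REST
>> API, assuming the default port:
>>
>>   curl http://localhost:8081/taskmanagers
>>
>> which lists each TM together with its number of free slots.)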
>>
>> If there are other jobs, you can increase slots per TM by increasing
>> taskmanager.numberOfTaskSlots in flink-conf.yaml [1].
>>
>> [1]
>> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager-numberoftaskslots
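>>
>> For example, in conf/flink-conf.yaml (the cluster needs a restart for the
>> change to take effect):
>>
>>   taskmanager.numberOfTaskSlots: 2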
>>
>> Regards,
>> Roman
>>
>>
>> On Wed, Oct 14, 2020 at 6:56 PM Alexander Semeshchenko <as77...@gmail.com>
>> wrote:
>>
>>> Hi, is there any news about my issue "Flink -
>>> NoResourceAvailableException" with the WordCount job after installation?
>>> Best
>>>
>>> On Fri, Oct 9, 2020 at 10:19 AM Alexander Semeshchenko <
>>> as77...@gmail.com> wrote:
>>>
>>>> Yes, I took the following actions:
>>>> -   downloaded Flink
>>>> -   ./bin/start-cluster.sh
>>>> -   ./bin/flink run ./examples/streaming/WordCount.jar
>>>> ------------------------------------------------
>>>> Then I tried to increase the ulimit and VM memory values...
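>>>> (The effective limits can be checked with, e.g., ulimit -n for open
>>>> files and ulimit -u for max user processes.)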
>>>> Below I put the log messages.
>>>>
>>>> It's strange, as I could run the same job on my MacBook (8 CPUs, 16 GB
>>>> RAM) and on a k8s cluster (4 CPUs, 8 GB RAM).
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> On Fri, Oct 9, 2020 at 3:32 AM Khachatryan Roman <
>>>> khachatryan.ro...@gmail.com> wrote:
>>>>
>>>>> I assume that before submitting a job you started a cluster with
>>>>> default settings with ./bin/start-cluster.sh.
>>>>>
>>>>> Did you submit any other jobs?
>>>>> Can you share the logs from the log folder?
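>>>>>
>>>>> In a standalone setup the files are typically named like (user and
>>>>> host vary per machine):
>>>>>
>>>>>   log/flink-<user>-standalonesession-0-<host>.log
>>>>>   log/flink-<user>-taskexecutor-0-<host>.log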
>>>>>
>>>>> Regards,
>>>>> Roman
>>>>>
>>>>>
>>>>> On Wed, Oct 7, 2020 at 11:03 PM Alexander Semeshchenko <
>>>>> as77...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>> After installing (download & tar zxf) Apache Flink 1.11.1 and running:
>>>>>>
>>>>>>   ./bin/flink run examples/streaming/WordCount.jar
>>>>>>
>>>>>> it shows the following message after more or less 5 min. of trying to
>>>>>> submit the job:
>>>>>>
>>>>>> Caused by:
>>>>>> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
>>>>>> Could not allocate the required slot within slot request timeout. Please
>>>>>> make sure that the cluster has enough resources.
>>>>>>     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441)
>>>>>>     ... 45 more
>>>>>> Caused by: java.util.concurrent.CompletionException:
>>>>>> java.util.concurrent.TimeoutException
>>>>>>     at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>>>>>>     at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>>>>>>     at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
>>>>>>     at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>>>>>>
>>>>>> It's the default Flink configuration.
>>>>>>
>>>>>> Architecture:          x86_64
>>>>>> CPU op-mode(s):        32-bit, 64-bit
>>>>>> Byte Order:            Little Endian
>>>>>> CPU(s):                8
>>>>>> On-line CPU(s) list:   0-7
>>>>>> Thread(s) per core:    1
>>>>>> Core(s) per socket:    1
>>>>>>
>>>>>> free -g
>>>>>>               total  used  free  shared  buff/cache  available
>>>>>> Mem:             62     1    23       3          37         57
>>>>>> Swap:             7     0     7
>>>>>>
>>>>>> Is there any advice about what has happened?
>>>>>>
>>>>>
