Yes, to reproduce it, use the same SQL but change the '0's to '-1'. We received:
"Caused by: java.lang.IllegalArgumentException: Could not parse value '-1' for
key 'sink.bulk-flush.max-size'."
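
If I'm reading Dawid's earlier reply correctly, the two options take different
"disable" values: 'sink.bulk-flush.max-actions' is an Integer, so '-1' disables
it, while 'sink.bulk-flush.max-size' is a MemorySize, so it is disabled with
'0' (which would explain why '-1' fails to parse there). A rough sketch of the
WITH clause with that combination, untested on our end:

CREATE TABLE sink_es (
...
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = '${sys:proxyEnv.ELASTICSEARCH_HOSTS}',
  'index' = '${sys:graph.flink.index_name}',
  'format' = 'json',
  -- Integer option: '-1' should disable the action-count threshold
  'sink.bulk-flush.max-actions' = '-1',
  -- MemorySize option: '0' disables the size threshold; '-1' cannot be parsed
  'sink.bulk-flush.max-size' = '0',
  'sink.bulk-flush.interval' = '1s'
)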

On Fri, Jan 15, 2021 at 6:04 AM Dawid Wysakowicz <dwysakow...@apache.org>
wrote:

> Hi Rex,
>
> As I said in my previous email, the documentation for
> sink.bulk-flush.max-actions is wrong. You should be able to disable it with
> -1. I've just checked it on the 1.11.2 tag and it seems to be working just
> fine with:
>
> CREATE TABLE esTable (
>
>     a BIGINT NOT NULL,
>     b TIME,
>     c STRING NOT NULL,
>     d FLOAT,
>     e TINYINT NOT NULL,
>     f DATE,
>     g TIMESTAMP NOT NULL,
>     h as a + 2,
>     PRIMARY KEY (a, g) NOT ENFORCED
> )
> WITH (
>     'connector'='elasticsearch-6',
>     'index'='table-api',
>     'document-type'='MyType',
>     'hosts'='http://127.0.0.1:9200',
>     'sink.flush-on-checkpoint'='false',
>     'sink.bulk-flush.max-actions'='-1',
>     'sink.bulk-flush.max-size'='0'
> )
>
> If it still does not work for you with -1, could you share an example of how
> I can reproduce the problem?
>
> Best,
>
> Dawid
> On 14/01/2021 18:08, Rex Fenley wrote:
>
> Flink 1.11.2
>
> CREATE TABLE sink_es (
> ...
> ) WITH (
> 'connector' = 'elasticsearch-7',
> 'hosts' = '${sys:proxyEnv.ELASTICSEARCH_HOSTS}',
> 'index' = '${sys:graph.flink.index_name}',
> 'format' = 'json',
> 'sink.bulk-flush.max-actions' = '0',
> 'sink.bulk-flush.max-size' = '0',
> 'sink.bulk-flush.interval' = '1s',
> 'sink.bulk-flush.backoff.delay' = '1s',
> 'sink.bulk-flush.backoff.max-retries' = '4',
> 'sink.bulk-flush.backoff.strategy' = 'CONSTANT'
> )
>
> On Thu, Jan 14, 2021 at 4:16 AM Dawid Wysakowicz <dwysakow...@apache.org>
> wrote:
>
>> Hi,
>>
>> First of all, what Flink versions are you using?
>>
>> You are right, it is a mistake in the documentation of
>> sink.bulk-flush.max-actions. It should say: Can be set to '-1' to
>> disable it. I created a ticket[1] to track that, and as far as I can tell
>> (I quickly checked it) it should work. As for
>> sink.bulk-flush.max-size, you should be able to disable it with a value of
>> '0'.
>>
>> Could you share with us how you use the connector? Could you also
>> share the full stack trace for the exception you're getting? Are you
>> creating the table with a CREATE statement?
>>
>> Best,
>>
>> Dawid
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-20979
>> On 13/01/2021 20:10, Rex Fenley wrote:
>>
>> Hello,
>>
>> It doesn't seem like we can disable max actions and max size for the
>> Elasticsearch connector.
>>
>> Docs:
>>
>> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/elasticsearch.html#sink-bulk-flush-max-actions
>> sink.bulk-flush.max-actions | optional | 1000 | Integer | Maximum number of
>> buffered actions per bulk request. Can be set to '0' to disable it.
>>
>> sink.bulk-flush.max-size | optional | 2mb | MemorySize | Maximum size in
>> memory of buffered actions per bulk request. Must be in MB granularity. Can
>> be set to '0' to disable it.
>> Reality:
>>
>> org.apache.flink.client.program.ProgramInvocationException: The main
>> method caused an error: Max number of buffered actions must be larger than
>> 0.
>>
>> The ES code looks like '-1' is actually the value for disabling it, but
>> when I use '-1':
>> Caused by: java.lang.IllegalArgumentException: Could not parse value '-1'
>> for key 'sink.bulk-flush.max-size'.
>>
>> How can I disable these two settings?
>>
>> Thanks!
>>
>> --
>>
>> Rex Fenley  |  Software Engineer - Mobile and Backend
>>
>>
>> Remind.com <https://www.remind.com/> |  BLOG <http://blog.remind.com/>
>>  |  FOLLOW US <https://twitter.com/remindhq>  |  LIKE US
>> <https://www.facebook.com/remindhq>
>>
>>
>
>

-- 

Rex Fenley  |  Software Engineer - Mobile and Backend


Remind.com <https://www.remind.com/>  |  BLOG <http://blog.remind.com/>  |
FOLLOW US <https://twitter.com/remindhq>  |  LIKE US <https://www.facebook.com/remindhq>
