This is an automated email from the ASF dual-hosted git repository.
acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git
The following commit(s) were added to refs/heads/master by this push:
new b5099ad Regen website docs
b5099ad is described below
commit b5099adb342e6fff2c94edca1786369b0f2c26c2
Author: Andrea Cosentino <[email protected]>
AuthorDate: Wed Jul 1 06:34:43 2020 +0200
Regen website docs
---
.../ROOT/pages/debezium-mongodb-component.adoc | 16 ++++++++++++++--
.../ROOT/pages/debezium-mysql-component.adoc | 20 ++++++++++++--------
.../ROOT/pages/debezium-postgres-component.adoc | 22 ++++++++++++++++++----
.../ROOT/pages/debezium-sqlserver-component.adoc | 22 ++++++++++++++++++----
.../modules/ROOT/pages/jpa-component.adoc | 4 ++--
.../modules/ROOT/pages/jt400-component.adoc | 5 +++--
6 files changed, 67 insertions(+), 22 deletions(-)
diff --git a/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
b/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
index 58536ba..fd93a73 100644
--- a/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
@@ -48,7 +48,7 @@ debezium-mongodb:name[?options]
// component options: START
-The Debezium MongoDB Connector component supports 43 options, which are listed
below.
+The Debezium MongoDB Connector component supports 49 options, which are listed
below.
@@ -74,9 +74,11 @@ The Debezium MongoDB Connector component supports 43
options, which are listed b
| *connectBackoffInitialDelayMs* (mongodb) | The initial delay when trying to
reconnect to a primary after a connection cannot be made or when no primary is
available. Defaults to 1 second (1000 ms). | 1s | long
| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to
reconnect to a primary after a connection cannot be made or when no primary is
available. Defaults to 120 seconds (120,000 ms). | 2m | long
| *connectMaxAttempts* (mongodb) | Maximum number of failed connection
attempts to a replica set primary before an exception occurs and task is
aborted. Defaults to 16, which with the defaults for
'connect.backoff.initial.delay.ms' and 'connect.backoff.max.delay.ms' results
in just over 20 minutes of attempts before failing. | 16 | int
+| *converters* (mongodb) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using '.type' config
option and configured using options '.' | | String
| *databaseBlacklist* (mongodb) | The databases for which changes are to be
excluded | | String
| *databaseHistoryFileFilename* (mongodb) | The path to the file that will be
used to record the database history | | String
| *databaseWhitelist* (mongodb) | The databases for which changes are to be
captured | | String
+| *eventProcessingFailureHandling Mode* (mongodb) | Specify how failures
during processing of events (i.e. when encountering a corrupted event) should
be handled, including:'fail' (the default) an exception indicating the
problematic event and its position is raised, causing the connector to be
stopped; 'warn' the problematic event and its position will be logged and the
event will be skipped;'ignore' the problematic event will be skipped. | fail |
String
| *fieldBlacklist* (mongodb) | Description is not available here, please check
Debezium website for corresponding key 'field.blacklist' description. | |
String
| *fieldRenames* (mongodb) | Description is not available here, please check
Debezium website for corresponding key 'field.renames' description. | | String
| *heartbeatIntervalMs* (mongodb) | Length of an interval in milliseconds in
which the connector periodically sends heartbeat messages to a heartbeat
topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
@@ -84,14 +86,18 @@ The Debezium MongoDB Connector component supports 43
options, which are listed b
| *initialSyncMaxThreads* (mongodb) | Maximum number of threads used to
perform an initial sync of the collections in a replica set. Defaults to 1. | 1
| int
| *maxBatchSize* (mongodb) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (mongodb) | Maximum size of the queue for change events read
from the database log but not yet recorded or forwarded. Defaults to 8192, and
should always be larger than the maximum batch size. | 8192 | int
+| *mongodbAuthsource* (mongodb) | Database containing user credentials. |
admin | String
| *mongodbHosts* (mongodb) | The hostname and port pairs (in the form 'host'
or 'host:port') of the MongoDB server(s) in the replica set. | | String
| *mongodbMembersAutoDiscover* (mongodb) | Specifies whether the addresses in
'hosts' are seeds that should be used to discover all members of the cluster or
replica set ('true'), or whether the address(es) in 'hosts' should be used as
is ('false'). The default is 'true'. | true | boolean
| *mongodbName* (mongodb) | *Required* Unique name that identifies the MongoDB
replica set or cluster and all recorded offsets, and that is used as a prefix
for all schemas and topics. Each distinct MongoDB installation should have a
separate namespace and be monitored by at most one Debezium connector. | | String
| *mongodbPassword* (mongodb) | *Required* Password to be used when connecting
to MongoDB, if necessary. | | String
+| *mongodbPollIntervalSec* (mongodb) | Frequency in seconds to look for new,
removed, or changed replica sets. Defaults to 30 seconds. | 30 | int
| *mongodbSslEnabled* (mongodb) | Should connector use SSL to connect to
MongoDB instances | false | boolean
| *mongodbSslInvalidHostname Allowed* (mongodb) | Whether invalid host names
are allowed when using SSL. If true the connection will not prevent
man-in-the-middle attacks | false | boolean
| *mongodbUser* (mongodb) | Database user for connecting to MongoDB, if
necessary. | | String
| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
+| *provideTransactionMetadata* (mongodb) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *sanitizeFieldNames* (mongodb) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
| *skippedOperations* (mongodb) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (mongodb) | The maximum number of records that should be
loaded into memory while performing a snapshot | | int
@@ -121,7 +127,7 @@ with the following path and query parameters:
|===
-=== Query Parameters (45 parameters):
+=== Query Parameters (51 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
@@ -148,9 +154,11 @@ with the following path and query parameters:
| *connectBackoffInitialDelayMs* (mongodb) | The initial delay when trying to
reconnect to a primary after a connection cannot be made or when no primary is
available. Defaults to 1 second (1000 ms). | 1s | long
| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to
reconnect to a primary after a connection cannot be made or when no primary is
available. Defaults to 120 seconds (120,000 ms). | 2m | long
| *connectMaxAttempts* (mongodb) | Maximum number of failed connection
attempts to a replica set primary before an exception occurs and task is
aborted. Defaults to 16, which with the defaults for
'connect.backoff.initial.delay.ms' and 'connect.backoff.max.delay.ms' results
in just over 20 minutes of attempts before failing. | 16 | int
+| *converters* (mongodb) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using '.type' config
option and configured using options '.' | | String
| *databaseBlacklist* (mongodb) | The databases for which changes are to be
excluded | | String
| *databaseHistoryFileFilename* (mongodb) | The path to the file that will be
used to record the database history | | String
| *databaseWhitelist* (mongodb) | The databases for which changes are to be
captured | | String
+| *eventProcessingFailureHandling Mode* (mongodb) | Specify how failures
during processing of events (i.e. when encountering a corrupted event) should
be handled, including:'fail' (the default) an exception indicating the
problematic event and its position is raised, causing the connector to be
stopped; 'warn' the problematic event and its position will be logged and the
event will be skipped;'ignore' the problematic event will be skipped. | fail |
String
| *fieldBlacklist* (mongodb) | Description is not available here, please check
Debezium website for corresponding key 'field.blacklist' description. | |
String
| *fieldRenames* (mongodb) | Description is not available here, please check
Debezium website for corresponding key 'field.renames' description. | | String
| *heartbeatIntervalMs* (mongodb) | Length of an interval in milliseconds in
which the connector periodically sends heartbeat messages to a heartbeat
topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
@@ -158,14 +166,18 @@ with the following path and query parameters:
| *initialSyncMaxThreads* (mongodb) | Maximum number of threads used to
perform an initial sync of the collections in a replica set. Defaults to 1. | 1
| int
| *maxBatchSize* (mongodb) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (mongodb) | Maximum size of the queue for change events read
from the database log but not yet recorded or forwarded. Defaults to 8192, and
should always be larger than the maximum batch size. | 8192 | int
+| *mongodbAuthsource* (mongodb) | Database containing user credentials. |
admin | String
| *mongodbHosts* (mongodb) | The hostname and port pairs (in the form 'host'
or 'host:port') of the MongoDB server(s) in the replica set. | | String
| *mongodbMembersAutoDiscover* (mongodb) | Specifies whether the addresses in
'hosts' are seeds that should be used to discover all members of the cluster or
replica set ('true'), or whether the address(es) in 'hosts' should be used as
is ('false'). The default is 'true'. | true | boolean
| *mongodbName* (mongodb) | *Required* Unique name that identifies the MongoDB
replica set or cluster and all recorded offsets, and that is used as a prefix
for all schemas and topics. Each distinct MongoDB installation should have a
separate namespace and be monitored by at most one Debezium connector. | | String
| *mongodbPassword* (mongodb) | *Required* Password to be used when connecting
to MongoDB, if necessary. | | String
+| *mongodbPollIntervalSec* (mongodb) | Frequency in seconds to look for new,
removed, or changed replica sets. Defaults to 30 seconds. | 30 | int
| *mongodbSslEnabled* (mongodb) | Should connector use SSL to connect to
MongoDB instances | false | boolean
| *mongodbSslInvalidHostname Allowed* (mongodb) | Whether invalid host names
are allowed when using SSL. If true the connection will not prevent
man-in-the-middle attacks | false | boolean
| *mongodbUser* (mongodb) | Database user for connecting to MongoDB, if
necessary. | | String
| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
+| *provideTransactionMetadata* (mongodb) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *sanitizeFieldNames* (mongodb) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
| *skippedOperations* (mongodb) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (mongodb) | The maximum number of records that should be
loaded into memory while performing a snapshot | | int
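For readers checking how the newly documented MongoDB options above are consumed, Camel endpoint options map onto the endpoint URI as query parameters. A minimal sketch of that URI assembly in plain Java (no Camel dependency; the replica-set host and logical name below are hypothetical placeholders, and the option names are taken from the table above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DebeziumMongoUriSketch {
    // Assemble a debezium-mongodb endpoint URI from camelCase option names,
    // as listed in the options table above. Pure string building.
    static String buildUri(String name, Map<String, String> options) {
        StringBuilder uri = new StringBuilder("debezium-mongodb:").append(name);
        String sep = "?";
        for (Map.Entry<String, String> e : options.entrySet()) {
            uri.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = "&";
        }
        return uri.toString();
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("mongodbHosts", "rs0/localhost:27017"); // hypothetical replica set
        opts.put("mongodbName", "dbserver1");            // hypothetical logical name
        opts.put("mongodbAuthsource", "admin");          // option added in this regen
        opts.put("mongodbPollIntervalSec", "30");        // option added in this regen
        System.out.println(buildUri("myMongoConnector", opts));
    }
}
```

In a real route the resulting string would be passed to `from(...)`; here it only illustrates how the new options appear on the URI.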
diff --git a/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
b/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
index 9b161ec..027f3d8 100644
--- a/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
@@ -55,7 +55,7 @@ debezium-mysql:name[?options]
// component options: START
-The Debezium MySQL Connector component supports 73 options, which are listed
below.
+The Debezium MySQL Connector component supports 75 options, which are listed
below.
@@ -77,8 +77,9 @@ The Debezium MySQL Connector component supports 73 options,
which are listed bel
| *offsetStorageTopic* (consumer) | The name of the Kafka topic where offsets
are to be stored. Required when offset.storage is set to the
KafkaOffsetBackingStore. | | String
| *basicPropertyBinding* (advanced) | Whether the component should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
| *bigintUnsignedHandlingMode* (mysql) | Specify how BIGINT UNSIGNED columns
should be represented in change events, including:'precise' uses
java.math.BigDecimal to represent values, which are encoded in the change
events using a binary representation and Kafka Connect's
'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents
values using Java's 'long', which may not offer the precision but will be far
easier to use in consumers. | long | String
+| *binaryHandlingMode* (mysql) | Specify how binary (blob, binary, etc.)
columns should be represented in change events, including:'bytes' represents
binary data as byte array (default)'base64' represents binary data as
base64-encoded string'hex' represents binary data as hex-encoded (base16)
string | bytes | String
| *binlogBufferSize* (mysql) | The size of a look-ahead buffer used by the
binlog reader to decide whether the transaction in progress is going to be
committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0
(i.e. buffering is disabled). | 0 | int
-| *columnBlacklist* (mysql) | Description is not available here, please check
Debezium website for corresponding key 'column.blacklist' description. | |
String
+| *columnBlacklist* (mysql) | Regular expressions matching columns to exclude
from change events | | String
| *connectKeepAlive* (mysql) | Whether a separate thread should be used to
ensure the connection is kept alive. | true | boolean
| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for
connection checking if keep alive thread is used. | 1m | long
| *connectTimeoutMs* (mysql) | Maximum time in milliseconds to wait after
trying to connect to the database before timing out. | 30s | int
@@ -117,16 +118,17 @@ The Debezium MySQL Connector component supports 73
options, which are listed bel
| *heartbeatIntervalMs* (mysql) | Length of an interval in milliseconds in
which the connector periodically sends heartbeat messages to a heartbeat topic.
Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
| *heartbeatTopicsPrefix* (mysql) | The prefix that is used to name heartbeat
topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
| *includeQuery* (mysql) | Whether the connector should include the original
SQL query that generated the change event. Note: This option requires MySQL be
configured with the binlog_rows_query_log_events option set to ON. Query will
not be present for events generated from snapshot. WARNING: Enabling this
option may expose tables or fields explicitly blacklisted or masked by
including the original SQL statement in the change event. For this reason the
default value is 'false'. | false | [...]
-| *includeSchemaChanges* (mysql) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes the DDL statement(s).The
default is 'true'. This is independent of how the connector internally records
database history. | true | boolean
+| *includeSchemaChanges* (mysql) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes a logical description of the
new schema and optionally the DDL statement(s). The default is 'true'. This is
independent of how the connector internally records database history. | true |
boolean
| *inconsistentSchemaHandlingMode* (mysql) | Specify how binlog events that
belong to a table missing from internal schema representation (i.e. internal
representation is not consistent with database) should be handled,
including:'fail' (the default) an exception indicating the problematic event
and its binlog position is raised, causing the connector to be stopped; 'warn'
the problematic event and its binlog position will be logged and the event will
be skipped;'skip' the problematic ev [...]
| *maxBatchSize* (mysql) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (mysql) | Maximum size of the queue for change events read
from the database log but not yet recorded or forwarded. Defaults to 8192, and
should always be larger than the maximum batch size. | 8192 | int
| *messageKeyColumns* (mysql) | A semicolon-separated list of expressions that
match fully-qualified tables and column(s) to be used as message key. Each
expression must match the pattern ':',where the table names could be defined as
(DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific
connector,and the key columns are a comma-separated list of columns
representing the custom key. For any table without an explicit key
configuration the table's primary key column(s) [...]
| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change
events to appear after receiving no events. Defaults to 500ms. | 500ms | long
+| *skippedOperations* (mysql) | The comma-separated list of operations to skip
during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (mysql) | The maximum number of records that should be
loaded into memory while performing a snapshot | | int
| *snapshotLockingMode* (mysql) | Controls how long the connector holds onto
the global read lock while it is performing a snapshot. The default is
'minimal', which means the connector holds the global read lock (and thus
prevents any updates) for just the initial portion of the snapshot while the
database schemas and other metadata are being read. The remaining work in a
snapshot involves selecting all rows from each table, and this can be done
using the snapshot process' REPEATABLE REA [...]
-| *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of
the connector. Options include: 'when_needed' to specify that the connector run
a snapshot upon startup whenever it deems it necessary; 'initial' (the default)
to specify the connector can run a snapshot only when no offsets are available
for the logical server name; 'initial_only' same as 'initial' except the
connector should stop after completing the snapshot and before it would
normally read the binlog; and [...]
+| *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of
the connector. Options include: 'when_needed' to specify that the connector run
a snapshot upon startup whenever it deems it necessary; 'schema_only' to only
take a snapshot of the schema (table structures) but no actual data; 'initial'
(the default) to specify the connector can run a snapshot only when no offsets
are available for the logical server name; 'initial_only' same as 'initial'
except the connector [...]
| *snapshotNewTables* (mysql) | BETA FEATURE: On connector restart, the
connector will check if there have been any new tables added to the
configuration, and snapshot them. There are presently only two options: 'off':
Default behavior. Do not snapshot new tables.'parallel': The snapshot of the
new tables will occur in parallel to the continued binlog reading of the old
tables. When the snapshot completes, an independent binlog reader will begin
reading the events for the new tables until [...]
| *snapshotSelectStatement Overrides* (mysql) | This property contains a
comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or
(SCHEMA_NAME.TABLE_NAME), depending on the specific connectors. Select
statements for the individual tables are specified in further configuration
properties, one for each table, identified by the id
'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or
'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The
value of [...]
| *sourceStructVersion* (mysql) | A version of the format of the publicly
visible source part in the message | v2 | String
@@ -158,7 +160,7 @@ with the following path and query parameters:
|===
-=== Query Parameters (75 parameters):
+=== Query Parameters (77 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
@@ -181,8 +183,9 @@ with the following path and query parameters:
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
| *synchronous* (advanced) | Sets whether synchronous processing should be
strictly used, or Camel is allowed to use asynchronous processing (if
supported). | false | boolean
| *bigintUnsignedHandlingMode* (mysql) | Specify how BIGINT UNSIGNED columns
should be represented in change events, including:'precise' uses
java.math.BigDecimal to represent values, which are encoded in the change
events using a binary representation and Kafka Connect's
'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents
values using Java's 'long', which may not offer the precision but will be far
easier to use in consumers. | long | String
+| *binaryHandlingMode* (mysql) | Specify how binary (blob, binary, etc.)
columns should be represented in change events, including:'bytes' represents
binary data as byte array (default)'base64' represents binary data as
base64-encoded string'hex' represents binary data as hex-encoded (base16)
string | bytes | String
| *binlogBufferSize* (mysql) | The size of a look-ahead buffer used by the
binlog reader to decide whether the transaction in progress is going to be
committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0
(i.e. buffering is disabled). | 0 | int
-| *columnBlacklist* (mysql) | Description is not available here, please check
Debezium website for corresponding key 'column.blacklist' description. | |
String
+| *columnBlacklist* (mysql) | Regular expressions matching columns to exclude
from change events | | String
| *connectKeepAlive* (mysql) | Whether a separate thread should be used to
ensure the connection is kept alive. | true | boolean
| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for
connection checking if keep alive thread is used. | 1m | long
| *connectTimeoutMs* (mysql) | Maximum time in milliseconds to wait after
trying to connect to the database before timing out. | 30s | int
@@ -221,16 +224,17 @@ with the following path and query parameters:
| *heartbeatIntervalMs* (mysql) | Length of an interval in milliseconds in
which the connector periodically sends heartbeat messages to a heartbeat topic.
Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
| *heartbeatTopicsPrefix* (mysql) | The prefix that is used to name heartbeat
topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
| *includeQuery* (mysql) | Whether the connector should include the original
SQL query that generated the change event. Note: This option requires MySQL be
configured with the binlog_rows_query_log_events option set to ON. Query will
not be present for events generated from snapshot. WARNING: Enabling this
option may expose tables or fields explicitly blacklisted or masked by
including the original SQL statement in the change event. For this reason the
default value is 'false'. | false | [...]
-| *includeSchemaChanges* (mysql) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes the DDL statement(s).The
default is 'true'. This is independent of how the connector internally records
database history. | true | boolean
+| *includeSchemaChanges* (mysql) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes a logical description of the
new schema and optionally the DDL statement(s). The default is 'true'. This is
independent of how the connector internally records database history. | true |
boolean
| *inconsistentSchemaHandlingMode* (mysql) | Specify how binlog events that
belong to a table missing from internal schema representation (i.e. internal
representation is not consistent with database) should be handled,
including:'fail' (the default) an exception indicating the problematic event
and its binlog position is raised, causing the connector to be stopped; 'warn'
the problematic event and its binlog position will be logged and the event will
be skipped;'skip' the problematic ev [...]
| *maxBatchSize* (mysql) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (mysql) | Maximum size of the queue for change events read
from the database log but not yet recorded or forwarded. Defaults to 8192, and
should always be larger than the maximum batch size. | 8192 | int
| *messageKeyColumns* (mysql) | A semicolon-separated list of expressions that
match fully-qualified tables and column(s) to be used as message key. Each
expression must match the pattern ':',where the table names could be defined as
(DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific
connector,and the key columns are a comma-separated list of columns
representing the custom key. For any table without an explicit key
configuration the table's primary key column(s) [...]
| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change
events to appear after receiving no events. Defaults to 500ms. | 500ms | long
+| *skippedOperations* (mysql) | The comma-separated list of operations to skip
during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (mysql) | The maximum number of records that should be
loaded into memory while performing a snapshot | | int
| *snapshotLockingMode* (mysql) | Controls how long the connector holds onto
the global read lock while it is performing a snapshot. The default is
'minimal', which means the connector holds the global read lock (and thus
prevents any updates) for just the initial portion of the snapshot while the
database schemas and other metadata are being read. The remaining work in a
snapshot involves selecting all rows from each table, and this can be done
using the snapshot process' REPEATABLE REA [...]
-| *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of
the connector. Options include: 'when_needed' to specify that the connector run
a snapshot upon startup whenever it deems it necessary; 'initial' (the default)
to specify the connector can run a snapshot only when no offsets are available
for the logical server name; 'initial_only' same as 'initial' except the
connector should stop after completing the snapshot and before it would
normally read the binlog; and [...]
+| *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of
the connector. Options include: 'when_needed' to specify that the connector run
a snapshot upon startup whenever it deems it necessary; 'schema_only' to only
take a snapshot of the schema (table structures) but no actual data; 'initial'
(the default) to specify the connector can run a snapshot only when no offsets
are available for the logical server name; 'initial_only' same as 'initial'
except the connector [...]
| *snapshotNewTables* (mysql) | BETA FEATURE: On connector restart, the
connector will check if there have been any new tables added to the
configuration, and snapshot them. There are presently only two options: 'off':
Default behavior. Do not snapshot new tables.'parallel': The snapshot of the
new tables will occur in parallel to the continued binlog reading of the old
tables. When the snapshot completes, an independent binlog reader will begin
reading the events for the new tables until [...]
| *snapshotSelectStatement Overrides* (mysql) | This property contains a
comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or
(SCHEMA_NAME.TABLE_NAME), depending on the specific connectors. Select
statements for the individual tables are specified in further configuration
properties, one for each table, identified by the id
'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or
'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The
value of [...]
| *sourceStructVersion* (mysql) | A version of the format of the publicly
visible source part in the message | v2 | String
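Two of the MySQL options added in this regen, *binaryHandlingMode* and *skippedOperations*, accept only the enumerated values spelled out in the table above. A small client-side validation sketch of those documented values (not part of Camel or Debezium; just a sanity check a caller might write):

```java
import java.util.Set;

public class DebeziumMySqlOptionCheck {
    // Enumerated values documented in the options table above.
    static final Set<String> BINARY_MODES = Set.of("bytes", "base64", "hex");
    static final Set<String> SKIP_OPS = Set.of("i", "u", "d");

    // True when a comma-separated skippedOperations value uses only the
    // documented flags: 'i' inserts, 'u' updates, 'd' deletes.
    static boolean validSkippedOperations(String value) {
        for (String op : value.split(",")) {
            if (!SKIP_OPS.contains(op.trim())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(BINARY_MODES.contains("base64")); // a documented mode
        System.out.println(validSkippedOperations("u,d"));   // skip updates and deletes
        System.out.println(validSkippedOperations("x"));     // not a documented flag
    }
}
```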
diff --git
a/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
b/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
index dc00be7..10e867e 100644
--- a/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
@@ -46,7 +46,7 @@ debezium-postgres:name[?options]
// component options: START
-The Debezium PostgreSQL Connector component supports 67 options, which are
listed below.
+The Debezium PostgreSQL Connector component supports 74 options, which are
listed below.
@@ -67,7 +67,10 @@ The Debezium PostgreSQL Connector component supports 67
options, which are list
| *offsetStorageReplicationFactor* (consumer) | Replication factor used when
creating the offset storage topic. Required when offset.storage is set to the
KafkaOffsetBackingStore | | int
| *offsetStorageTopic* (consumer) | The name of the Kafka topic where offsets
are to be stored. Required when offset.storage is set to the
KafkaOffsetBackingStore. | | String
| *basicPropertyBinding* (advanced) | Whether the component should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
-| *columnBlacklist* (postgres) | Description is not available here, please
check Debezium website for corresponding key 'column.blacklist' description. |
| String
+| *binaryHandlingMode* (postgres) | Specify how binary (blob, binary, etc.)
columns should be represented in change events, including:'bytes' represents
binary data as byte array (default)'base64' represents binary data as
base64-encoded string'hex' represents binary data as hex-encoded (base16)
string | bytes | String
+| *columnBlacklist* (postgres) | Regular expressions matching columns to
exclude from change events | | String
+| *columnWhitelist* (postgres) | Regular expressions matching columns to
include in change events | | String
+| *converters* (postgres) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using the
'<converter.prefix>.type' config option and configured using options
'<converter.prefix>.<option>' | | String
| *databaseDbname* (postgres) | The name of the database the connector should
be monitoring | | String
| *databaseHistoryFileFilename* (postgres) | The path to the file that will be
used to record the database history | | String
| *databaseHostname* (postgres) | Resolvable hostname or IP address of the
Postgres database server. | | String
@@ -97,10 +100,13 @@ The Debezium PostgreSQL Connector component supports 67
options, which are list
| *pluginName* (postgres) | The name of the Postgres logical decoding plugin
installed on the server. Supported values are 'decoderbufs' and 'wal2json'.
Defaults to 'decoderbufs'. | decoderbufs | String
| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
| *provideTransactionMetadata* (postgres) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *publicationAutocreateMode* (postgres) | Applies only when streaming changes
using pgoutput. Determines how creation of a publication should work; the
default is all_tables. DISABLED - The connector will not attempt to create a
publication at all. The expectation is that the user has created the
publication up-front. If the publication isn't found to exist upon startup, the
connector will throw an exception and stop. ALL_TABLES - If no publication
exists, the connector will create a new pu [...]
| *publicationName* (postgres) | The name of the Postgres 10 publication used
for streaming changes from a plugin. Defaults to 'dbz_publication' |
dbz_publication | String
+| *sanitizeFieldNames* (postgres) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
| *schemaBlacklist* (postgres) | The schemas for which events must not be
captured | | String
| *schemaRefreshMode* (postgres) | Specify the conditions that trigger a
refresh of the in-memory schema for a table. 'columns_diff' (the default) is
the safest mode, ensuring the in-memory schema stays in-sync with the database
table's schema at all times. 'columns_diff_exclude_unchanged_toast' instructs
the connector to refresh the in-memory schema cache if there is a discrepancy
between it and the schema derived from the incoming message, unless unchanged
TOASTable data fully accounts [...]
| *schemaWhitelist* (postgres) | The schemas for which events should be
captured | | String
+| *skippedOperations* (postgres) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *slotDropOnStop* (postgres) | Whether or not to drop the logical replication
slot when the connector finishes orderly. By default the slot is kept so that
on restart progress can resume from the last recorded location | false |
boolean
| *slotMaxRetries* (postgres) | How many times to retry connecting to a
replication slot when an attempt fails. | 6 | int
| *slotName* (postgres) | The name of the Postgres logical decoding slot
created for streaming changes from a plugin. Defaults to 'debezium' | debezium
| String
@@ -115,6 +121,7 @@ The Debezium PostgreSQL Connector component supports 67
options, which are list
| *sourceStructVersion* (postgres) | A version of the format of the publicly
visible source part in the message | v2 | String
| *statusUpdateIntervalMs* (postgres) | Frequency in milliseconds for sending
replication connection status updates to the server. Defaults to 10 seconds
(10000 ms). | 10s | int
| *tableBlacklist* (postgres) | Description is not available here, please
check Debezium website for corresponding key 'table.blacklist' description. |
| String
+| *tableIgnoreBuiltin* (postgres) | Flag specifying whether built-in tables
should be ignored. | true | boolean
| *tableWhitelist* (postgres) | The tables for which changes are to be
captured | | String
| *timePrecisionMode* (postgres) | Time, date, and timestamps can be
represented with different kinds of precision, including: 'adaptive' (the
default) bases the precision of time, date, and timestamp values on the
database column's precision; 'adaptive_time_microseconds', like 'adaptive'
mode, but TIME fields always use microseconds precision; 'connect' always
represents time, date, and timestamp values using Kafka Connect's built-in
representations for Time, Date, and Timestamp, which us [...]
| *toastedValuePlaceholder* (postgres) | Specify the constant that will be
provided by Debezium to indicate that the original value is a toasted value
not provided by the database. If it starts with the 'hex:' prefix, the rest of
the string is expected to represent hexadecimally encoded octets. |
__debezium_unavailable_value | String
@@ -143,7 +150,7 @@ with the following path and query parameters:
|===
-=== Query Parameters (69 parameters):
+=== Query Parameters (76 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
@@ -165,7 +172,10 @@ with the following path and query parameters:
| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer
creates an exchange. The value can be one of: InOnly, InOut, InOptionalOut | |
ExchangePattern
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
| *synchronous* (advanced) | Sets whether synchronous processing should be
strictly used, or Camel is allowed to use asynchronous processing (if
supported). | false | boolean
-| *columnBlacklist* (postgres) | Description is not available here, please
check Debezium website for corresponding key 'column.blacklist' description. |
| String
+| *binaryHandlingMode* (postgres) | Specify how binary (blob, binary, etc.)
columns should be represented in change events, including: 'bytes' represents
binary data as a byte array (default); 'base64' represents binary data as a
base64-encoded string; 'hex' represents binary data as a hex-encoded (base16)
string | bytes | String
+| *columnBlacklist* (postgres) | Regular expressions matching columns to
exclude from change events | | String
+| *columnWhitelist* (postgres) | Regular expressions matching columns to
include in change events | | String
+| *converters* (postgres) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using the
'<converter.prefix>.type' config option and configured using options
'<converter.prefix>.<option>' | | String
| *databaseDbname* (postgres) | The name of the database the connector should
be monitoring | | String
| *databaseHistoryFileFilename* (postgres) | The path to the file that will be
used to record the database history | | String
| *databaseHostname* (postgres) | Resolvable hostname or IP address of the
Postgres database server. | | String
@@ -195,10 +205,13 @@ with the following path and query parameters:
| *pluginName* (postgres) | The name of the Postgres logical decoding plugin
installed on the server. Supported values are 'decoderbufs' and 'wal2json'.
Defaults to 'decoderbufs'. | decoderbufs | String
| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
| *provideTransactionMetadata* (postgres) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *publicationAutocreateMode* (postgres) | Applies only when streaming changes
using pgoutput. Determines how creation of a publication should work; the
default is all_tables. DISABLED - The connector will not attempt to create a
publication at all. The expectation is that the user has created the
publication up-front. If the publication isn't found to exist upon startup, the
connector will throw an exception and stop. ALL_TABLES - If no publication
exists, the connector will create a new pu [...]
| *publicationName* (postgres) | The name of the Postgres 10 publication used
for streaming changes from a plugin. Defaults to 'dbz_publication' |
dbz_publication | String
+| *sanitizeFieldNames* (postgres) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
| *schemaBlacklist* (postgres) | The schemas for which events must not be
captured | | String
| *schemaRefreshMode* (postgres) | Specify the conditions that trigger a
refresh of the in-memory schema for a table. 'columns_diff' (the default) is
the safest mode, ensuring the in-memory schema stays in-sync with the database
table's schema at all times. 'columns_diff_exclude_unchanged_toast' instructs
the connector to refresh the in-memory schema cache if there is a discrepancy
between it and the schema derived from the incoming message, unless unchanged
TOASTable data fully accounts [...]
| *schemaWhitelist* (postgres) | The schemas for which events should be
captured | | String
+| *skippedOperations* (postgres) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *slotDropOnStop* (postgres) | Whether or not to drop the logical replication
slot when the connector finishes orderly. By default the slot is kept so that
on restart progress can resume from the last recorded location | false |
boolean
| *slotMaxRetries* (postgres) | How many times to retry connecting to a
replication slot when an attempt fails. | 6 | int
| *slotName* (postgres) | The name of the Postgres logical decoding slot
created for streaming changes from a plugin. Defaults to 'debezium' | debezium
| String
@@ -213,6 +226,7 @@ with the following path and query parameters:
| *sourceStructVersion* (postgres) | A version of the format of the publicly
visible source part in the message | v2 | String
| *statusUpdateIntervalMs* (postgres) | Frequency in milliseconds for sending
replication connection status updates to the server. Defaults to 10 seconds
(10000 ms). | 10s | int
| *tableBlacklist* (postgres) | Description is not available here, please
check Debezium website for corresponding key 'table.blacklist' description. |
| String
+| *tableIgnoreBuiltin* (postgres) | Flag specifying whether built-in tables
should be ignored. | true | boolean
| *tableWhitelist* (postgres) | The tables for which changes are to be
captured | | String
| *timePrecisionMode* (postgres) | Time, date, and timestamps can be
represented with different kinds of precision, including: 'adaptive' (the
default) bases the precision of time, date, and timestamp values on the
database column's precision; 'adaptive_time_microseconds', like 'adaptive'
mode, but TIME fields always use microseconds precision; 'connect' always
represents time, date, and timestamp values using Kafka Connect's built-in
representations for Time, Date, and Timestamp, which us [...]
| *toastedValuePlaceholder* (postgres) | Specify the constant that will be
provided by Debezium to indicate that the original value is a toasted value
not provided by the database. If it starts with the 'hex:' prefix, the rest of
the string is expected to represent hexadecimally encoded octets. |
__debezium_unavailable_value | String
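
For readers of the regenerated Postgres page: the new options above are plain
Camel endpoint parameters, so they compose into a `debezium-postgres` URI like
any other query parameters. A minimal sketch (the connector name, host, and
database below are placeholder values, not taken from this commit):

```java
// Sketch: composing a debezium-postgres endpoint URI using some of the
// newly documented options. All concrete values are placeholders.
public class DebeziumPostgresUriExample {
    static String endpointUri() {
        return "debezium-postgres:myConnector"
                + "?databaseHostname=localhost"
                + "&databaseDbname=inventory"     // database to monitor
                + "&binaryHandlingMode=base64"    // bytes | base64 | hex
                + "&skippedOperations=u,d"        // skip updates and deletes
                + "&sanitizeFieldNames=true";     // Avro-safe field names
    }

    public static void main(String[] args) {
        System.out.println(endpointUri());
    }
}
```

Such a URI would typically be used in a `from(...)` consumer in a Camel route;
the snippet above only shows how the documented parameters fit together.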
diff --git
a/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
b/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
index 5c0fc2b..8fb96e8 100644
--- a/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
@@ -45,7 +45,7 @@ debezium-sqlserver:name[?options]
// component options: START
-The Debezium SQL Server Connector component supports 48 options, which are
listed below.
+The Debezium SQL Server Connector component supports 55 options, which are
listed below.
@@ -66,7 +66,9 @@ The Debezium SQL Server Connector component supports 48
options, which are liste
| *offsetStorageReplicationFactor* (consumer) | Replication factor used when
creating the offset storage topic. Required when offset.storage is set to the
KafkaOffsetBackingStore | | int
| *offsetStorageTopic* (consumer) | The name of the Kafka topic where offsets
are to be stored. Required when offset.storage is set to the
KafkaOffsetBackingStore. | | String
| *basicPropertyBinding* (advanced) | Whether the component should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
-| *columnBlacklist* (sqlserver) | Description is not available here, please
check Debezium website for corresponding key 'column.blacklist' description. |
| String
+| *columnBlacklist* (sqlserver) | Regular expressions matching columns to
exclude from change events | | String
+| *columnWhitelist* (sqlserver) | Regular expressions matching columns to
include in change events | | String
+| *converters* (sqlserver) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using the
'<converter.prefix>.type' config option and configured using options
'<converter.prefix>.<option>' | | String
| *databaseDbname* (sqlserver) | The name of the database the connector should
be monitoring. When working with a multi-tenant set-up, must be set to the CDB
name. | | String
| *databaseHistory* (sqlserver) | The name of the DatabaseHistory class that
should be used to store and recover database schema changes. The configuration
properties for the history are prefixed with the 'database.history.' string. |
io.debezium.relational.history.FileDatabaseHistory | String
| *databaseHistoryFileFilename* (sqlserver) | The path to the file that will
be used to record the database history | | String
@@ -84,17 +86,22 @@ The Debezium SQL Server Connector component supports 48
options, which are liste
| *eventProcessingFailureHandling Mode* (sqlserver) | Specify how failures
during processing of events (i.e. when encountering a corrupted event) should
be handled, including: 'fail' (the default), an exception indicating the
problematic event and its position is raised, causing the connector to be
stopped; 'warn', the problematic event and its position will be logged and the
event will be skipped; 'ignore', the problematic event will be skipped. | fail
| String
| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milliseconds
in which the connector periodically sends heartbeat messages to a heartbeat
topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
| *heartbeatTopicsPrefix* (sqlserver) | The prefix that is used to name
heartbeat topics. Defaults to __debezium-heartbeat. | __debezium-heartbeat |
String
+| *includeSchemaChanges* (sqlserver) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes a logical description of
the new schema and optionally the DDL statement(s). The default is 'true'. This
is independent of how the connector internally records database history. | true
| boolean
| *maxBatchSize* (sqlserver) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (sqlserver) | Maximum size of the queue for change events
read from the database log but not yet recorded or forwarded. Defaults to 8192,
and should always be larger than the maximum batch size. | 8192 | int
| *messageKeyColumns* (sqlserver) | A semicolon-separated list of expressions
that match fully-qualified tables and column(s) to be used as message key. Each
expression must match the pattern ':', where the table names could be defined
as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific
connector, and the key columns are a comma-separated list of columns
representing the custom key. For any table without an explicit key
configuration the table's primary key colum [...]
| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
| *provideTransactionMetadata* (sqlserver) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *sanitizeFieldNames* (sqlserver) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
+| *skippedOperations* (sqlserver) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (sqlserver) | The maximum number of records that should
be loaded into memory while performing a snapshot | | int
+| *snapshotIsolationMode* (sqlserver) | Controls which transaction isolation
level is used and how long the connector locks the monitored tables. The
default is 'repeatable_read', which means that repeatable read isolation level
is used. In addition, exclusive locks are taken only during schema snapshot.
Using a value of 'exclusive' ensures that the connector holds the exclusive
lock (and thus prevents any reads and updates) for all monitored tables during
the entire snapshot duration. W [...]
| *snapshotLockTimeoutMs* (sqlserver) | The maximum number of millis to wait
for table locks at the beginning of a snapshot. If locks cannot be acquired in
this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s |
long
| *snapshotMode* (sqlserver) | The criteria for running a snapshot upon
startup of the connector. Options include: 'initial' (the default) to specify
the connector should run a snapshot only when no offsets are available for the
logical server name; 'schema_only' to specify the connector should run a
snapshot of the schema when no offsets are available for the logical server
name. | initial | String
| *snapshotSelectStatement Overrides* (sqlserver) | This property contains a
comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or
(SCHEMA_NAME.TABLE_NAME), depending on the specific connector. Select
statements for the individual tables are specified in further configuration
properties, one for each table, identified by the id
'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or
'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The
valu [...]
| *sourceStructVersion* (sqlserver) | A version of the format of the publicly
visible source part in the message | v2 | String
+| *sourceTimestampMode* (sqlserver) | Configures the criteria of the attached
timestamp within the source record (ts_ms). Options include: 'commit' (the
default), the source timestamp is set to the instant when the record was
committed in the database; 'processing', the source timestamp is set to the
instant when the record was processed by Debezium. | commit | String
| *tableBlacklist* (sqlserver) | Description is not available here, please
check Debezium website for corresponding key 'table.blacklist' description. |
| String
| *tableIgnoreBuiltin* (sqlserver) | Flag specifying whether built-in tables
should be ignored. | true | boolean
| *tableWhitelist* (sqlserver) | The tables for which changes are to be
captured | | String
@@ -123,7 +130,7 @@ with the following path and query parameters:
|===
-=== Query Parameters (50 parameters):
+=== Query Parameters (57 parameters):
[width="100%",cols="2,5,^1,2",options="header"]
@@ -145,7 +152,9 @@ with the following path and query parameters:
| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer
creates an exchange. The value can be one of: InOnly, InOut, InOptionalOut | |
ExchangePattern
| *basicPropertyBinding* (advanced) | Whether the endpoint should use basic
property binding (Camel 2.x) or the newer property binding with additional
capabilities | false | boolean
| *synchronous* (advanced) | Sets whether synchronous processing should be
strictly used, or Camel is allowed to use asynchronous processing (if
supported). | false | boolean
-| *columnBlacklist* (sqlserver) | Description is not available here, please
check Debezium website for corresponding key 'column.blacklist' description. |
| String
+| *columnBlacklist* (sqlserver) | Regular expressions matching columns to
exclude from change events | | String
+| *columnWhitelist* (sqlserver) | Regular expressions matching columns to
include in change events | | String
+| *converters* (sqlserver) | Optional list of custom converters that would be
used instead of default ones. The converters are defined using the
'<converter.prefix>.type' config option and configured using options
'<converter.prefix>.<option>' | | String
| *databaseDbname* (sqlserver) | The name of the database the connector should
be monitoring. When working with a multi-tenant set-up, must be set to the CDB
name. | | String
| *databaseHistory* (sqlserver) | The name of the DatabaseHistory class that
should be used to store and recover database schema changes. The configuration
properties for the history are prefixed with the 'database.history.' string. |
io.debezium.relational.history.FileDatabaseHistory | String
| *databaseHistoryFileFilename* (sqlserver) | The path to the file that will
be used to record the database history | | String
@@ -163,17 +172,22 @@ with the following path and query parameters:
| *eventProcessingFailureHandling Mode* (sqlserver) | Specify how failures
during processing of events (i.e. when encountering a corrupted event) should
be handled, including: 'fail' (the default), an exception indicating the
problematic event and its position is raised, causing the connector to be
stopped; 'warn', the problematic event and its position will be logged and the
event will be skipped; 'ignore', the problematic event will be skipped. | fail
| String
| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milliseconds
in which the connector periodically sends heartbeat messages to a heartbeat
topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
| *heartbeatTopicsPrefix* (sqlserver) | The prefix that is used to name
heartbeat topics. Defaults to __debezium-heartbeat. | __debezium-heartbeat |
String
+| *includeSchemaChanges* (sqlserver) | Whether the connector should publish
changes in the database schema to a Kafka topic with the same name as the
database server ID. Each schema change will be recorded using a key that
contains the database name and whose value includes a logical description of
the new schema and optionally the DDL statement(s). The default is 'true'. This
is independent of how the connector internally records database history. | true
| boolean
| *maxBatchSize* (sqlserver) | Maximum size of each batch of source records.
Defaults to 2048. | 2048 | int
| *maxQueueSize* (sqlserver) | Maximum size of the queue for change events
read from the database log but not yet recorded or forwarded. Defaults to 8192,
and should always be larger than the maximum batch size. | 8192 | int
| *messageKeyColumns* (sqlserver) | A semicolon-separated list of expressions
that match fully-qualified tables and column(s) to be used as message key. Each
expression must match the pattern ':', where the table names could be defined
as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific
connector, and the key columns are a comma-separated list of columns
representing the custom key. For any table without an explicit key
configuration the table's primary key colum [...]
| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new
change events to appear after receiving no events. Defaults to 500ms. | 500ms |
long
| *provideTransactionMetadata* (sqlserver) | Enables transaction metadata
extraction together with event counting | false | boolean
+| *sanitizeFieldNames* (sqlserver) | Whether field names will be sanitized to
Avro naming conventions | false | boolean
+| *skippedOperations* (sqlserver) | The comma-separated list of operations to
skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for
deletes. By default, no operations will be skipped. | | String
| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a
snapshot will begin. | 0ms | long
| *snapshotFetchSize* (sqlserver) | The maximum number of records that should
be loaded into memory while performing a snapshot | | int
+| *snapshotIsolationMode* (sqlserver) | Controls which transaction isolation
level is used and how long the connector locks the monitored tables. The
default is 'repeatable_read', which means that repeatable read isolation level
is used. In addition, exclusive locks are taken only during schema snapshot.
Using a value of 'exclusive' ensures that the connector holds the exclusive
lock (and thus prevents any reads and updates) for all monitored tables during
the entire snapshot duration. W [...]
| *snapshotLockTimeoutMs* (sqlserver) | The maximum number of millis to wait
for table locks at the beginning of a snapshot. If locks cannot be acquired in
this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s |
long
| *snapshotMode* (sqlserver) | The criteria for running a snapshot upon
startup of the connector. Options include: 'initial' (the default) to specify
the connector should run a snapshot only when no offsets are available for the
logical server name; 'schema_only' to specify the connector should run a
snapshot of the schema when no offsets are available for the logical server
name. | initial | String
| *snapshotSelectStatement Overrides* (sqlserver) | This property contains a
comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or
(SCHEMA_NAME.TABLE_NAME), depending on the specific connector. Select
statements for the individual tables are specified in further configuration
properties, one for each table, identified by the id
'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or
'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The
valu [...]
| *sourceStructVersion* (sqlserver) | A version of the format of the publicly
visible source part in the message | v2 | String
+| *sourceTimestampMode* (sqlserver) | Configures the criteria of the attached
timestamp within the source record (ts_ms). Options include: 'commit' (the
default), the source timestamp is set to the instant when the record was
committed in the database; 'processing', the source timestamp is set to the
instant when the record was processed by Debezium. | commit | String
| *tableBlacklist* (sqlserver) | Description is not available here, please
check Debezium website for corresponding key 'table.blacklist' description. |
| String
| *tableIgnoreBuiltin* (sqlserver) | Flag specifying whether built-in tables
should be ignored. | true | boolean
| *tableWhitelist* (sqlserver) | The tables for which changes are to be
captured | | String
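
The regenerated SQL Server page likewise documents its new options as ordinary
endpoint parameters. A hedged sketch of how they combine into a
`debezium-sqlserver` URI (host, database, and connector name below are
placeholders, not values from this commit):

```java
// Sketch: a debezium-sqlserver endpoint URI exercising some of the newly
// documented options. All concrete values are placeholders.
public class DebeziumSqlServerUriExample {
    static String endpointUri() {
        return "debezium-sqlserver:myConnector"
                + "?databaseHostname=localhost"
                + "&databaseDbname=inventory"
                + "&snapshotIsolationMode=repeatable_read" // the default
                + "&sourceTimestampMode=commit"   // commit | processing
                + "&includeSchemaChanges=true";   // default is true
    }

    public static void main(String[] args) {
        System.out.println(endpointUri());
    }
}
```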
diff --git a/docs/components/modules/ROOT/pages/jpa-component.adoc
b/docs/components/modules/ROOT/pages/jpa-component.adoc
index d291b45..1e1ebfb 100644
--- a/docs/components/modules/ROOT/pages/jpa-component.adoc
+++ b/docs/components/modules/ROOT/pages/jpa-component.adoc
@@ -428,8 +428,8 @@ but the following listed types were not enhanced at build
time or at class load
at
org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127)
at
org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96)
at
org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67)
- at
org.apache.camel.test.junit4.CamelTestSupport.doSetUp(CamelTestSupport.java:238)
- at
org.apache.camel.test.junit4.CamelTestSupport.setUp(CamelTestSupport.java:208)
+ at
org.apache.camel.test.junit5.CamelTestSupport.doSetUp(CamelTestSupport.java:238)
+ at
org.apache.camel.test.junit5.CamelTestSupport.setUp(CamelTestSupport.java:208)
--------------------------------------------------------------------------------------------------------------------------------------------------------
The problem here is that the source has been compiled or recompiled through
diff --git a/docs/components/modules/ROOT/pages/jt400-component.adoc
b/docs/components/modules/ROOT/pages/jt400-component.adoc
index f9ab5e7..08d7007 100644
--- a/docs/components/modules/ROOT/pages/jt400-component.adoc
+++ b/docs/components/modules/ROOT/pages/jt400-component.adoc
@@ -13,8 +13,9 @@
*{component-header}*
-The JT400 component allows you to exchanges messages with an IBM i
-system using data queues.
+The JT400 component allows you to exchange messages with an IBM i system
+using data queues, message queues, or program call. IBM i is the
+replacement for AS/400 and iSeries servers.
Maven users will need to add the following dependency to their `pom.xml`
for this component:
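
The reworded JT400 introduction above mentions data queues as one of the
transports. As a hedged illustration of the data-queue URI shape the component
uses (user, password, host, and the library/queue path below are entirely
hypothetical placeholders):

```java
// Sketch: shape of a jt400 data-queue endpoint URI, addressing an object
// in the QSYS.LIB file system. Every concrete value is a placeholder.
public class Jt400UriExample {
    static String dataQueueUri() {
        return "jt400://USER:password@host"
                + "/QSYS.LIB/MYLIB.LIB/MYQUEUE.DTAQ"; // target data queue
    }

    public static void main(String[] args) {
        System.out.println(dataQueueUri());
    }
}
```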