
CDC Inbound Endpoint Reference

The following parameters allow you to configure the CDC Inbound Endpoint for your scenario.

| Parameter | Description | Required | Default Value |
|-----------|-------------|----------|---------------|
| interval | The polling interval for the inbound endpoint, in milliseconds. | Yes | - |
| coordination | Applicable only in a clustered environment, where an inbound endpoint is executed on all worker nodes. When set to `true`, the inbound endpoint runs on only a single worker node; if that worker goes down, the inbound endpoint starts on another available worker in the cluster. | Yes | true |
| sequential | Whether the messages should be polled and injected sequentially. | Yes | true |
| snapshot.mode | Specifies the criteria for running a snapshot when the connector starts. Possible values: `always`, `initial`, `initial_only`, `schema_only`, `no_data`, `recovery`, `when_needed`, `configuration_based`, and `custom`. | Yes | initial |
| connector.class | The name of the Java class for the connector.<br/>Examples:<br/>For a MySQL database, `io.debezium.connector.mysql.MySqlConnector`<br/>For a PostgreSQL database, `io.debezium.connector.postgresql.PostgresConnector`<br/>For an Oracle database, `io.debezium.connector.oracle.OracleConnector`<br/>For a Db2 database, `io.debezium.connector.db2.Db2Connector` | Yes | - |
| database.hostname | IP address or hostname of the database server. | Yes | - |
| database.port | Port number (integer) of the database server. | Yes | - |
| database.user | Name of the database user to use when connecting to the database server. | Yes | - |
| database.password | The password used to connect to the database.<br/>Example: `your_password` or `{wso2:vault-lookup('password_alias')}` | Yes | - |
| database.dbname | The name of the database to listen to.<br/>*Applicable only for MySQL, PostgreSQL, Oracle, and Db2. | Yes | - |
| connector.name | A unique name for the connector. If not specified, the inbound endpoint name is used as the connector name. | No | - |
| topic.prefix | Topic prefix that provides a namespace for the database server that you want Debezium to capture. The prefix must be unique across all connectors, since it is used as the prefix for all Kafka topic names that receive records from this connector. Use only alphanumeric characters, hyphens, dots, and underscores in the database server logical name. | No | - |
| schema.history.internal | The name of the Java class responsible for persisting the database schema history. It must implement the `io.debezium.relational.history.SchemaHistory` interface.<br/>Refer to the Database schema history properties documentation and the Debezium source configuration documentation for more information. | No | io.debezium.storage.file.history.FileSchemaHistory |
| schema.history.internal.file.filename | The path to the file where the database schema history is stored. Required only when `schema.history.internal` is set to `io.debezium.storage.file.history.FileSchemaHistory`. By default, the file is stored in the `<MI_HOME>/cdc/schemaHistory` directory. | No | - |
| schema.history.internal.kafka.topic | The Kafka topic where the database schema history is stored. Required when `schema.history.internal` is set to `io.debezium.storage.kafka.history.KafkaSchemaHistory`. | No | - |
| schema.history.internal.kafka.bootstrap.servers | The initial list of Kafka cluster servers to connect to. The cluster provides the topic that stores the database schema history. Required when `schema.history.internal` is set to `io.debezium.storage.kafka.history.KafkaSchemaHistory`. | No | - |
| offset.storage | The name of the Java class responsible for persisting connector offsets. It must implement the `org.apache.kafka.connect.storage.OffsetBackingStore` interface. | No | org.apache.kafka.connect.storage.FileOffsetBackingStore |
| offset.storage.file.filename | The path to the file where offsets are stored. Required when `offset.storage` is set to `org.apache.kafka.connect.storage.FileOffsetBackingStore`. By default, the file is stored in the `<MI_HOME>/cdc/offsetStorage` directory. | No | - |
| offset.storage.topic | The name of the Kafka topic where offsets are stored. Required when `offset.storage` is set to `org.apache.kafka.connect.storage.KafkaOffsetBackingStore`. | No | - |
| offset.storage.partitions | The number of partitions used when creating the offset storage topic. Required when `offset.storage` is set to `org.apache.kafka.connect.storage.KafkaOffsetBackingStore`. | No | - |
| offset.storage.replication.factor | The replication factor used when creating the offset storage topic. Required when `offset.storage` is set to `org.apache.kafka.connect.storage.KafkaOffsetBackingStore`. | No | - |
| database.instance | The instance name of the SQL Server named instance.<br/>*Applicable only for SQL Server. | No | - |
| database.names | A comma-separated list of the SQL Server database names from which to stream changes.<br/>*Applicable only for SQL Server. | No | - |
| database.server.id | A numeric ID for this database client, which must be unique across all currently running database processes in the cluster.<br/>*Applicable only for MySQL and MariaDB. | No | - |
| table.include.list | The list of tables in the selected database whose changes should be captured.<br/>Example: `inventory.products` | No | - |
| allowed.operations | The operations to listen for on the specified database tables, given as comma-separated values from `create`, `update`, `delete`, and `truncate`.<br/>Example: `create, update, delete`<br/>By default, `truncate` operations are skipped. | No | - |
| database.out.server.name | Name of the XStream outbound server configured in the database.<br/>*Applicable only for Oracle. | No | - |
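
The parameters above are supplied as `<parameter>` entries inside a Synapse inbound endpoint definition. The following sketch shows how the required parameters might be combined for a MySQL source. It is illustrative only: the endpoint, sequence, credential, and topic-prefix values are placeholders, and the `protocol="cdc"` attribute is assumed from standard Synapse inbound endpoint syntax, so verify it against the documentation for your MI version.

```xml
<!-- Illustrative sketch: a CDC inbound endpoint polling a MySQL database.
     Names, credentials, and the protocol attribute are placeholders/assumptions. -->
<inboundEndpoint name="MySqlCdcInboundEp" protocol="cdc"
                 sequence="cdcEventSeq" onError="cdcErrorSeq" suspend="false">
    <parameters>
        <!-- Poll every second, injecting events sequentially -->
        <parameter name="interval">1000</parameter>
        <parameter name="sequential">true</parameter>
        <parameter name="coordination">true</parameter>
        <parameter name="snapshot.mode">initial</parameter>
        <!-- Debezium MySQL connector and connection details -->
        <parameter name="connector.class">io.debezium.connector.mysql.MySqlConnector</parameter>
        <parameter name="database.hostname">localhost</parameter>
        <parameter name="database.port">3306</parameter>
        <parameter name="database.user">dbuser</parameter>
        <parameter name="database.password">{wso2:vault-lookup('db_password_alias')}</parameter>
        <parameter name="database.dbname">inventory</parameter>
        <!-- MySQL-specific: unique numeric client ID -->
        <parameter name="database.server.id">184054</parameter>
        <parameter name="topic.prefix">inventory-server</parameter>
        <!-- Capture only the products table; skip truncate events -->
        <parameter name="table.include.list">inventory.products</parameter>
        <parameter name="allowed.operations">create, update, delete</parameter>
    </parameters>
</inboundEndpoint>
```

With the defaults above, schema history and offsets are persisted to files under the `<MI_HOME>/cdc/` directory.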

For more custom configurations, please refer to the Debezium documentation.
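
As a further sketch, the storage-related parameters from the table can be combined to persist schema history and offsets in Kafka instead of local files. This fragment would replace the file-based defaults inside the `<parameters>` section of an inbound endpoint definition; the topic names and broker addresses are placeholders.

```xml
<!-- Fragment: Kafka-backed persistence for schema history and connector offsets.
     Topic names and bootstrap server addresses are placeholders. -->
<parameter name="schema.history.internal">io.debezium.storage.kafka.history.KafkaSchemaHistory</parameter>
<parameter name="schema.history.internal.kafka.topic">schema-history.inventory</parameter>
<parameter name="schema.history.internal.kafka.bootstrap.servers">kafka1:9092,kafka2:9092</parameter>
<parameter name="offset.storage">org.apache.kafka.connect.storage.KafkaOffsetBackingStore</parameter>
<parameter name="offset.storage.topic">cdc-offsets.inventory</parameter>
<parameter name="offset.storage.partitions">1</parameter>
<parameter name="offset.storage.replication.factor">3</parameter>
```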