Confluent Cloud source plugin
The Confluent Cloud source plugin (name: kafka, alias: confluent_cloud) lets you retrieve data from Confluent Cloud and ingest it into a telemetry pipeline. This is a pull-based source plugin.
Supported telemetry types
This plugin for Chronosphere Telemetry Pipeline supports these telemetry types:
Logs | Metrics | Traces |
---|---|---|
Configuration parameters
Use the parameters in this section to configure your plugin. The Telemetry Pipeline web interface uses the values in the Name column to describe the parameters. Items in the Key column are the YAML keys to use in pipeline configuration files.
General
Name | Key | Description | Default |
---|---|---|---|
Confluent Cloud Bootstrap Servers | brokers | Required. The Confluent Cloud bootstrap server, found in your cluster’s configuration settings. | [YOUR_BOOTSTRAP_SERVER].confluent.cloud:9092 |
Confluent Cloud Topic | topics | Required. The Confluent Cloud topic to read information from. | none |
Confluent Cloud API Key | rdkafka.sasl.username | Required. Your Confluent Cloud API key. | none |
Confluent Cloud API Secret | rdkafka.sasl.password | Required. Your Confluent Cloud API secret. | none |
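For example, a source definition that sets the general parameters might look like the following sketch. This assumes a Fluent Bit-style YAML pipeline configuration file, and the bootstrap server, topic, and credential values are placeholders:

```yaml
pipeline:
  inputs:
    # Confluent Cloud source; all values below are placeholders.
    - name: kafka
      brokers: "YOUR_BOOTSTRAP_SERVER.confluent.cloud:9092"
      topics: orders
      rdkafka.sasl.username: YOUR_API_KEY
      rdkafka.sasl.password: YOUR_API_SECRET
```

Avoid committing the API secret to version control in plain text; inject it from a secrets store where possible.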
Advanced
Name | Key | Description | Default |
---|---|---|---|
Minimum Queued Messages | rdkafka.queued.min.messages | The minimum number of messages per topic and partition that Telemetry Pipeline tries to maintain in the local consumer queue. | 10 |
Session Timeout (ms) | rdkafka.session.timeout.ms | How long Telemetry Pipeline waits before terminating a session connection. | 45000 |
Security Protocol | rdkafka.security.protocol | The security protocol for the Confluent Cloud connection. If you require OAuth or OpenID, contact Chronosphere Support. | SASL_SSL |
SASL Mechanism | rdkafka.sasl.mechanism | The transport mechanism for the SASL connection. | PLAIN |
Memory Buffer Limit | mem_buf_limit | For pipelines with the Deployment or DaemonSet workload type only. Sets a limit for how much buffered data the plugin can write to memory, which affects backpressure. This value must follow Fluent Bit’s rules for unit sizes. If unspecified, no limit is enforced. | none |
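As a sketch, the advanced keys sit alongside the general parameters in the same source definition. The values shown are the defaults from the table above, plus an illustrative mem_buf_limit:

```yaml
pipeline:
  inputs:
    - name: kafka
      # ...general parameters (brokers, topics, credentials)...
      rdkafka.queued.min.messages: 10
      rdkafka.session.timeout.ms: 45000
      rdkafka.security.protocol: SASL_SSL
      rdkafka.sasl.mechanism: PLAIN
      mem_buf_limit: 50M  # illustrative; if unspecified, no limit is enforced
```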
Security and TLS
Name | Key | Description | Default |
---|---|---|---|
TLS | tls | If true, enables TLS/SSL. If false, disables TLS/SSL. Accepted values: true, false. | false |
TLS Certificate Validation | tls.verify | If on, and if tls is true, enables TLS/SSL certificate validation. If off, disables TLS/SSL certificate validation. Accepted values: on, off. | on |
TLS Debug Level | tls.debug | Sets TLS debug verbosity level. Accepted values: 0 (No debug), 1 (Error), 2 (State change), 3 (Informational), 4 (Verbose). | 1 |
CA Certificate File Path | tls.ca_file | Absolute path to CA certificate file. | none |
Certificate File Path | tls.crt_file | Absolute path to certificate file. | none |
Private Key File Path | tls.key_file | Absolute path to private key file. | none |
Private Key Path Password | tls.key_passwd | Password for private key file. | none |
TLS SNI Hostname Extension | tls.vhost | Hostname to be used for TLS SNI extension. | none |
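For instance, a source that enables TLS with certificate validation might add keys like the following sketch; the certificate and key paths are placeholders:

```yaml
pipeline:
  inputs:
    - name: kafka
      # ...general parameters...
      tls: true
      tls.verify: on
      tls.debug: 1
      tls.ca_file: /path/to/ca.pem        # placeholder path
      tls.crt_file: /path/to/client.pem   # placeholder path
      tls.key_file: /path/to/client.key   # placeholder path
```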
Other
This parameter doesn’t have an equivalent setting in the Telemetry Pipeline web interface, but you can use it in pipeline configuration files.
Name | Key | Description | Default |
---|---|---|---|
none | buffer_max_size | Sets the maximum chunk size for buffered data. If a single log exceeds this size, the plugin drops that log. | 4M |
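For example, to raise the maximum chunk size from the default 4M to 8M (an illustrative value), add the key directly to the source definition:

```yaml
pipeline:
  inputs:
    - name: kafka
      # ...other parameters...
      buffer_max_size: 8M  # illustrative value; default is 4M
```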
Extended librdkafka parameters
This plugin uses the librdkafka library. Certain configuration parameters available through the Telemetry Pipeline UI are based on librdkafka settings. These parameters generally use the rdkafka. prefix.
In addition to the parameters available through the Telemetry Pipeline UI, you can customize any of the librdkafka configuration properties by adding them to a pipeline configuration file. To do so, add the rdkafka. prefix to the name of that property.
For example, to customize the socket.keepalive.enable property, add the rdkafka.socket.keepalive.enable key to your configuration file.
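As a sketch, the prefixed key sits alongside the plugin’s other settings:

```yaml
pipeline:
  inputs:
    - name: kafka
      # ...other parameters...
      # Pass-through librdkafka property: rdkafka. prefix + property name.
      rdkafka.socket.keepalive.enable: true
```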
Don’t use librdkafka properties to configure a pipeline’s memory buffer. Instead, use the buffer_max_size parameter.