Overview
Learn how Augtera supports data ingestion from a third-party Kafka Broker
For cases where data is already in Kafka, the Augtera stack can be configured to process data from the Kafka topics of interest. There are two deployment models to consider.
On-premise: In this mode, the Kafka cluster has restricted accessibility and is reachable only from locations within the local network. For this scenario, the collector stack is deployed at a location with access to the Kafka brokers. The platform stack can be deployed at any location (private cloud, public cloud, or a different on-premise location). Appropriate security policies must be in place to allow Augtera's collector stack to connect to Augtera's platform stack; the collector stack uses HTTPS for this connection.
Off-premise: In this mode, the Kafka cluster is reachable from an off-premise location, such as Augtera's cloud. For this scenario, the collector stack is deployed at the off-premise location. Appropriate security policies are recommended to allow Augtera's stack to access the Kafka cluster, and use of TLS is recommended for this mode.
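For reference, the sketch below shows what a TLS-enabled consumer connection to a Kafka cluster can look like. It uses the open-source kafka-python client purely as an illustration; the broker address, topic name, and certificate paths are placeholders and do not represent Augtera's internal implementation.

```python
# Illustrative only: a TLS-enabled Kafka consumer using the kafka-python client.
# The broker address, topic name, and certificate paths are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "syslog-topic",                               # hypothetical topic of interest
    bootstrap_servers=["broker1.example.com:9093"],
    security_protocol="SSL",                      # TLS, as recommended for off-premise access
    ssl_cafile="/etc/kafka/certs/ca.pem",         # CA certificate used to verify the brokers
    ssl_certfile="/etc/kafka/certs/client.pem",   # client certificate
    ssl_keyfile="/etc/kafka/certs/client.key",    # client private key
    auto_offset_reset="latest",
)

for record in consumer:
    print(record.topic, record.partition, record.offset, len(record.value))
```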
The following diagram shows the data flow from the Kafka cluster to the Augtera stack.
Augtera also allows data to be sent over UDP. However, you must pick only one transport mechanism, either UDP or Kafka, to transfer your data to the Augtera stack.
This section describes how Augtera decodes data ingested from Kafka.
Augtera configuration allows different protocols, such as sFlow, to be configured for ingestion. A different transport mechanism can be configured for each protocol; for example, sFlow can be configured to use UDP while syslog uses Kafka. With UDP or TCP transport, the encoding of the data is implied by the protocol specification. For example, syslog over UDP follows a well-defined standard specified in an IETF RFC. For data in Kafka, however, the encoding is not implied.
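As a concrete illustration of an encoding implied by the protocol specification, the snippet below breaks down a syslog message in the IETF RFC 5424 format; the hostname, application, and message content are made-up values.

```python
# A syslog message in IETF RFC 5424 format; the field values are made-up examples.
# When such a message arrives over UDP, its layout is fully defined by the RFC.
RFC5424_EXAMPLE = (
    "<165>1 "                          # PRI (facility 20, severity 5) and version
    "2023-10-11T22:14:15.003Z "        # timestamp
    "host1.example.com "               # hostname
    "app "                             # app-name
    "1234 "                            # procid
    "ID47 "                            # msgid
    "- "                               # structured data (none)
    "Interface ge-0/0/1 went down"     # free-form message
)
print(RFC5424_EXAMPLE)
```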
There are a few scenarios to consider for data encoding:
Standard format: Data pushed into Kafka in a standard format can be ingested by Augtera without the need for any special parsers. For example, if syslog messages in Kafka are to be ingested and the syslog pushed into Kafka is in the standard IETF format, Augtera requires no special configuration.
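The sketch below, using the kafka-python client with a placeholder broker address and topic name, shows a standard-format syslog line being pushed into a Kafka topic; messages like this fall under the standard-format case and need no special parser.

```python
# Illustrative only: publishing a standard (RFC 5424) syslog line to a Kafka topic.
# The broker address and topic name are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=["broker1.example.com:9092"])

syslog_line = (
    "<165>1 2023-10-11T22:14:15.003Z host1.example.com app 1234 ID47 - "
    "Interface ge-0/0/1 went down"
)
producer.send("syslog-topic", value=syslog_line.encode("utf-8"))
producer.flush()
```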
Flat JSON format: Data pushed into Kafka in flat JSON format can be ingested by Augtera without the need for any special parsers. Flat JSON is an encoding that contains only key-value pairs, with no nested hierarchy.
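To make the distinction concrete, the example below (with made-up field names) contrasts a flat JSON record that can be ingested directly with a nested record that falls under the proprietary JSON case described next.

```python
import json

# Flat JSON: only key-value pairs, no nested hierarchy -- ingestible without a special parser.
flat_record = {
    "timestamp": "2023-10-11T22:14:15.003Z",
    "device": "host1.example.com",
    "interface": "ge-0/0/1",
    "oper_status": "down",
}

# Nested JSON: contains a hierarchy, so it falls under the proprietary JSON case
# and would need a special parser to decode.
nested_record = {
    "device": {"name": "host1.example.com"},
    "event": {"interface": "ge-0/0/1", "oper_status": "down"},
}

print(json.dumps(flat_record))
print(json.dumps(nested_record))
```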
Proprietary JSON format: Data pushed into Kafka in a proprietary JSON format can be ingested by Augtera with the help of special parsers. Special parsers are plugins developed by Augtera to decode the proprietary format. These plugins can be programmed in real time on the platform stack without requiring Augtera to develop and deploy new code. A simplified sketch of this kind of transformation is shown below.
Proprietary format: Data pushed into Kafka in any other format will require Augtera to develop and deploy a custom parser.
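As a purely hypothetical illustration of what a special parser for proprietary JSON might do (the interface shown here is not Augtera's plugin API), the sketch below flattens a nested JSON record into the flat key-value pairs described above.

```python
import json

def flatten(record: dict, prefix: str = "") -> dict:
    """Hypothetical helper: flatten a nested JSON object into dotted key-value pairs."""
    flat = {}
    for key, value in record.items():
        full_key = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = value
    return flat

# A made-up proprietary record as it might arrive on a Kafka topic.
raw = '{"device": {"name": "host1.example.com"}, "event": {"interface": "ge-0/0/1", "oper_status": "down"}}'
print(flatten(json.loads(raw)))
# -> {'device.name': 'host1.example.com', 'event.interface': 'ge-0/0/1', 'event.oper_status': 'down'}
```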