
ODE Confluent Cloud Support

Tony English edited this page Mar 2, 2022 · 4 revisions

(Content in development...)

What is Confluent Cloud?

Confluent Cloud is a fully managed Kafka service hosted on Confluent's cloud platform. Using it saves ODE operators the overhead of managing their own Kafka and ZooKeeper deployments. Confluent Cloud also offers additional features, such as Schema Registry and ksqlDB, that could become useful to the ODE in the future. For more information on Confluent Cloud and the additional services it provides, see Confluent's documentation.

How does the ODE utilize Confluent Cloud?

Currently, the ODE is able to connect to external Kafka deployments over SASL. This enables the ODE to behave identically while routing its data through Kafka hosted by Confluent Cloud instead (basic, standard, or dedicated clusters). Currently, the ODE does not support additional Confluent Cloud features such as Schema Registry and ksqlDB.
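Under the hood, "connecting over SASL" here means the standard Kafka SASL_SSL/PLAIN client settings that Confluent Cloud requires, with the API key and secret as the credentials. A minimal sketch of the client properties — the bootstrap server and credential values below are placeholders, not real endpoints:

```properties
# Placeholder values; substitute your cluster's bootstrap server
# and the API key/secret generated for it.
bootstrap.servers=pkc-xxxxx.us-east-2.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
```

When KAFKA_TYPE is set to "CONFLUENT" (see step 4 below), the ODE applies settings along these lines to its consumer and producer connections.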

The following ODE sub-modules now fully support SASL authentication:

When should Confluent Cloud be used for the ODE?

If you are finding it difficult to keep your Kafka/ZooKeeper deployment stable under the data throughput your use case or project requires, it may be reasonable to consider Confluent Cloud. It is always good to ensure your Kafka deployments are properly configured first, but sometimes the overhead of managing Kafka clusters is challenging. This can especially be the case when deploying multiple Kafka brokers in a Kubernetes environment. Offloading that overhead to a fully managed service can help, and Confluent Cloud also offers developer support.

Deploying the ODE with Confluent Cloud as the Kafka Broker

Deploying the ODE with Confluent Cloud as the targeted Kafka broker endpoint is possible to do once you have your own Confluent Cloud account with at least one Confluent Cloud environment and cluster.

1. Install Confluent CLI

  1. curl -L --http1.1 https://cnfl.io/cli | sh -s -- -b /usr/local/bin
  2. Verify installation with confluent version

2. Configure Confluent CLI

Follow the step-by-step guide on this page: Connect to Confluent. Remember to document the API key you generate and its secret. This is needed for the ODE environment variables.
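As a rough sketch of the login and key-generation steps (assuming the CLI from step 1 is on your PATH; the cluster ID below is a placeholder — find yours with confluent kafka cluster list):

```shell
# Placeholder cluster ID; replace with your own cluster's ID.
CLUSTER_ID="lkc-xxxxxx"

# Only invoke the CLI if it is actually installed.
if command -v confluent >/dev/null 2>&1; then
  # Authenticate against your Confluent Cloud account.
  confluent login
  # Generate an API key/secret pair scoped to the cluster.
  # The secret is shown only once -- record both values now.
  confluent api-key create --resource "$CLUSTER_ID"
fi
```

The key and secret printed here are the values used for CONFLUENT_KEY and CONFLUENT_SECRET in step 4.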

3. Create ODE Kafka Topics

Confluent Cloud does not allow automatic topic creation, so topics must be created before the ODE is deployed. This can be done manually through the Confluent Cloud web portal, or you can use the CLI. A shell script can be found in the project that you can run to generate all of the topics, each with XX partitions (8-10 is a good starting point).
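A hedged sketch of what the project's topic-creation script does — the topic names below are a small illustrative subset, not the full list (consult the script in the repository for the complete set), and the partition count is only a starting point:

```shell
# Illustrative subset of ODE topic names; the real script covers all topics.
TOPICS="topic.OdeBsmJson topic.OdeTimJson topic.OdeBsmPojo"
PARTITIONS=8

for t in $TOPICS; do
  if command -v confluent >/dev/null 2>&1; then
    # Requires a prior 'confluent login' and a selected cluster.
    confluent kafka topic create "$t" --partitions "$PARTITIONS"
  else
    # Dry run when the CLI is unavailable.
    echo "would create: $t ($PARTITIONS partitions)"
  fi
done
```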

4. Edit Environment Variables

The environment variables within your .env file must be set correctly for the ODE and its sub-modules to connect to Confluent Cloud. Here is a brief overview of the required variables:

  • DOCKER_HOST_IP: must be set to the bootstrap server address (excluding the port). This can be found in the Confluent Cloud console under "Cluster Settings" > "Bootstrap server".
  • KAFKA_TYPE: must be set to "CONFLUENT". This lets the project know to include the SASL authentication within the consumer and producer connections.
  • CONFLUENT_KEY: must be set to the API key being utilized for Confluent Cloud. This is the same key that was generated in step 2.
  • CONFLUENT_SECRET: must be set to the API secret being utilized for Confluent Cloud. This is the same secret that was generated in step 2.
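Putting the four variables together, a .env fragment might look like the following — the bootstrap server is a placeholder, and the key/secret are the values recorded in step 2:

```
# Bootstrap server address from "Cluster Settings", without the port.
DOCKER_HOST_IP=pkc-xxxxx.us-east-2.aws.confluent.cloud
# Tells the ODE to add SASL authentication to consumers and producers.
KAFKA_TYPE=CONFLUENT
# API key and secret generated in step 2.
CONFLUENT_KEY=<your API key>
CONFLUENT_SECRET=<your API secret>
```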

5. Deploy with docker-compose

Just as with the original ODE, the docker-compose command is used to deploy the Confluent Cloud-supported ODE. However, this deployment uses a separate compose file, docker-compose-confluent-cloud.yml. Run the following commands to perform the deployment:

  1. cp docker-compose.yml docker-compose-local.yml
  2. cp docker-compose-confluent-cloud.yml docker-compose.yml
  3. docker-compose up -d
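An equivalent approach that avoids renaming files is to point docker-compose at the Confluent compose file directly with the -f flag — a sketch, assuming the file name above:

```shell
COMPOSE_FILE="docker-compose-confluent-cloud.yml"

# Deploy directly from the Confluent compose file, leaving
# docker-compose.yml untouched; skip if prerequisites are missing.
if command -v docker-compose >/dev/null 2>&1 && [ -f "$COMPOSE_FILE" ]; then
  docker-compose -f "$COMPOSE_FILE" up -d
else
  echo "skipping: docker-compose or $COMPOSE_FILE not available"
fi
```

The trade-off is that every subsequent docker-compose invocation (logs, down, ps) must repeat the -f flag, which is why the copy-and-rename steps above may be more convenient day to day.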

Important Disclaimer: Confluent Cloud Pricing

Confluent Cloud is a helpful service that reduces the overhead of managing your ODE deployment, but it is not free, and you will want to keep an eye on costs. Currently the ODE does not support the Schema Registry, so messages will be sent to Confluent Cloud as plain text in the form of JSON, POJO, or the ASN1_Codec's XML. Confluent Cloud is billed based on data produced, data consumed, and data stored. It is important to perform some preliminary checks of your data throughput with a trial Confluent Cloud account before committing to Confluent Cloud. More information can be found in Confluent's billing documentation.
