This project demonstrates how to build a Confluent Kafka pipeline that collects, transforms, and stores sensor data using Kafka and MongoDB. It provides a step-by-step guide on publishing sensor data to Kafka topics, streaming and transforming the data with Confluent Kafka Streams, and finally consuming and storing it in a MongoDB database.
How to set up Confluent Kafka
To use Confluent Kafka, we need the following details from the Confluent dashboard (a client-configuration sketch follows the list).
confluentClusterName = ""
confluentBootstrapServers = ""
confluentTopicName = ""
confluentApiKey = ""
confluentSecret = ""
confluentSchemaApiKey = ""
confluentSchemaSecret = ""
endpoint = ""
To store the consumed Confluent Kafka data, we need the following connection detail from MongoDB Atlas.
MONGO_DB_URL = "mongodb+srv://Rohii:<password>@cluster9.fgrr4ct5.mongodb.net/?retryWrites=true&w=majority"
- Python
- Bash
- MongoDB
Step 1: Check that conda is installed
conda --version
Step 2: Create a conda environment
conda create -p venv python==3.10 -y
Step 3: Activate the conda environment
conda activate venv/
Step 4: Install the requirements
pip install -r requirements.txt
Step 5: Run the producer
Run producer_main.py to produce data from the data source to the Kafka topics in JSON format.
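For orientation, here is a minimal sketch of what the producing step looks like with the confluent_kafka client; the actual producer_main.py may structure this differently:

```python
import json
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": confluentBootstrapServers,
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": confluentApiKey,
    "sasl.password": confluentSecret,
})

# One hypothetical sensor reading, serialized to JSON
record = {"sensor_id": "s1", "temperature": 23.5}
producer.produce(confluentTopicName, key="s1", value=json.dumps(record))
producer.flush()  # block until the message is delivered
```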
Step 6: Run the consumer
Run consumer_main.py to consume data from Confluent Kafka and store it in MongoDB in JSON format.
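And a corresponding sketch of the consuming step: poll the topic and insert each JSON message into MongoDB. The group id and the database/collection names are placeholders, not necessarily what consumer_main.py uses:

```python
import json
from confluent_kafka import Consumer
from pymongo import MongoClient

consumer = Consumer({
    "bootstrap.servers": confluentBootstrapServers,
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": confluentApiKey,
    "sasl.password": confluentSecret,
    "group.id": "sensor-consumer-group",  # placeholder group id
    "auto.offset.reset": "earliest",
})
consumer.subscribe([confluentTopicName])

# Placeholder database/collection names
collection = MongoClient(MONGO_DB_URL)["sensor_db"]["sensor_readings"]

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    collection.insert_one(json.loads(msg.value()))
```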
If you'd like to contribute to this project, please follow these guidelines:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them with descriptive messages.
- Push your branch to your fork.
- Create a pull request to merge your changes into the main branch of this repository.
This project is licensed under the MIT License. Feel free to use, modify, and distribute it as needed.