The Cyber Incident Monitoring System is a comprehensive tool designed to provide real-time feeds of cyber incidents occurring in cyberspace within the Indian region. This tool aggregates data from various sources, including social media (e.g., Twitter), news websites, and other relevant feeds. It leverages machine learning to classify incidents and generates reports that help stakeholders understand the nature and frequency of cyber incidents in real time.
This project consists of a full-stack application with a backend that handles data collection, processing, and storage, as well as a frontend that presents data in an interactive and user-friendly interface.
- Real-time Incident Collection: Scrapes data from sources such as Twitter and news websites.
- Incident Classification: Uses a machine learning model to classify cyber incidents.
- Visualization Dashboard: Provides a frontend dashboard for visualizing and interacting with the data.
- API Access: Offers a REST API for accessing incident data and reports.
- Configurable and Scalable: Supports configurations for deployment in different environments.
- React: For building the user interface.
- CSS & Bootstrap: For styling and responsive design.
- Flask: API server for handling requests.
- BeautifulSoup and Tweepy: For web scraping and social media data collection.
- MongoDB / PostgreSQL: For data storage.
- Celery and Airflow: For scheduling and orchestrating data ingestion tasks.
- scikit-learn: For building and training classification models (a minimal sketch follows this list).
- Pandas & NumPy: For data processing.
- Pickle: For saving and loading models.
- Docker: Containerization for consistent deployment.
- Kubernetes: For managing containerized applications (if needed).
- NGINX: As a reverse proxy for the backend.
- AWS/GCP: Cloud deployment and storage.
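To illustrate how the scikit-learn and Pickle pieces fit together, here is a minimal classification sketch; the training texts, labels, and file name are placeholders for demonstration only, not the project's actual training code:

```python
# Illustrative sketch: train a simple text classifier for incident categories
# and persist it with pickle. The training data, labels, and file name are
# placeholders, not the project's real dataset or model.
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "bank customers hit by phishing emails",
    "ransomware encrypts hospital records",
    "DDoS attack takes government portal offline",
    "credential phishing campaign on social media",
]
labels = ["phishing", "ransomware", "ddos", "phishing"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Save the trained model so the API or pipeline can load it later.
with open("incident_classifier.pkl", "wb") as f:
    pickle.dump(model, f)

# Load the model and classify a new incident description.
with open("incident_classifier.pkl", "rb") as f:
    clf = pickle.load(f)
print(clf.predict(["new phishing site impersonating a bank"]))
```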
The project follows a microservices-based architecture, with each major component isolated for better scalability and maintainability. Here’s a quick overview:
- Frontend: A React-based web application that connects to the backend API to display incident data in real time.
- Backend: A Flask server that provides API endpoints for data ingestion, processing, and retrieval (a minimal sketch follows this list).
- Data Pipeline: A data pipeline that collects data from sources such as Twitter and news websites, processes it, and stores it in the database.
- Machine Learning: A trained model to classify incident types and severity based on historical data.
- Database: A MongoDB/PostgreSQL database that stores raw and processed incident data.
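To give a sense of how the backend component exposes incident data, here is a minimal, hypothetical Flask sketch. The route paths match the API section below, but the in-memory store and field names are assumptions standing in for the real database layer:

```python
# Minimal illustrative sketch of a Flask backend exposing incident data.
# The in-memory list stands in for the MongoDB/PostgreSQL storage layer;
# the field names here are assumptions for demonstration only.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Placeholder store; the real project persists incidents in a database.
INCIDENTS = [
    {"id": 1, "title": "Sample phishing report", "severity": "medium"},
]

@app.route("/api/incidents", methods=["GET"])
def list_incidents():
    return jsonify(INCIDENTS)

@app.route("/api/incidents/<int:incident_id>", methods=["GET"])
def get_incident(incident_id):
    incident = next((i for i in INCIDENTS if i["id"] == incident_id), None)
    if incident is None:
        abort(404)
    return jsonify(incident)

@app.route("/api/incidents", methods=["POST"])
def add_incident():
    payload = request.get_json(force=True)
    payload["id"] = len(INCIDENTS) + 1
    INCIDENTS.append(payload)
    return jsonify(payload), 201

if __name__ == "__main__":
    app.run(debug=True)
```

In the real service, these handlers would query the database and return classified incident records instead of the placeholder list.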
- Node.js and npm (for the frontend)
- Python 3.7+
- Docker (for containerization)
- Cloud provider account (optional, for deployment)
- Clone the Repository:
  - `git clone https://github.com/your-username/cyber_incident_monitoring.git`
  - `cd cyber_incident_monitoring`
- Setup Backend:
  - `cd backend`
  - `python3 -m venv venv`
  - `source venv/bin/activate`
  - `pip install -r requirements.txt`
- Setup Frontend:
  - `cd ../frontend`
  - `npm install`
- Configure Environment Variables:
  - Copy `.env.example` to `.env` and set the necessary environment variables for database connections, API keys, etc.
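For reference, a `.env` file for this kind of stack usually contains entries along these lines; the variable names below are illustrative guesses, so rely on `.env.example` for the ones the project actually expects:

```
# Illustrative values only; the real variable names are defined in .env.example
DATABASE_URL=postgresql://user:password@localhost:5432/cyber_incidents
MONGO_URI=mongodb://localhost:27017/cyber_incidents
TWITTER_API_KEY=your-twitter-api-key
TWITTER_API_SECRET=your-twitter-api-secret
FLASK_ENV=development
```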
- Run Backend:
  - `cd backend`
  - `python app.py`
- Run Frontend:
  - `cd ../frontend`
  - `npm start`
- Database Setup:
  - Initialize the database by running the `init_db.py` script from the `database/` directory.
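If MongoDB is used as the store, initialization often amounts to creating the collection and its indexes; the snippet below is a hypothetical sketch of that, not the actual contents of `init_db.py`:

```python
# Hypothetical sketch of database initialization with MongoDB (pymongo).
# The connection string, database name, and index fields are placeholders;
# see database/init_db.py for the project's actual setup (MongoDB or PostgreSQL).
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
db = client["cyber_incidents"]

# Index incidents by timestamp and category for fast dashboard queries.
db.incidents.create_index([("timestamp", ASCENDING)])
db.incidents.create_index([("category", ASCENDING)])
print("Database initialized.")
```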
- Run Data Pipeline:
  - Start the data ingestion and processing pipeline using Airflow or Celery, as configured in the `data_pipeline/` folder.
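As a rough illustration of the Celery option, a periodic ingestion task could be wired up along these lines; the broker URL, schedule, and task body are assumptions, not the project's actual `data_pipeline/` configuration:

```python
# Hypothetical sketch of a periodic ingestion task with Celery.
# Broker URL, schedule, and task body are placeholders; the real pipeline
# lives in data_pipeline/ and may use Airflow instead.
from celery import Celery

app = Celery("data_pipeline", broker="redis://localhost:6379/0")

# Run the ingestion task every 15 minutes (requires a celery beat process).
app.conf.beat_schedule = {
    "ingest-incidents": {
        "task": "data_pipeline.ingest_incidents",
        "schedule": 15 * 60,  # seconds
    },
}

@app.task(name="data_pipeline.ingest_incidents")
def ingest_incidents():
    # In the real pipeline this would scrape Twitter/news sources,
    # classify the results, and write them to the database.
    print("Collecting and processing new incident data...")
```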
Once the frontend and backend servers are running, open http://localhost:3000 in your browser to access the web application.
To collect and process real-time data:
`python data_pipeline/jobs/ingest_data.py`
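For a sense of what an ingestion job does, here is a hypothetical scraping snippet using requests and BeautifulSoup; the URL, selector, and keyword filter are placeholders rather than the real sources and parsing rules in `ingest_data.py`:

```python
# Hypothetical news-scraping sketch. The URL, CSS selector, and keyword list
# are placeholders; the real job (data_pipeline/jobs/ingest_data.py) defines
# its own sources and parsing logic, and also pulls from Twitter via Tweepy.
import requests
from bs4 import BeautifulSoup

KEYWORDS = ("ransomware", "phishing", "data breach", "ddos")

def fetch_headlines(url):
    """Return headlines from a news page that mention cyber-incident keywords."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    headlines = [h.get_text(strip=True) for h in soup.find_all("h2")]
    return [h for h in headlines if any(k in h.lower() for k in KEYWORDS)]

if __name__ == "__main__":
    for headline in fetch_headlines("https://example.com/cybersecurity-news"):
        print(headline)
```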
The backend provides several core API endpoints for interacting with the incident data:
- `GET /api/incidents`: Retrieve a list of cyber incidents.
- `GET /api/incidents/<id>`: Retrieve details of a specific incident.
- `POST /api/incidents`: Add a new cyber incident (for testing purposes).
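As a usage example, the endpoints can be exercised with Python's `requests` library; this assumes the backend listens on Flask's default port 5000, and the POST body fields shown are only illustrative, not the project's actual incident schema:

```python
# Hypothetical usage sketch against a locally running backend.
import requests

BASE_URL = "http://localhost:5000"

# List all incidents.
incidents = requests.get(f"{BASE_URL}/api/incidents").json()
print(f"Fetched {len(incidents)} incidents")

# Add a test incident (illustrative payload).
new_incident = {
    "title": "Phishing campaign targeting banking users",
    "source": "twitter",
    "severity": "high",
}
resp = requests.post(f"{BASE_URL}/api/incidents", json=new_incident)
print(resp.status_code, resp.json())

# Fetch a single incident by ID (placeholder ID).
detail = requests.get(f"{BASE_URL}/api/incidents/1").json()
print(detail)
```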
You can deploy the whole stack using Docker Compose for local testing:
`docker-compose up --build`
- Build Docker Image:
  - `docker build -t your-image-name .`
- Push to Container Registry:
  - `docker push your-image-name`
- Deploy to Kubernetes:
  - Use the Kubernetes manifests in the `deployment/k8s/` folder to deploy on a Kubernetes cluster.
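Assuming the manifests are standard Kubernetes resource files, they can typically be applied with a command along the lines of `kubectl apply -f deployment/k8s/`, adjusted to match how the manifests are organized.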
Content protected. Unauthorized use is prohibited without explicit permission. Contact [email protected] for permission before sharing or adapting this work.