The successful application of semantic segmentation to radiofrequency (RF) spectrograms has significant implications for spectrum sensing and serves as a foundational example of the near-term feasibility of intelligent radio technology.
In this example, we use PyTorch and Lightning to train a segmentation model to identify and differentiate between 5G NR and 4G LTE signals within wideband spectrograms.
Qoherent's mission to drive the creation of intelligent radio technology requires a combination of open-source and proprietary tools. This example, which leverages open-source tools and machine learning frameworks to train on synthetic radio data generated using MATLAB, showcases our commitment to interoperability and our tool-agnostic approach to innovation.
Classification results are comparable to those reported for MathWorks' AI-based network. For more information, please refer to the following article by MathWorks: Spectrum Sensing with Deep Learning to Identify 5G and LTE Signals.
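At a high level, the notebook wraps a DeepLabv3 model with a MobileNetV3 backbone (accessed through torchvision) in a Lightning module and trains it on spectrogram/mask pairs. The sketch below is a simplified illustration of that setup, not the notebook's actual code; the class name, the three-class label convention (e.g., noise, LTE, NR), the loss, and the learning rate are assumptions.

```python
# Simplified sketch of the training setup; illustrative only, not the notebook's exact code.
import torch
import torch.nn.functional as F
import lightning as L
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large


class SpectrogramSegmenter(L.LightningModule):
    """DeepLabv3 with a MobileNetV3 backbone, wrapped for Lightning training (hypothetical name)."""

    def __init__(self, num_classes: int = 3, lr: float = 1e-3):  # assumed: noise / LTE / NR
        super().__init__()
        self.model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=num_classes)
        self.lr = lr

    def forward(self, x):
        # torchvision segmentation models return a dict of logits; "out" holds the main head.
        return self.model(x)["out"]

    def training_step(self, batch, batch_idx):
        spectrograms, masks = batch  # (N, 3, H, W) float images, (N, H, W) integer labels
        loss = F.cross_entropy(self(spectrograms), masks)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```

Training then comes down to constructing a Lightning `Trainer` and calling `fit` with a dataloader of spectrogram/mask pairs; see the notebook for the complete workflow.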
If you found this example interesting or helpful, don't forget to give it a star! ⭐
This example is provided as a Jupyter Notebook. You can run it either locally or in Google Colab.
To run this example locally, you'll need to download the project and dataset and set up a Conda virtual environment. If this seems daunting, we recommend running this example on Google Colab.
Please note that running this example locally requires approximately 10 GB of free disk space; ensure you have sufficient space available before proceeding.
- Ensure that Git and Conda are installed on the computer where you plan to run this example. Additionally, if you'd like to accelerate model training with a GPU, you'll require CUDA.
- Clone this repository to your local computer:
```
git clone https://github.com/qoherent/spectrogram-segmentation.git
```
- Create a Conda environment using the provided `environment.yml` file:
```
conda env create -f environment.yml
```
This will create a new Conda environment named `spectrogram-segmentation` within the Conda installation directory.
- Activate the environment:
```
conda activate spectrogram-segmentation
```
- Download and unpack the spectrum sensing dataset:
```
python download_dataset.py
```
This command will create a new directory named `SpectrumSensingDataset` at the project's root. The MathWorks Spectrum Sensing dataset will be downloaded and unpacked into this directory automatically. (A minimal sketch of loading these spectrogram/mask pairs in PyTorch follows the setup steps below.)
- Register the environment kernel with Jupyter:
```
ipython kernel install --user --name=spectrogram-segmentation
```
- Open the notebook, `spectrogram_segmentation.ipynb`, specifying the `spectrogram-segmentation` kernel:
```
jupyter notebook spectrogram_segmentation.ipynb --MultiKernelManager.default_kernel_name=spectrogram-segmentation
```
- Give yourself a pat on the back - you're all set up and ready to explore the example! For more information on navigating the Jupyter Notebook interface and executing code, please check out this tutorial by the Codecademy Team: How To Use Jupyter Notebooks.
Depending on your system specifications and the availability of a CUDA-enabled GPU, running this example locally may take several minutes. If a cell is taking too long to execute, you can interrupt its execution by clicking the "Kernel" menu and selecting "Interrupt Kernel", or by pressing Ctrl + C in the terminal where Jupyter Notebook is running.
- After you finish exploring, consider removing the dataset from your system and deleting the Conda environment to free up space. You can delete the Conda environment using the following command:
```
conda env remove --name spectrogram-segmentation
```
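As promised above, here is a rough sense of how the downloaded data can be fed to a model: a minimal, hypothetical PyTorch Dataset for paired spectrogram images and per-pixel label masks. The file discovery, image format, and label encoding are assumptions; the notebook contains the actual data-loading code.

```python
# Illustrative only: the actual file layout and loading logic live in the notebook.
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class SpectrogramMaskDataset(Dataset):
    """Pairs spectrogram images with per-pixel label masks (paths supplied by the caller)."""

    def __init__(self, image_paths, mask_paths):
        assert len(image_paths) == len(mask_paths)
        self.image_paths = list(image_paths)
        self.mask_paths = list(mask_paths)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Spectrogram as a (3, H, W) float tensor; mask as (H, W) integer class indices.
        image = np.array(Image.open(self.image_paths[idx]).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.array(Image.open(self.mask_paths[idx]), dtype=np.int64)
        return torch.from_numpy(image).permute(2, 0, 1), torch.from_numpy(mask)
```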
Coming soon: Don't want the hassle of downloading the project and dataset and setting up a Conda environment? We'll be sharing the notebook on Google Colab: Spectrogram Segmentation.
We welcome contributions from the community! Whether it's an enhancement, bug fix, or improved explanation, your input is valuable. For significant changes, or if you'd like to prepare a separate tutorial, kindly contact us beforehand.
If you encounter any issues or would like to report a security vulnerability, please submit a bug report to the GitHub Issues page here.
Has this example inspired a project or research initiative related to intelligent radio? Please get in touch; we'd love to collaborate with you! 📡🚀
Finally, be sure to check out our open-source project: RIA Core (Coming soon!).
This work is a product of the collaborative efforts of the Qoherent team. Of special mention are Wan, Madrigal, Dimitrios, and Michael.
The dataset used in this example was prepared by MathWorks and is publicly available here. For more information on how this dataset was generated, or to generate further spectrum data, please refer to MathWorks' article on spectrum sensing. For more information about Qoherent's use of MATLAB to accelerate intelligent radio research, check out our customer story.
The DeepLabv3 models used in this example were initially proposed by Chen et al. and are further discussed in their 2017 paper titled 'Rethinking Atrous Convolution for Semantic Image Segmentation'. The MobileNetV3 backbone used in this example was developed by Howard et al. and is further discussed in their 2019 paper titled 'Searching for MobileNetV3'. Models were accessed through `torchvision`.
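As a quick pointer for anyone adapting these models outside the notebook: torchvision's segmentation models return a dictionary of logits, so per-pixel class predictions are obtained with an argmax over the class dimension. A minimal sketch, assuming a 3-class model and an arbitrary input size:

```python
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# Hypothetical: a 3-class model (e.g., noise / LTE / NR) and an assumed 256x256 input size.
model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=3).eval()
spectrograms = torch.rand(1, 3, 256, 256)  # (N, C, H, W) batch of spectrogram images

with torch.no_grad():
    logits = model(spectrograms)["out"]  # (N, num_classes, H, W) logits
    predictions = logits.argmax(dim=1)   # (N, H, W) per-pixel class indices
```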
A special thanks to the PyTorch and Lightning teams for providing the foundational machine learning frameworks used in this example.