This repository provides a tentative implementation of Transformer-based Reinforcement Learning Hyper-parameter Optimization (TRL-HPO), which combines transformers with actor-critic reinforcement learning. The code documentation and variable definitions mirror the content of the manuscript published in IEEE Internet of Things Magazine.
The link to the paper (arxiv): https://arxiv.org/abs/2403.12237
The link to the paper (ieee): https://ieeexplore.ieee.org/document/10570354/
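To give a flavor of the idea, below is a minimal, illustrative sketch of a transformer encoder feeding separate actor and critic heads, assuming PyTorch. The class name, layer sizes, and state/action encodings are hypothetical and are not taken from this repository or the paper; refer to the scripts listed below for the actual implementation.

```python
# Hypothetical sketch only: a transformer encoder over the trajectory of
# hyper-parameter decisions, with actor (policy) and critic (value) heads.
import torch
import torch.nn as nn

class TransformerActorCritic(nn.Module):
    def __init__(self, state_dim=16, n_actions=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)  # project state tokens to model width
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.actor = nn.Linear(d_model, n_actions)  # logits over candidate hyper-parameter choices
        self.critic = nn.Linear(d_model, 1)         # state-value estimate

    def forward(self, states):
        # states: (batch, seq_len, state_dim) -- the decisions made so far
        h = self.encoder(self.embed(states))
        h_last = h[:, -1, :]                        # summarize with the last token
        return torch.distributions.Categorical(logits=self.actor(h_last)), self.critic(h_last)

# Example: sample an action and read its value estimate for a dummy trajectory.
if __name__ == "__main__":
    model = TransformerActorCritic()
    policy, value = model(torch.randn(1, 5, 16))
    action = policy.sample()
    print(action.item(), value.item())
```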
The functional scripts are as follows (an example run sequence is shown after the list):
- Run `run.py` to train the model.
- Run `analyze_results.py` to evaluate the trained model.
- Run `explainability_results.py` to understand the model's results.
- Run `flops_count.py` to output the FLOPs of the model.
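A typical end-to-end run might look like the following, assuming the scripts are invoked from the repository root with their default arguments (check each script for its configurable options):

```bash
python run.py                     # train the TRL-HPO model
python analyze_results.py         # evaluate the trained model
python explainability_results.py  # inspect and explain the model's results
python flops_count.py             # report the model's FLOPs
```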
The requirements are included in the `requirements.txt` file. To install the packages listed in this file, run `pip install -r requirements.txt`.
Please feel free to contact me with any questions or research opportunities.
- Email: [email protected]
- GitHub: https://github.com/ibrahimshaer and https://github.com/Western-OC2-Lab
- LinkedIn: Ibrahim Shaer
- Google Scholar: Ibrahim Shaer and OC2 Lab