MoE-LLM: implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"
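
For context, the core idea of Mixture-of-Depths is that each transformer block only processes a fixed fraction of the tokens in a sequence: a lightweight router scores every token, the top-k tokens (by capacity) go through the block, and the rest bypass it via the residual stream, with the router score scaling the block output so routing remains differentiable. The sketch below is a minimal, illustrative PyTorch version of that routing step, not this repository's API: the class name `MoDBlock`, the `capacity` default, the sigmoid gating, and the assumption that the wrapped block returns only the residual update are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class MoDBlock(nn.Module):
    """Illustrative Mixture-of-Depths wrapper around a transformer block.

    Assumes `block` maps (B, k, D) -> (B, k, D) and returns the residual
    update (not x + update) for the tokens it is given.
    """

    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.125):
        super().__init__()
        self.block = block            # wrapped attention + MLP sub-block (assumed interface)
        self.router = nn.Linear(d_model, 1)
        self.capacity = capacity      # fraction of tokens processed per block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, _ = x.shape
        k = max(1, int(self.capacity * seq_len))

        # Score every token and pick the top-k per sequence.
        scores = self.router(x).squeeze(-1)              # (B, T)
        topk_scores, topk_idx = scores.topk(k, dim=-1)   # (B, k)

        # Gather the selected tokens and run only them through the block.
        batch_idx = torch.arange(bsz, device=x.device).unsqueeze(-1)  # (B, 1)
        selected = x[batch_idx, topk_idx]                # (B, k, D)
        update = self.block(selected)                    # (B, k, D)

        # Scale the update by the (gated) router score so routing stays
        # differentiable, then scatter it back into the residual stream.
        out = x.clone()
        out[batch_idx, topk_idx] = selected + torch.sigmoid(topk_scores).unsqueeze(-1) * update
        return out
```

Note that the paper also has to handle the fact that top-k selection over a whole sequence is non-causal at autoregressive sampling time (e.g. via an auxiliary per-token router predictor); this sketch omits that and only shows the training-time routing pattern.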