A suite of PySpark, Pandas, and general pipeline utils for Reproducible Data Science and Analysis (RDSA) projects.
The RDSA team sits within the Economic Statistics Change Directorate, and uses cutting-edge data science and engineering skills to produce the next generation of economic statistics. Current priorities include overhauling legacy systems and developing new systems for key statistics. More information about work at RDSA can be found here: Using Data Science for Next-Gen Statistics.
`rdsa-utils` is a Python codebase built with Python 3.8 and higher, and uses `setup.py`, `setup.cfg`, and `pyproject.toml` for dependency management and packaging.
- Python 3.8 or higher
`rdsa-utils` is available for installation via PyPI and can also be found on GitHub Releases for direct downloads and version history.
To install via `pip`, simply run:

```bash
pip install rdsa-utils
```
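To verify the installation, a quick check like the sketch below should work (it uses only the standard-library `importlib.metadata`, available from Python 3.8):

```python
# Verify that rdsa-utils is installed and report the installed version.
from importlib.metadata import version

print(version("rdsa-utils"))  # prints the version string from the package metadata
```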
The `rdsa-utils` package is designed to make it easy to work with different platforms like Cloudera Data Platform (CDP) and Google Cloud Platform (GCP), as well as handle general Python tasks. Here's a breakdown of how everything is organised:
- **General Utilities (Top-Level):** These are tools you can use for any project, regardless of the platform you're working on. They focus on common Python, PySpark, and Pandas tasks.
    - 📂 Helpers: Handy functions that simplify working with Python and PySpark.
    - 📂 IO: Functions for handling input and output, like reading configurations or saving results.
- **Platform-Specific Utilities:**
    - **CDP (Cloudera Data Platform):**
        - 📂 Helpers: Functions that help you work with tools supported by CDP, such as HDFS, Impala, and AWS S3.
        - 📂 IO: Input/output functions specifically for CDP, such as managing data and logs in CDP environments.
    - **GCP (Google Cloud Platform):**
        - 📂 Helpers: Functions to help you interact with GCP tools like Google Cloud Storage and BigQuery.
        - 📂 IO: Input/output functions for managing data with GCP services.
This structure keeps the tools for each platform separate, so you can easily find what you need, whether you're working in a cloud environment or on general Python tasks.
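To make the layout concrete, here is a minimal sketch of how imports might map onto this structure; the module paths below are assumptions inferred from the layout described above, not a confirmed API, so please check the MkDocs-generated documentation for the actual names:

```python
# Illustrative sketch only: the module paths here are assumptions based on
# the package layout described above; consult the rdsa-utils documentation
# for the actual API.

# General (top-level) utilities: platform-agnostic Python/PySpark helpers.
from rdsa_utils.helpers import pyspark as pyspark_helpers  # hypothetical path

# CDP-specific helpers, e.g. for working with HDFS, Impala, or AWS S3.
from rdsa_utils.cdp.helpers import s3_utils  # hypothetical path

# GCP-specific helpers, e.g. for Google Cloud Storage and BigQuery.
from rdsa_utils.gcp.helpers import gcp_utils  # hypothetical path
```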
Our documentation is automatically generated using GitHub Actions and MkDocs. For an in-depth understanding of `rdsa-utils`, how to contribute, and more, please refer to our MkDocs-generated documentation.
While `rdsa-utils` provides essential tools for data processing, it's just one part of the broader development process needed to build and maintain a robust, high-quality codebase. Following best practices and using the right tools are crucial for success.
We highly recommend the following resources on creating Reproducible Analytical Pipelines (RAP). They cover version control, modular code development, unit testing, and peer review, all of which are essential for developing these pipelines:
- **Reproducible Analytical Pipelines (RAP) Resource** - This resource offers an overview of Reproducible Analytical Pipelines, covering benefits, case studies, and guidelines on building a RAP. It discusses minimising manual steps, using open-source software like R or Python, enhancing quality assurance through peer review, and ensuring auditability with version control. It also addresses challenges and considerations for implementing RAPs, such as data access restrictions or confidentiality, and underscores the importance of collaborative development.
- **Quality Assurance of Code for Analysis and Research** - This book details methods and practices for ensuring high-quality coding in research and analysis, including unit testing and peer review.
- **PySpark Introduction and Training Book** - An introduction to using PySpark for large-scale data processing.
Unless stated otherwise, the codebase is released under the MIT License. This covers both the codebase and any sample code in the documentation.
The documentation is © Crown copyright and available under the terms of the Open Government Licence v3.0.