Building Controls Simulator

Package for running control loop co-simulations and generating building models using EnergyPlus.

For more information on EnergyPlus whole-building simulation, see https://energyplus.net/.

Installation and Setup

To set up this repo, open your bash terminal and follow the commands below. Ideally use SSH for git access; if you haven't set that up, you can use HTTPS.

git clone git@github.com:ecobee/building-controls-simulator.git
# or, with HTTPS:
# git clone https://github.com/ecobee/building-controls-simulator.git
cd building-controls-simulator

Note for Windows users: It is recommended that you clone the repository to a directory that is as short as possible and does not contain spaces or other special characters. For example, clone to c:\devel\building-controls-simulator.

Minimum Docker versions

$ docker --version
Docker version 19.03.13, build 4484c46d9d

$ docker-compose --version
docker-compose version 1.27.4, build 40524192

Quick Start Guide

This section contains the minimal set of commands to get the examples and tests working. For an explanation of these commands and troubleshooting, see the full installation and setup sections below.

Set up the .template files

Copy the template files:

cp .env.template .env
cp docker-compose.yml.template docker-compose.yml
cp .test.env.template .test.env

Edit in .env:

...
LOCAL_PACKAGE_DIR=<where you cloned the repo>
...

for example:

...
LOCAL_PACKAGE_DIR=/Users/tom.s/projects/building-controls-simulator
...

Run with docker-compose

First, download the latest pre-built container image from Docker Hub:

docker pull tstesco/building-controls-simulator:0.5.0-alpha

Start container and jupyter-lab server:

docker-compose up

You can now run notebooks at http://localhost:8888/lab; start with demo_LocalSource.ipynb.

Local Docker Setup

You will need Docker Desktop installed; if you do not have it, see https://www.docker.com/. The Docker Compose CLI is used to manage the containers and is included by default in the desktop versions of Docker on all systems.

Using Docker-Compose

Required minimal versions:

$ docker --version
Docker version 19.03.13, build 4484c46d9d

$ docker-compose --version
docker-compose version 1.27.4, build 40524192

docker-compose.yml defines the Dockerfile and image to use, the ports to map, and the volumes to mount. It also specifies the env file .env, which injects environment variables that are needed both to build the container and inside the container. As a user, all you need to know is that any API keys or GCP variables are stored here (safely), the default EnergyPlus version is 9-4-0, and this can be changed later very easily.
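
For orientation, the relevant parts of the file look roughly like the sketch below (a simplified sketch, not the full template; see docker-compose.yml.template for the authoritative contents):

# simplified sketch of the docker-compose.yml fields described above
services:
  building-controls-simulator:
    image: ${DOCKERHUB_REPOSITORY}/${DOCKER_IMAGE}:${VERSION_TAG}
    env_file: .env
    ports:
      - "127.0.0.1:8888:8888"   # jupyter-lab
    volumes:
      - ${LOCAL_PACKAGE_DIR}:${DOCKER_PACKAGE_DIR}:consistent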

Copy the template files and fill in the variables mentioned below:

cp .env.template .env
cp docker-compose.yml.template docker-compose.yml
# and if you want to run the tests
# .test.env does not need to be edited, unless you want to inject creds
cp .test.env.template .test.env

Note: docker-compose behaviour may differ slightly on your host OS (Windows, macOS, Linux) with respect to how the expansion of environment variables works. If the base docker-compose.yml file fails on interpreting variables, try inlining those specific variables, e.g. replacing ${LOCAL_PACKAGE_DIR} in docker-compose.yml with the directory you cloned the repo to, for example /Users/tom.s/projects/building-controls-simulator.

Edit in .env:

...
LOCAL_PACKAGE_DIR=<where you cloned the repo>
...

for example:

...
LOCAL_PACKAGE_DIR=/Users/tom.s/projects/building-controls-simulator
...

Now you're ready to build and launch the container!

If you delete the Docker image, just go through the setup here again to rebuild it.

Pull Docker image from Dockerhub

You can access the latest release image from https://hub.docker.com/r/tstesco/building-controls-simulator/tags via the CLI:

docker pull tstesco/building-controls-simulator:0.5.0-alpha

If you are using the Docker Hub repository, make sure that your .env file contains the line

DOCKERHUB_REPOSITORY=tstesco

This allows docker-compose.yml to find and use the correct image. Change this line in docker-compose.yml if you want to use a locally built image.

    # change this if you want to build your own image
    image: ${DOCKERHUB_REPOSITORY}/${DOCKER_IMAGE}:${VERSION_TAG}

to

    # change this if you want to build your own image
    image: ${DOCKER_IMAGE}:${VERSION_TAG}
Note: Locally built Docker images may use up to 10 GB of disk space - make sure you have this available before building.

The size of the container image can be reduced to below 5 GB by not installing every EnergyPlus version in scripts/setup/install_ep.sh and not downloading all IECC 2018 IDF files in scripts/setup/download_IECC_idfs.sh. Simply comment out the versions/files you do not need in the respective files.

Run BCS with Jupyter Lab Server (recommended: option 1)

A jupyter-lab server is set up to run when the container is brought up by docker-compose up. It is accessible locally at http://localhost:8888/lab.

docker-compose up will also build the image if it does not exist already, and then run scripts/setup/jupyter_lab.sh.

Stopping or exiting the container will also shutdown the jupyter-lab server.

docker-compose up

The container can be shutdown using another terminal on the host via:

docker-compose down

Configure EnergyPlus version

Using this flow of docker-compose up and docker-compose down, you can modify the scripts/setup/.bashrc line that selects the EnergyPlus version. The minimum supported version is 8-9-0; the default set by .bashrc is 9-4-0.

. "${PACKAGE_DIR:?}/scripts/epvm.sh" "<x-x-x>"

Run BCS with interactive bash shell (alternative: option 2)

The docker-compose run command does most of the setup and can be used again to run the container after it is built. The --service-ports flag should be set to allow access to jupyter-lab from your host machine; see https://docs.docker.com/compose/reference/run/.

# this command runs the container and builds it if it cannot be found (only need to do this once!)
# this will take ~30 minutes, mostly to download all desired versions of EnergyPlus
# perfect opportunity for a coffee, water, or exercise break
docker-compose run --service-ports building-controls-simulator bash

# select the version of EnergyPlus to use in current environment, this can be changed at any time
# EnergyPlus Version Manager (epvm) script changes env variables and symbolic links to hot-swap version
# by default .bashrc sets version to 9-4-0.
. scripts/epvm.sh 9-4-0

# you're done with container setup! now exit container shell or just stop the docker container
# unless you specifically delete this docker container it can be restarted with the setup already done
exit    # first exit to get out of pipenv shell
exit    # second exit to get out of container shell

There is also a background script, scripts/setup/jupyter_lab_bkgrnd.sh, if you would like to run the jupyter-lab server from a bash tty and keep your prompt available.

docker-compose run --service-ports building-controls-simulator bash
# in container, enter virtual env
pipenv shell
# then start jupyter lab server in background
. scripts/setup/jupyter_lab_bkgrnd.sh

Open bash shell in running container

If you've run the container with docker-compose up or docker-compose run and need an interactive bash shell inside it, look up the container ID with docker ps, then run:

docker exec -it <running container id> bash

Authentication with GCP

GCP credentials are not required to use the BCS, but they make accessing data much easier. If you do not have credentials but have local access to data, see the section below.

First, authenticate to GCP normally, e.g. using gcloud auth. Then copy ${GOOGLE_APPLICATION_CREDENTIALS} into the container to access GCP resources with the same permissions.
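
For example, application default credentials can typically be generated with the gcloud CLI (GOOGLE_APPLICATION_CREDENTIALS should then point at the resulting JSON file):

# on the host machine: generate application default credentials
gcloud auth application-default login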

On host machine:

# on local machine copy credentials to container
docker cp ${GOOGLE_APPLICATION_CREDENTIALS} <container ID>:/home/bcs/.config/application_default_credentials.json

Within container:

# in container make sure bcs user can read credentials
sudo chown "bcs":"bcs" ~/.config/application_default_credentials.json

Using locally cached data

Instead of using GCP access to download data, you can use locally cached DYD files following the format data/input/local/<hashed ID>.csv.zip. These data files are the time series measurements for an individual building.

Simply save the files using this format and you can use them in local simulations.
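
For example, with a hypothetical hashed ID:

# hypothetical example: place a downloaded time series file where the local source expects it
cp ~/Downloads/2df6959cdf502c23f04f3155758d7b678af0c631.csv.zip data/input/local/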

See src/python/BuildingControlsSimulator/DataClients/test_LocalSource.py and notebooks/demo_LocalSource.ipynb for example usage.

Docker Issues

Some issues that have occurred on different machines are:

Build issues

  • incompatible versions of Docker and Docker Compose (see requirements above).
  • .env variables unset: make sure all .env variables not specified in .env.template are matched correctly to your host system.
  • windows line endings in .env file.
  • apt-get install failing or other packages not being found by apt-get
    • Verify network connection and build container again
  • jupyter lab build failing
    • try changing the jupyter lab build command in the Dockerfile to jupyter lab build --dev-build=False --minimize=False (see the sketch after this list).
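
The relevant Dockerfile step would then look roughly like this (a sketch; the exact command in your Dockerfile may differ):

    RUN jupyter lab build --dev-build=False --minimize=False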

Troubleshooting file permissions issues

  1. After switching branches on the host machine, mounted volumes may give permissions errors when access is attempted within the Docker container.
  2. After making any changes to Docker, restart the Docker Desktop daemon.
  3. Even if you didn't make any changes, stopping your container, restarting your terminal, restarting the Docker daemon, and then restarting the container can alleviate issues.

Usage

Example Notebook (Hello World): demo_LocalSource.ipynb

Open notebooks/demo_LocalSource.ipynb in jupyter-lab and run all cells. This demo shows the usage of the Building Controls Simulator; it will download necessary data from www.energyplus.net and generate input data for the simulation.

Example Notebook with Donate Your Data (DYD): demo_GCSDYDSource.ipynb

Support for ecobee Donate Your Data (DYD) is included via the GCSDYDSource. For example usage, see notebooks/demo_GCSDYDSource.ipynb. The GCSDYDSource supports using a local cache of the data files; simply copy them using the format data/cache/GCSDYD/<hash ID>.csv.zip, for example:

$ ls data/cache/GCSDYD
2df6959cdf502c23f04f3155758d7b678af0c631.csv.zip
6e63291da5427ae87d34bb75022ee54ee3b1fc1a.csv.zip
4cea487023a11f3bc16cc66c6ca8a919fc6d6144.csv.zip
f2254479e14daf04089082d1cd9df53948f98f1e.csv.zip
...

For information about the ecobee DYD program please see: https://www.ecobee.com/donate-your-data/.

Development setup - Using VS Code Remote Containers

The VS Code IDE is highly recommended for development: https://code.visualstudio.com/download. If you're not familiar with VS Code for Python development, check out the PyCon talk and guide at https://pycon.switowski.com/01-vscode/.

The Remote Containers extension adds the Remote Explorer tool bar. This can be used to inspect and connect to available Docker containers.

  1. docker-compose up to run the building-controls-simulator container.
  2. Right click on the building-controls-simulator container (it will be under "Other Containers" the first time) and select "Attach to Container". This will install VS Code inside the container.
  3. Install necessary extensions within the container, e.g. the "Python" extension. The container will now be accessible in the "Dev Containers" section of Remote Explorer, so the installation only occurs once per container.
  4. Use the VS Code terminal to build and run tests, and edit files in VS Code as you would on your host machine.

Deleting and rebuilding the container

Should something go wrong with the container, or should it experience an issue during the build, remove the broken containers and images with these Docker commands:

# first list all containers
docker ps -a

# stop containers if they are still running and inaccessible
docker stop <container ID>

# remove containers related to failed build
docker rm <container ID>

# list docker images
docker images

# remove docker image
docker rmi <image ID>

Run container with interactive bash tty instead of auto-starting jupyter-lab

Start bash tty in container:

# --rm removes container on exit
# --service-ports causes defined ports to be mapped
# --volume maps volumes individually
source .env
docker-compose run \
    --rm \
    --service-ports \
    --volume=${LOCAL_PACKAGE_DIR}:${DOCKER_PACKAGE_DIR}:consistent \
    building-controls-simulator bash

The advantage over using docker run (though very similar) is the automatic sourcing of the .env environment variables and the port mapping configured in docker-compose.yml.

Run container without docker-compose

Keep in mind this will not mount volumes.

docker run -it -p 127.0.0.1:8888:8888 <IMAGE_ID> bash
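
If you do need the package directory mounted without docker-compose, the volume can be passed explicitly (a sketch using the .env variables described above):

# mount the repo into the container manually (values come from your .env file)
source .env
docker run -it -p 127.0.0.1:8888:8888 -v "${LOCAL_PACKAGE_DIR}:${DOCKER_PACKAGE_DIR}" <IMAGE_ID> bash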

Jupyterlab needs to be run with:

jupyter-lab --ip="0.0.0.0" --no-browser

Run the tests

Test files are found in the src/python directory alongside the source code and are identified by the naming convention test_*.py. The pytest framework is used for testing; see https://docs.pytest.org/en/stable/ for details.

Similarly to the .env file, you can set up .test.env from .test.env.template. Then simply run the test_env_setup.sh script to set up the test environment.

. scripts/setup/test_env_setup.sh

Finally, run all the tests:

python -m pytest src/python
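
To run a single test module, e.g. the LocalSource tests referenced above:

python -m pytest src/python/BuildingControlsSimulator/DataClients/test_LocalSource.py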

Changing dependency versions

The dependencies are pinned to exact versions in the requirements.txt file. To change this, simply change the line (approximately line 124) in the Dockerfile from:

    && pip install --no-cache-dir -r "requirements.txt" \
    # && pip install --no-cache-dir -r "requirements_unfixed.txt" \

to

    # && pip install --no-cache-dir -r "requirements.txt" \
    && pip install --no-cache-dir -r "requirements_unfixed.txt" \

This will install the latest satisfying versions of all dependencies. After testing that the dependencies are working, freeze them into a new requirements.txt file:

pip freeze > requirements.txt

Several dependencies are installed from source, so these must be removed from the requirements.txt file (see the sketch after this list). These may include:

PyFMI
Assimulo
hpipm-python
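
A minimal sketch of removing them from the frozen file, assuming they appear under these names in the pip freeze output:

# strip source-installed packages from the frozen requirements (case-insensitive match)
grep -i -v -E "^(pyfmi|assimulo|hpipm-python)" requirements.txt > requirements.txt.tmp
mv requirements.txt.tmp requirements.txt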

Then change line 124 in the Dockerfile back to use the requirements.txt file. Note that when building the image using the requirements.txt file, the pinned dependencies will be added to the Pipfile; discard those changes.

Making a Release

  1. Commit changes to master, referencing the new version number.
  2. Increment version number in .env.template and setup.py. Use semver (https://semver.org/) convention for release versioning.
  3. On GitHub, use the releases/new workflow (https://github.com/ecobee/building-controls-simulator/releases/new).
  4. Build docker image locally.
  5. Run tests.
  6. Tag release
    docker tag <IMAGE_ID> tstesco/building-controls-simulator:<VERSION>
  7. Push docker image to dockerhub (https://hub.docker.com/repository/docker/tstesco/building-controls-simulator)
    docker push tstesco/building-controls-simulator:<VERSION>

Weather Data

There are several data sources that can be used. The WeatherSource provides methods to get weather data required for simulations and preprocess it for use in simulation.

The EnergyPlus Weather (EPW) format is used; it is described in this NREL technical report: https://www.nrel.gov/docs/fy08osti/43156.pdf

EnergyPlus EPW Data

The simplest data source for EPW-formatted TMY data is the EnergyPlus website: https://energyplus.net/weather.

NREL NSRDB

The current NSRDB has TMY and PSM3 data available through its developer API. This, however, does not contain all fields required by the EPW format, so those fields must be backfilled with archive TMY3 data from the nearest weather station.

NSRDB PSM3: https://developer.nrel.gov/docs/solar/nsrdb/psm3-download/
NSRDB PSM3 TMY: https://developer.nrel.gov/docs/solar/nsrdb/psm3-tmy-download/

For potential future integration.

Configuration

The .bashrc at scripts/setup/.bashrc can be configured similarly to any .bashrc file. It simply runs commands (rc) whenever an interactive bash shell is opened.

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.
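
If they are not already available in your environment, they can typically be installed with pip:

pip install sphinx sphinx_rtd_theme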

cd docs/
make clean
make html

The html files are then available in docs/build/html. Open the root file index.html in a web browser to view them locally.

External Tools

Contributing

See notes on how to develop this project in CONTRIBUTING.md.

Communication

GitHub issues: bug reports, feature requests, install issues, RFCs, thoughts, etc.

License

Building Controls Simulator is licensed under a BSD-3-clause style license found in the LICENSE file.
