Demo Setup
Joongi Kim edited this page Feb 21, 2018
This meta-repository provides a docker-compose configuration to fire up a single-node Backend.AI cluster on your PC, with the API endpoint served at http://localhost:8081.
Note: This demo setup does not support GPUs.
All you have to do is:

- Clone this repository
- Run `docker-compose up -d` in the cloned directory
- Pull some kernel images to try out
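Put together, the first two steps might look like the following sketch. The repository URL is an assumption (this page does not name it), so substitute the actual URL of the meta-repository you are cloning:

```shell
# Assumed repository URL -- substitute the actual meta-repository URL.
git clone https://github.com/lablup/backend.ai.git
cd backend.ai

# Start the single-node demo cluster in the background.
docker-compose up -d

# Optional: check that the service containers came up.
docker-compose ps
```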
To pull kernel images, run `docker pull` with your host Docker daemon:

```console
$ docker pull lablup/kernel-python:latest
$ docker pull lablup/kernel-python-tensorflow:latest-dense
$ docker pull lablup/kernel-c:latest
```
By default, this demo cluster already has metadata and alias information for all publicly available Backend.AI kernels. You don't have to register the pulled kernel information to the cluster manually; just pull the images you want to try out.
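If you want to grab several images at once, the pulls can be scripted. This is a dry-run sketch: it only prints the `docker pull` commands, so remove the `echo` to actually pull them with your host Docker daemon.

```shell
# The demo kernel images mentioned above; edit this list as needed.
images="lablup/kernel-python:latest
lablup/kernel-python-tensorflow:latest-dense
lablup/kernel-c:latest"

# Dry run: print each command instead of executing it.
# Remove 'echo' to really pull the images.
for img in $images; do
  echo docker pull "$img"
done
```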
To access this local cluster, set the following configuration in your favorite Backend.AI client:

```console
$ export BACKEND_ENDPOINT="http://localhost:8081"
$ export BACKEND_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
$ export BACKEND_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```
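To avoid retyping these in every new shell session, you can save them to a small sourceable file. The filename here is just an example, and these are the demo fixture keys from above, never real credentials:

```shell
# Write the demo credentials to a file we can 'source' later.
cat > backend-demo-env.sh <<'EOF'
export BACKEND_ENDPOINT="http://localhost:8081"
export BACKEND_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
export BACKEND_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
EOF

# Load the variables into the current shell session.
. ./backend-demo-env.sh
echo "$BACKEND_ENDPOINT"
```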
With our official Python client, you can do:

```console
$ backend.ai run python -c "print('hello world')"
✔ Session 9c737d84724173354fa10445d0b35fe0 is ready.
hello world
✔ Finished. (exit code = 0)
```
WARNING: This demo configuration is highly insecure. DO NOT USE in production!
- When launching a kernel, it says "Service Unavailable"!
  - Each image has different default resource requirements. For example, TensorFlow images require 8 GiB or more of RAM for your Docker daemon.
- What does the "dense" tag mean in the TensorFlow kernel images?
  - Images with the "dense" tag are optimized for shared multi-tenancy environments. There is no difference in functionality.