Ansible AI Connect

This repository contains the application that serves Ansible task suggestions for consumption by the Ansible VS Code extension.

Getting started

Install dependencies

  1. Podman Desktop
  2. Ollama
  3. VS Code and the Ansible extension

Start the model server

export OLLAMA_HOST=0.0.0.0
ollama serve &
ollama run mistral:instruct
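
Before wiring the service to it, you can confirm the model server is reachable; Ollama's /api/tags endpoint lists the locally available models (assuming the default port 11434):

# List models known to the local Ollama server
curl http://localhost:11434/api/tags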

Set environment variables

Populate the tools/docker-compose/.env file with the following values:

DEPLOYMENT_MODE="upstream"
SECRET_KEY="somesecretvalue"
ENABLE_ARI_POSTPROCESS="False"
WCA_SECRET_BACKEND_TYPE="dummy"
# configure model server
ANSIBLE_AI_MODEL_MESH_API_URL="http://host.containers.internal:11434"
ANSIBLE_AI_MODEL_MESH_API_TYPE="ollama"
ANSIBLE_AI_MODEL_MESH_MODEL_ID="mistral:instruct"

Start service and dependencies

podman compose -f tools/docker-compose/compose.yaml --env-file ./.env up

Configure VS Code to connect to your local machine

  1. Navigate to File (or Code on macOS) > Preferences (or Settings on macOS) > Settings
  2. Search for Lightspeed
  3. For both User and Workspace, set the following setting values:
    1. Ansible > Lightspeed: Enabled ☑️
    2. Ansible > Lightspeed: URL: http://localhost:8000
    3. Ansible > Lightspeed: Suggestions: Enabled ☑️

Stop service and dependencies

podman compose -f tools/docker-compose/compose.yaml down

Development

Project structure

Path                              Description
ansible_ai_connect                Service backend application
ansible_ai_connect_admin_portal   Admin portal application
ansible_ai_connect_chatbot        Chatbot application

Service configuration

Secret storage

For most development usages, you can skip the call to AWS Secrets Manager and always use the dummy WCA_SECRET_BACKEND_TYPE by setting the following in your tools/docker-compose/.env file:

WCA_SECRET_BACKEND_TYPE="dummy"
WCA_SECRET_DUMMY_SECRETS="11009103:valid"

In this example, 11009103 is your organization id and the model is set to valid. You can also use the following syntax to set both the key and the model id: WCA_SECRET_DUMMY_SECRETS='11009103:ibm_api_key<|sepofid|>model_id'

For deployment and RH SSO integration test/development, add the following to your tools/docker-compose/.env file:

DEPLOYMENT_MODE=saas
WCA_SECRET_MANAGER_ACCESS_KEY=<access-key>
WCA_SECRET_MANAGER_KMS_KEY_ID=<kms-key-id>
WCA_SECRET_MANAGER_PRIMARY_REGION=us-east-2
WCA_SECRET_MANAGER_REPLICA_REGIONS=us-west-1
WCA_SECRET_MANAGER_SECRET_ACCESS_KEY=<secret-access-key>

See here for details.

Admin Portal

This repository also contains a React/TypeScript webapp for the "Admin Portal". It is located in the ansible_ai_connect_admin_portal directory, and further details can be found in ansible_ai_connect_admin_portal/README.md. If you wish to run the "Admin Portal" locally, it is important to read those instructions first.

Chatbot

The ansible_ai_connect_chatbot directory contains a React/TypeScript webapp for the "Chatbot" UI.

Refer to ansible_ai_connect_chatbot/README.md for further details.

Debugging

The service can be run in debug mode by exporting the variable DEBUG=True or adding it to the command line.
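
For example (assuming the compose file forwards DEBUG to the service container):

export DEBUG=True
# or inline on the command line:
DEBUG=True podman compose -f tools/docker-compose/compose.yaml --env-file ./.env up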

The Django service listens on http://127.0.0.1:8000.

Note that there is no pytorch service defined in the docker-compose file. You should adjust the ANSIBLE_AI_MODEL_MESH_API_URL configuration key to point to an existing service.

To interact with the WCA key management API, or use WCA commercial inference locally, you need to add the following variables to your environment file:

WCA_SECRET_MANAGER_ACCESS_KEY=<AWS access key>
WCA_SECRET_MANAGER_SECRET_ACCESS_KEY=<AWS secret access key>
WCA_SECRET_MANAGER_KMS_KEY_ID=<KMS key id or alias>
WCA_SECRET_MANAGER_PRIMARY_REGION=us-east-2
WCA_SECRET_MANAGER_REPLICA_REGIONS=us-west-1

The AWS key and secret key must belong to a user who has both the AnsibleWisdomWCASecretsReader and AnsibleWisdomWCASecretsWriter policies.

The KMS secret needs to be either a multi-region secret (when using the id) or a secret with the same name in the primary and replica regions (when using the alias).

Note: when using a KMS key alias, prefix with alias/<actual alias>.
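
For example, with a hypothetical alias my-wisdom-key:

WCA_SECRET_MANAGER_KMS_KEY_ID=alias/my-wisdom-key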

Refer to the set up document for the AWS accounts and secrets.

Deploy the service via OpenShift S2I

oc new-build --strategy=docker --binary --name wisdom-service
oc start-build wisdom-service --from-dir . --exclude='(^|\/)(.git|.venv|.tox|model)(\/|$)' --wait=true
oc new-app wisdom-service

Testing the completion API

The sample request below tests the task suggestion prediction API provided by the Django application. This is the same request the VS Code extension will make.

Request:

# Post a request using curl
curl -X 'POST' \
  'http://127.0.0.1:8000/api/v0/ai/completions/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "---\n- hosts: all\n  tasks:\n  - name: Install nginx and nodejs 12 Packages\n"
    }'

Response:

{
  "predictions": [
    "- name: ansible Convert instance config dict to a list\n      set_fact:\n        ansible_list: \"{{ instance_config_dict.results | map(attribute='ansible_facts.instance_conf_dict') | list }}\"\n      when: server.changed | bool\n"
  ]
}
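
If authentication is enabled in your environment, the same request must carry an access token (see the Authenticating with the completion API section below); a sketch assuming an OAuth2 Bearer token:

curl -X 'POST' \
  'http://127.0.0.1:8000/api/v0/ai/completions/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <your-token>' \
  -d '{
    "prompt": "---\n- hosts: all\n  tasks:\n  - name: Install nginx and nodejs 12 Packages\n"
    }'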

Using pre-commit

Pre-commit should be used before pushing a new PR. To use pre-commit, you need to first install the pre-commit package and its dependencies by running:

pip install -r requirements-dev.txt

To install pre-commit into your git hooks and run the checks on every commit, run the following each time you clone this repo:

pre-commit install

To update the pre-commit config to the latest versions of the hook repositories and run the pre-commit checks across all files, run:

pre-commit autoupdate && pre-commit run -a

Updating the Python dependencies

We use pip-compile to manage our Python dependencies for the x86_64 and ARM64/AArch64 architectures.

In order to generate requirements.txt files for both architectures, you must use a multi-arch capable virtual machine emulator (like QEMU) and enable multi-arch support.

To enable multi-arch support, run the instructions for your container engine and emulator from this table:

Container engine: podman
Emulator: QEMU
Instructions:

podman machine ssh
sudo rpm-ostree install qemu-user-static
sudo systemctl reboot

The specification of what packages we need now lives in the requirements.in and requirements-dev.in files. Use your preferred editor to make the needed changes in those files, then run

make pip-compile

This will spin up a container and run the equivalent of these commands to generate the updated files:

pip-compile requirements.in
pip-compile requirements-dev.in

These commands produce fully populated and pinned requirements.txt and requirements-dev.txt files, containing all transitive dependencies. Due to differences in architecture and Python version between developers' machines, we do not recommend running the pip-compile commands directly.

Use of pyproject.toml

pyproject.toml contains the dependencies used by downstream builds. Changes to any of the top-level dependencies in requirements.in must therefore also be reflected in pyproject.toml. See PEP-518 for details.

Using the VS Code extension

Install the latest Ansible VS Code Extension from the Visual Studio Marketplace.

In order to successfully connect to your local dev environment using the plugin, you need to create the OAuth2 application in Django. Open a shell session in the Django container using

docker exec -it docker-compose_django_1 bash

and then run the Django command to create the application:

  wisdom-manage createapplication \
    --name "Ansible AI Connect for VS Code" \
    --client-id Vu2gClkeR5qUJTUGHoFAePmBznd6RZjDdy5FW2wy \
    --redirect-uris "vscode://redhat.ansible vscodium://redhat.ansible vscode-insiders://redhat.ansible code-oss://redhat.ansible checode://redhat.ansible" \
    public authorization-code

This sets up a matching client ID to the one that is coded directly into the VS Code extension.

Review the screen recording for instructions on configuring the VS Code extension to access your running wisdom service.

Note: If, after running python manage.py runserver, you encounter an AssertionError, use the following command instead: python manage.py runserver --noreload. Alternatively, you can disable django_prometheus by adding INSTALLED_APPS = [i for i in INSTALLED_APPS if i not in ["django_prometheus"]] to the ansible_wisdom/main/settings/development.py file.

Authenticating with the completion API

The wisdom service supports both GitHub and Red Hat authentication. GitHub authentication can be open to all GitHub users, or limited to a specific team. The following directions are for configuring the service to grant access to any GitHub user.

Authenticate with GitHub

To test GitHub authentication locally, you will need to create a new OAuth App at https://github.com/settings/developers. Provide an Authorization callback URL of http://localhost:8000/complete/github/. Export SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET before starting your app; they correspond to the Client ID and Client Secret respectively, both of which are provided after creating the OAuth App. If you are running with the compose development environment described below, put these env vars in a .env file in the tools/docker-compose directory, as shown below.
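
A minimal fragment for that file, with placeholder values standing in for your own Client ID and Client Secret:

SOCIAL_AUTH_GITHUB_KEY=<your-client-id>
SOCIAL_AUTH_GITHUB_SECRET=<your-client-secret>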

Authenticate with Red Hat

To test Red Hat authentication locally, you will need to export three variables before starting the application.

export SOCIAL_AUTH_OIDC_OIDC_ENDPOINT="https://sso.redhat.com/auth/realms/redhat-external"
export SOCIAL_AUTH_OIDC_KEY="ansible-wisdom-staging"
export SOCIAL_AUTH_OIDC_SECRET=secret_value

If you run the application with the Makefile, the first two variables are set automatically; the third must be set manually whether you use the Makefile or run the application directly. To get the secret value, log in to AWS by selecting the it-cloud-aws-ansible-wisdom-staging account, open Secrets Manager, select wisdom, click Retrieve secret value, and copy the value of SOCIAL_AUTH_OIDC_SECRET.

After authentication

Once you start the app, navigate to http://localhost:8000/ to log in. Once authenticated, you will be presented with an authentication token that will be configured in VS Code (coming soon) to access the task prediction API.

Warning: The Django runserver command launches ansible-wisdom-service at http://127.0.0.1:8000. It is important that the host name used in your browser and the one configured in the GitHub OAuth Authorization callback URL are identical. If you use Django's link, the GitHub OAuth Authorization callback URL will need to use 127.0.0.1 in lieu of localhost too.

To get an authentication token, you can run the following command:

podman exec -it docker-compose-django-1 wisdom-manage createtoken --create-user

  • my-test-user will be created for you
  • my-token is the name of the token

Note: depending on your container tooling, the container may have a different name, such as docker-compose_django_1, in which case adjust the command accordingly.

To get an authentication token without logging in via GitHub, you can also:

  1. create an admin user (see the sketch after this list)
  2. navigate to http://localhost:8000/admin/
  3. log in with your superuser credentials
  4. navigate to http://localhost:8000/admin/oauth2_provider/accesstoken/
  5. create a new token
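
For step 1, a minimal sketch, assuming the wisdom-manage wrapper exposes Django's standard createsuperuser command and the container name used above:

podman exec -it docker-compose-django-1 wisdom-manage createsuperuser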

To test the API with no authentication, you can empty out REST_FRAMEWORK.DEFAULT_PERMISSION_CLASSES in base.py.

Enabling postprocessing with ARI

You can enable postprocessing with Ansible Risk Insight (ARI) to improve the completion output by following the two steps below.

  1. Set the environment variable ENABLE_ARI_POSTPROCESS to True

    $ export ENABLE_ARI_POSTPROCESS=True
  2. Prepare the rules and data directories inside the ari/kb directory.

    The rules directory should contain mutation rules for the postprocessing; you can refer to here for some examples.

    The data directory should contain the backend data for ARI. We will host this data somewhere in the future, but currently these files must be placed manually if you want to enable the postprocessing.

    Once the files are ready, the ari/kb directory should look like this.

    ari/kb/
    ├── data
    │   ├── collections
    │   └── indices
    └── rules
        ├── W001_module_name_metrics.py
        ├── W002_module_key_metrics.py
        ├── ...

Then you can build the django image or just run make docker-compose.

Enabling postprocessing with Ansible Lint

You can enable postprocessing with Ansible Lint to improve the completion output by setting the environment variable ENABLE_ANSIBLE_LINT_POSTPROCESS to True, as shown below. Note: Ansible Lint post-processing is available only to commercial users.
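
For example:

export ENABLE_ANSIBLE_LINT_POSTPROCESS=True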

Application metrics as a Prometheus-style endpoint

We expose a Prometheus-style endpoint from which the service configuration and status can be scraped. This builds observability into the service for monitoring and measuring its availability, and provides feedback both for operational needs and for continuous service improvement.
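
As a quick check, you can scrape the endpoint yourself; this sketch assumes the metrics are exposed at the django-prometheus default path /metrics on the development server:

curl http://localhost:8000/metrics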

Swagger UI, ReDoc UI and OpenAPI 3.0 Schema

Swagger UI

Swagger UI is available at http://localhost:8000/api/schema/swagger-ui/ in the development environment only.

  • Note: It is not enabled in the production environment regardless of any settings.

If you want to test APIs using Swagger UI,

  1. Open http://localhost:8000/ and get an authentication token by following the instructions described in the Authenticating with the completion API section.
  2. Open http://localhost:8000/api/schema/swagger-ui/
  3. Click the Authorize button.
  4. Input the authentication token for tokenAuth as-is. You do not need to add any prefix, such as Bearer or Token.
  5. Click Authorize.
  6. Click Close to go back to the Swagger UI page.
  7. Expand a section for the API that you want to try and click Try it out.
  8. Input required parameters (if any) and click Execute.

ReDoc UI

Another OpenAPI UI in the ReDoc format is also available at http://localhost:8000/api/schema/redoc/ in the development environment only.

OpenAPI 3.0 Schema

The static OpenAPI Schema YAML file is stored as /tools/openapi-schema/ansible-ai-connect-service.yaml in this repository.

When you make code changes, please update the static OpenAPI Schema YAML file with the following steps:

  1. Ensure the API metadata (description, version, TAGS used to organize API categories) are accurate; this requires updating the SPECTACULAR_SETTINGS variable in ansible_wisdom/main/settings/development.py.
  2. Run the wisdom service locally.
  3. Run make update-openapi-schema in the project root.
  4. Commit the updated OpenAPI Schema YAML file with your API changes.

A dynamically generated OpenAPI 3.0 Schema YAML file can also be downloaded in either of two ways:

  • Click the /api/schema/ link on Swagger UI, or
  • Click the Download button on ReDoc UI
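
Alternatively, you can fetch the schema directly from a running development instance; this assumes the /api/schema/ endpoint shown in Swagger UI:

curl http://localhost:8000/api/schema/ -o ansible-ai-connect-service.yaml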

Connect to additional model servers

Connect to a local model server

To connect to the Mistral 7B Instruct model running locally on the llama.cpp model server:

  1. Download the Mistral-7b-Instruct llamafile
  2. Make it executable and run it ($YOUR_REAL_IP is your local IP address, NOT 127.0.0.1 or localhost)
    chmod +x ./mistral-7b-instruct-v0.2-Q5_K_M-server.llamafile
    ./mistral-7b-instruct-v0.2-Q5_K_M-server.llamafile --host $YOUR_REAL_IP
  3. Set the appropriate environment variables
    ANSIBLE_AI_MODEL_MESH_API_URL=http://$YOUR_REAL_IP:8080
    ANSIBLE_AI_MODEL_MESH_API_TYPE=llamacpp
    ANSIBLE_AI_MODEL_MESH_MODEL_ID=mistral-7b-instruct-v0.2.Q5_K_M.gguf
    ENABLE_ARI_POSTPROCESS=False
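
Once the llamafile server is running, you can sanity-check it before pointing the service at it; this assumes your llama.cpp build exposes a /health endpoint (adjust if your build differs):

curl http://$YOUR_REAL_IP:8080/health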

Testing

Test cases

Unit tests are based on Python's unittest library and rely on the Django REST framework's APIClient.

Unit-test Guidelines

  • Use reverse():

    Please make use of Django's reverse() function to specify which view you are hitting. If and when we change the path some endpoint is at, the person making the change will appreciate not having to search and replace all of those strings.

    Additionally if you are hitting the same endpoint over a bunch of methods on the same test class, you can always store the results of reverse() in an attribute and make use of that, to reduce the repetition.

Execute Unit Tests and Measure Code Coverage

Preparation

For running unit tests in this repository, you need the backend services (Postgres, Prometheus and Grafana) running. Running them in containers is a handy way to meet this requirement.

For measuring unit test code coverage, install the coverage module, which is included in requirements-dev.txt, following the instructions in the Using pre-commit section.

Use make

The easiest way to run unit tests and measure code coverage is to run

make code-coverage

If the execution succeeds, the HTML coverage report is shown in Chrome.

Running Unit Tests from Command Line or PyCharm

For executing unit tests from the command line, you need to set some environment variables that are read by the service. If you are using PyCharm for development, you can use the EnvFile plugin with the following .env file:

LAUNCHDARKLY_SDK_KEY=flagdata.json
ANSIBLE_AI_DATABASE_HOST=localhost
ANSIBLE_AI_DATABASE_NAME=wisdom
ANSIBLE_AI_DATABASE_PASSWORD=wisdom
ANSIBLE_AI_DATABASE_USER=wisdom
ARI_KB_PATH=../ari/kb/
DJANGO_SETTINGS_MODULE=ansible_wisdom.main.settings.development
ENABLE_ARI_POSTPROCESS=True
PYTHONUNBUFFERED=1
SECRET_KEY=somesecret

Note that this .env file assumes that the Django service is executed in the ansible_wisdom subdirectory as ARI_KB_PATH is defined as ../ari/kb.

It is recommended to use make to run unit tests since it helps to configure default values. If you want to execute only a specific file/class/method, you can use the $WISDOM_TEST variable:

make test
WISDOM_TEST="ansible_wisdom.main.tests.test_views.LogoutTest" make test

Alternatively, if you want to run unit tests manually, export the variables from .env as environment variables. This can be done with the following commands:

set -o allexport
source .env
set +o allexport

After that, it is possible to run tests using the standard Django test mechanism:

cd ansible_wisdom
python3 manage.py test

Measuring Code Coverage from Command Line

If you want to get code coverage by running unit tests from the command line, set the environment variables listed in the section above and run the following commands:

cd ansible_wisdom
coverage run --rcfile=../setup.cfg manage.py test

After the tests complete, run

coverage report

to show the results on the console, or

coverage html

to generate HTML reports under the htmlcov directory.

Alternatively you can run the following command for code coverage:

make code-coverage

Utilities

Backup/restore the database (Podman)

You can back up and restore the database with the following scripts:

  • ./tools/scripts/dump-db.sh
  • ./tools/scripts/restore-db.sh

For example:

./tools/scripts/dump-db.sh /tmp/my-backup.dump
./tools/scripts/restore-db.sh /tmp/my-backup.dump

Troubleshooting

Permission denied errors

If you get a permission denied error when attempting to start the containers, you may need to set the SELinux context on the ansible_wisdom/, prometheus/, grafana/ and ari/ directories:

chcon -t container_file_t -R ansible_wisdom/
chcon -t container_file_t -R prometheus/
chcon -t container_file_t -R grafana/
chcon -t container_file_t -R ari/

Also run chmod against the ari/ directory so that ARI can write temporary data in it:

chmod -R 777 ari/

If your Django container build fails with the following error, you have probably run out of memory running webpack.

STEP 30/46: RUN npm --prefix /tmp/ansible_ai_connect_admin_portal run build

> [email protected] build
> node scripts/build.js

Creating an optimized production build...
npm ERR! path /tmp/ansible_ai_connect_admin_portal
npm ERR! command failed
npm ERR! signal SIGKILL
npm ERR! command sh -c -- node scripts/build.js

You can increase the memory of your existing podman machine by issuing the following:

podman machine set --memory 8192

Recreating the dev containers might be useful:

podman compose -f tools/docker-compose/compose.yaml down

It may be necessary to recreate the dev image if anything has changed in the nginx settings:

podman rmi localhost/docker-compose_django_1

Too many open files error

If you encounter a 'too many open files' error when building containers or starting the development environment, you must increase the maximum number of files a process is allowed to open. On most platforms this is done using the ulimit command.

ulimit -n <a larger value>

For example, the following sets the number of allowed open files to the maximum for your platform and user:

ulimit -n unlimited