This server was generated by the OpenAPI Generator project. Using the OpenAPI spec from a remote server, the server stub was generated; it is driven by the Específico library on top of aiohttp.

Requires CPython 3.10+. We support both 3.10 and 3.11 until a proper 3.11 package is released in Ubuntu 22.04.
To run the server, execute the following from the root directory:

```
pip3 install -r requirements.txt
python3 -m athenian.api --state-db sqlite:// --metadata-db sqlite:// --precomputed-db sqlite:// --persistentdata-db sqlite:// --ui --no-google-kms
```

You should replace `sqlite://` (an in-memory, zero-configuration sample DB stub) with a real SQLAlchemy connection string. `--no-google-kms` disables the second authentication method (API keys), which relies on Google Key Management Service and which you are probably not eager to set up.
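For reference, here are hypothetical examples of connection strings you could pass instead; the hosts, credentials, and database names are placeholders, not real endpoints:

```shell
# PostgreSQL (SQLAlchemy URL format: dialect://user:password@host:port/dbname)
--state-db postgresql://user:password@localhost:5432/state
# an on-disk SQLite file instead of the in-memory stub
--state-db sqlite:///state.sqlite
```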
Then open your browser at http://localhost:8080/v1/ui/

Your OpenAPI definition lives at http://localhost:8080/v1/openapi.json
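Assuming the server is running with the defaults above, you can fetch and pretty-print the definition from the command line:

```shell
curl -s http://localhost:8080/v1/openapi.json | python3 -m json.tool
```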
To launch the integration tests, use pytest:

```
sudo pip install -r requirements-test.txt
pytest
```
Prometheus monitoring: http://localhost:8080/status. Memory usage statistics: http://localhost:8080/memory.
Generating admin invitations:

```
ATHENIAN_INVITATION_KEY=secret python3 -m athenian.api.invite_admin sqlite://
```

Replace `sqlite://` with the actual DB endpoint and `secret` with the actual passphrase.
Running with real Cloud SQL databases:

```
cloud_sql_proxy -instances=athenian-1:us-east1:owl-cloud-sql-2f803bb6=tcp:5432
```

Then pass the proxied endpoints to the server invocation:

```
--metadata-db=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/metadata
--state-db=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/state
--precomputed-db=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/precomputed
--persistentdata-db=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/persistentdata
```
Validating the metadata schema:

```
python3 -m athenian.api.models.metadata postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/metadata
```
Install the linters:

```
pip install -r server/requirements-lint.txt
```

Validate your changes:

```
cd server
tests/run_static_checks.sh  # must be run on a clean git tree
```
Generate a metadata SQL dump suitable for unit tests:

```
python3 server/tests/mdb_transfer.py postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/metadata >test_data.sql
```
Generate sample SQLite metadata and state databases:

```
docker run --rm -e DB_DIR=/io -v $(pwd):/io --entrypoint python3 athenian/api /server/tests/gen_mock_db.py
```

You should have three SQLite files in `$(pwd)`: `mdb-master.sqlite`, `pdb-master.sqlite`, and `sdb-master.sqlite`.
Obtain Auth0 credentials for running locally: webapp docs.
Code must be formatted according to custom rules implemented by Chorny, a formatter derived from Black. The required code style cannot be enforced or checked with a single command; instead, apply two commands in sequence:

```
add-trailing-comma --py36-plus my_module.py
chorny my_module.py
```
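The two-step sequence can be applied to every Python file changed in your working tree with a short loop. This is a convenience sketch, not an official project script; it assumes a git checkout and both tools on `PATH`:

```shell
# format all Python files modified relative to HEAD
git diff --name-only HEAD -- '*.py' | while read -r f; do
  add-trailing-comma --py36-plus "$f"
  chorny "$f"
done
```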
```
export AUTH0_DOMAIN=...
export AUTH0_AUDIENCE=...
export AUTH0_CLIENT_ID=...
export AUTH0_CLIENT_SECRET=...
export OVERRIDE_MDB=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/db_name
cd server
pytest -s
```
Likewise, there are `OVERRIDE_SDB` and `OVERRIDE_PDB`. Do not set any of these overrides to staging or, for God's sake, production endpoints! You will wipe the state and the precomputed objects!
You can also use the services provided by the compose file through the `unittest` Makefile target:

```
DATABASE=postgres VERBOSITY=-vv make unittest
```

To run a specific test:

```
DATABASE=postgres VERBOSITY=-vv TEST=tests/controllers/test_team_controller.py make unittest
```
Alternatively, you can build and run the docker image locally:

```
# Build the API image
make docker-build
# Run the API container
make run-api
```

and open http://localhost:8080/v1/ui

If you want to run your own API image, use instead:

```
# Run the API container
IMAGE=your_api_image:tag make run-api
```

You can erase the API data fixtures created by `make run-api` with:

```
make clean
```
You can also run the API server and all the other services with real data using docker compose. For this setup you'll need the following:

- docker compose,
- an `.env` file set up with all the Auth0 credentials,
- `gcloud` set up locally and authenticated with Athenian's email,
- the credentials for the Cloud SQL databases and the names of the instances.
This works by using the staging data. Currently we have the following databases: `state`, `precomputed`, `metadata`, and `persistentdata`. `state`, `precomputed`, and `persistentdata` are dumped and restored into a local PostgreSQL running in docker compose. Since the `metadata` database is big, it is accessed directly through `cloud_sql_proxy` instead of being dumped and restored (at the cost of performance due to network latency).
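Conceptually, the dump-and-restore step amounts to something like the following. This is a sketch only: the actual logic lives in `/load_data.sh` inside the postgres container, and the host names here are assumptions:

```shell
# hypothetical sketch: copy each small staging DB into the local postgres
for db in state precomputed persistentdata; do
  pg_dump "postgresql://$POSTGRES_SOURCE_USER:$POSTGRES_SOURCE_PASSWORD@cloud_sql_proxy:5432/$db" \
    | psql "postgresql://postgres@localhost:5432/$db"
done
```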
Here are the steps:

- spin up `cloud_sql_proxy` and `postgres`:

```
$ CLOUD_SQL_INSTANCE=<staging db instance> docker-compose up cloud_sql_proxy postgres
```

- load the data into `postgres`:

```
$ docker-compose exec -e POSTGRES_SOURCE_USER=<staging db user> -e POSTGRES_SOURCE_PASSWORD=<staging db password> postgres /load_data.sh
```

- run the API:

```
$ CLOUD_SQL_INSTANCE=<staging db instance> REMOTE_POSTGRES_USER=<staging db user> REMOTE_POSTGRES_PASSWORD=<staging db password> docker-compose up api
```
The API will be accessible at port 8080.

If it stops working, the cause is probably a schema change. In that case, re-pull the API docker image, destroy everything (`docker-compose down -v`), and repeat the previous steps.
The API supports automatic authorization on behalf of the "default user" `ATHENIAN_DEFAULT_USER`. You need to generate a regular account invitation and accept it while being authorized as @gkwillie.
Let's suppose there is a super admin `[email protected]` and a regular user `[email protected]`.

- `[email protected]` logs in as usual.
- Call `/v1/become?id=auth0|[email protected]`.
- A new record appears in the DB that maps `[email protected]` (`God.user_id`) to `[email protected]` (`God.mapped_id`).
- Any subsequent request from `[email protected]` is first handled as normal, so Auth0 checks whether the user is `[email protected]`.
- However, in the end, we check whether `[email protected]` is a god. If they are, we look up the mapped ID in the DB.
- We query the management Auth0 API to fetch the full profile of the mapped user, `[email protected]`.
- We overwrite the `user` field of the request and additionally set the extra attribute `god_id` to indicate that the user is a mapped god.
- API handlers think that the user is `[email protected]`.
- However, `/v1/become` checks `user.god_id`; if it exists, it is used in the DB god check instead of the regular `user.id`. Thus we don't lose the ability to turn into any other user, including the empty string (None, the initial default unmapped state).
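The god-mapping check described above can be sketched in a few lines of Python. This is an illustrative model only: the `God` record, `resolve_user`, and the identifiers are hypothetical stand-ins, not the actual athenian.api implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class God:
    user_id: str              # the authenticated god account
    mapped_id: Optional[str]  # whom the god currently impersonates, if anyone


def resolve_user(auth_user_id: str, gods: dict) -> Tuple[str, Optional[str]]:
    """Return (effective user id, god_id) after the god-mapping check."""
    god = gods.get(auth_user_id)
    if god is None or god.mapped_id is None:
        return auth_user_id, None  # regular user, or an unmapped god
    # requests proceed as the mapped user; god_id remembers the real identity
    return god.mapped_id, auth_user_id


gods = {"auth0|admin": God("auth0|admin", "auth0|regular")}
assert resolve_user("auth0|admin", gods) == ("auth0|regular", "auth0|admin")
assert resolve_user("auth0|someone", gods) == ("auth0|someone", None)
```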
The only way to mark a user as a super admin is to directly hack the DB. You need to know the internal GitHub user integer identifier, for example, from here. Execute the following in the state DB:

```
insert into gods (user_id) values ('github|<<<github id>>>');
```
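For example, with psql against a hypothetical state DB endpoint (the connection string and the GitHub id below are placeholders):

```shell
psql postgresql://user:password@localhost:5432/state \
  -c "insert into gods (user_id) values ('github|12345678');"
```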
You can set the `SENTRY_PROJECT` and `SENTRY_KEY` environment variables to automatically send local server crashes to Sentry. If you're running the API with docker (using `make run-api` from above), stop the server, add the Sentry values to the `.env` file in the root folder of `athenian-api`, and start the server again (with `make run-api`).

`SENTRY_ENV` sets the environment; it should be touched only for real deployments. Optionally, specify `ATHENIAN_DEV_ID` to identify yourself in Sentry reports.
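For example, the relevant `.env` lines might look like this (all values are placeholders):

```shell
SENTRY_PROJECT=my-project
SENTRY_KEY=0123456789abcdef
# only for real deployments:
# SENTRY_ENV=production
ATHENIAN_DEV_ID=your-name
```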
After the first generation, add edited files to `.openapi-generator-ignore` to prevent the generator from overwriting them. Typically:

```
server/controllers/*
test/*
*.txt
```