
API Tests Quick Start Guide

shivahari edited this page Sep 12, 2024 · 43 revisions

API automation hits a sweet spot between unit testing and GUI automation: it is fast, robust, and much easier to maintain than GUI automation. Most web applications these days expose RESTful interfaces, which makes them good candidates for API automation. Qxf2's API framework, based on the Player-Interface pattern, lets you write API automation tests against such RESTful applications. It abstracts over the endpoints in an Endpoints layer, the Interface layer collects all the endpoint abstractions, the Player layer holds the business logic, and the test talks to the Player layer to run validations.
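The layering described above can be sketched in a few lines. This is a simplified, hypothetical illustration of the pattern, not the framework's actual classes; the method bodies are stand-ins.

```python
# Minimal sketch of the Player-Interface pattern (illustrative names only).

class Cars_API_Endpoints:
    "Endpoints layer: abstracts one group of REST endpoints"
    def cars_url(self, base_url):
        return base_url + "/cars"

class API_Interface(Cars_API_Endpoints):
    "Interface layer: collects every endpoint abstraction in one place"
    def __init__(self, base_url):
        self.base_url = base_url

class API_Player:
    "Player layer: holds the business logic the tests exercise"
    def __init__(self, base_url):
        self.api = API_Interface(base_url)
    def get_cars_url(self):
        return self.api.cars_url(self.api.base_url)

# The test talks only to the Player layer
player = API_Player("http://127.0.0.1:5000")
print(player.get_cars_url())  # http://127.0.0.1:5000/cars
```

The test never touches the Endpoints layer directly, so endpoint URL changes stay contained in one file.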


Setup

Follow these steps to set up your test environment. These steps work on Unix-like environments; if you are on Windows, use the equivalent commands.

  1. Create a Python 3.x virtual environment using venv and activate it
# Create a virtual environment
python -m venv <virtual_env_location>
# Activate the virtual environment
source <virtual_env_location>/bin/activate
  2. Install the required Python modules using pip
# Navigate to project root
cd qxf2-page-object-model
# Install modules
python -m pip install -r requirements.txt
  3. Run the test using pytest
# Run API test
python -m pytest tests/test_api_example.py

Important

Please raise an issue if these steps do not work for you


API Automation Framework

We created a RESTful app using Flask to showcase our API Automation Framework; the app exposes /cars, /users & /register endpoints. A pictorial depiction of the Player-Interface pattern applied to this app looks like this:

[Diagram: API Automation Framework]

When run, the test validates the business logic in the API_Player file. The API_Player calls the API_Interface to run supported HTTP methods against the endpoints available in each <endpoint>_API_Endpoints file.

Note

The framework supports creating both synchronous and asynchronous tests.

Directory Structure

The endpoints directory houses the files for the API Automation Framework, and the test files live in the tests directory.

endpoints
|-- API_Interface.py
|-- API_Player.py
|-- Base_API.py
|-- Cars_API_Endpoints.py
|-- Registration_API_Endpoints.py
|-- User_API_Endpoints.py
|-- __init__.py
tests
|-- __init__.py
|-- test_api_async_example.py
|-- test_api_example.py
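Base_API.py typically holds the shared HTTP plumbing that the <endpoint>_API_Endpoints files build on. As a rough, hypothetical sketch of that idea (the build_request name and signature are illustrative, not the framework's actual API), one central helper can prepare requests so endpoint classes only supply URLs and payloads:

```python
# Hypothetical sketch of a Base_API-style helper using only the stdlib.
import json
from urllib import request

class Base_API:
    "Central place that prepares HTTP calls for the endpoint classes"
    def build_request(self, url, method="GET", data=None, headers=None):
        "Build a urllib Request; endpoint classes supply url and payload"
        body = json.dumps(data).encode() if data is not None else None
        hdrs = {"Content-Type": "application/json", **(headers or {})}
        return request.Request(url, data=body, headers=hdrs, method=method)

req = Base_API().build_request("http://127.0.0.1:5000/cars",
                               method="POST",
                               data={"name": "figo", "brand": "ford"})
print(req.get_method(), req.full_url)
```

Keeping the HTTP mechanics in one base class means a change (say, adding retries or default headers) lands in a single file.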

API Automation Test

Synchronous test

Here is a sample synchronous test we have written for the Cars-API app:

"""

API EXAMPLE TEST
1. Add new car - POST request(without url_params)
2. Get all cars - GET request(without url_params)
3. Verify car count
4. Update newly added car details -PUT request
5. Get car details -GET request(with url_params)
6. Register car - POST request(with url_params)
7. Get list of registered cars -GET
8. Verify registered cars count
9. Delete newly added car -DELETE request
"""

import os
import sys
import pytest
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from endpoints.API_Player import API_Player
from conf import api_example_conf as conf
from conf import base_url_conf
from conftest import interactivemode_flag

@pytest.mark.API
def test_api_example(test_api_obj):
    "Run api test"
    try:
        expected_pass = 0
        actual_pass = -1

        # set authentication details
        username = conf.user_name
        password = conf.password
        auth_details = test_api_obj.set_auth_details(username, password)

        initial_car_count = test_api_obj.get_car_count(auth_details)


        # add cars
        car_details = conf.car_details
        result_flag = test_api_obj.add_car(car_details=car_details,
                                            auth_details=auth_details)
        test_api_obj.log_result(result_flag,
                                positive='Successfully added new car with details %s' % car_details,
                                negative='Could not add new car with details %s' % car_details)



        # Get Cars and verify if new car is added
        result_flag = test_api_obj.get_cars(auth_details)

        result_flag = test_api_obj.verify_car_count(expected_count=initial_car_count+1,
                                                auth_details=auth_details)
        test_api_obj.log_result(result_flag,
                            positive='Total car count matches expected count',
                            negative='Total car count does not match expected count')

        # Update car
        update_car = conf.update_car
        update_car_name = conf.car_name_2
        result_flag = test_api_obj.update_car(auth_details=auth_details,
                                            car_name=update_car_name,
                                            car_details=update_car)
        test_api_obj.log_result(result_flag,
                            positive='Successfully updated car: %s' % update_car_name,
                            negative='Could not update car: %s' % update_car_name)

        # Get one car details
        new_car = conf.car_name_1
        brand = conf.brand
        result_flag = test_api_obj.get_car(auth_details=auth_details,
                                        car_name=new_car,
                                        brand=brand)
        test_api_obj.log_result(result_flag,
                            positive='Successfully fetched car details of car : %s' % new_car,
                            negative='Could not fetch car details of car: %s' % new_car)

        # Register car
        customer_details = conf.customer_details
        result_flag = test_api_obj.register_car(auth_details=auth_details,
                                            car_name=new_car,
                                            brand=brand)
        test_api_obj.log_result(result_flag,
                            positive='Successfully registered new car %s with customer details %s' % (new_car, customer_details),
                            negative='Could not register new car %s with customer details %s' % (new_car, customer_details))

        #Get Registered cars and check count
        result_flag = test_api_obj.get_registered_cars(auth_details)
        register_car_count = test_api_obj.get_regi_car_count(auth_details)

        result_flag = test_api_obj.verify_registration_count(expected_count=register_car_count,
                                                            auth_details=auth_details)
        test_api_obj.log_result(result_flag,
                            positive='Registered count matches expected value',
                            negative='Registered car count does not match expected value')

        # Remove newly added car
        result_flag = test_api_obj.remove_car(auth_details=auth_details,
                                            car_name=update_car_name)
        test_api_obj.log_result(result_flag,
                            positive='Successfully deleted car %s' % update_car_name,
                            negative='Could not delete car %s' % update_car_name)

        # validate if car is deleted
        result_flag = test_api_obj.verify_car_count(expected_count=initial_car_count,
                                                auth_details=auth_details)
        test_api_obj.log_result(result_flag,
                            positive='Total car count matches expected count after deleting one car',
                            negative='Total car count does not match expected count after deleting one car')

        # Deleting registered car
        test_api_obj.delete_registered_car(auth_details)

        # test for validation http error 403
        result = test_api_obj.check_validation_error(auth_details)

        test_api_obj.log_result(not result['result_flag'],
                            positive=result['msg'],
                            negative=result['msg'])

        # test for validation http error 401 when no authentication
        auth_details = None
        result = test_api_obj.check_validation_error(auth_details)
        test_api_obj.log_result(not result['result_flag'],
                            positive=result['msg'],
                            negative=result['msg'])

        # test for validation http error 401 for invalid authentication
        # set invalid authentication details
        username = conf.invalid_user_name
        password = conf.invalid_password
        auth_details = test_api_obj.set_auth_details(username, password)
        result = test_api_obj.check_validation_error(auth_details)
        test_api_obj.log_result(not result['result_flag'],
                            positive=result['msg'],
                            negative=result['msg'])

        # write out test summary
        expected_pass = test_api_obj.total
        actual_pass = test_api_obj.passed
        test_api_obj.write_test_summary()

    except Exception as e:
        print(e)
        if base_url_conf.api_base_url == 'http://127.0.0.1:5000':
            test_api_obj.write("Please run the test against https://cars-app.qxf2.com/ by changing the api_base_url in base_url_conf.py")
            test_api_obj.write("OR")
            test_api_obj.write("Clone the repo 'https://github.com/qxf2/cars-api.git' and run the cars_app in order to run the test against your system")

        else:
            test_api_obj.write("Exception when trying to run test:%s" % __file__)
            test_api_obj.write("Python says:%s" % str(e))

    # Assertion
    assert expected_pass == actual_pass, "Test failed: %s" % __file__

The test file is named test_<scenario>.py and the function is named test_<scenario>; pytest needs these names to collect and run the test. A pytest marker tags the test: this sample is marked API, so running python -m pytest -m API executes this test along with any other tests marked API. The module doc-string lists the steps validated, and running python -m pytest -m API --collect-only displays useful information about the test without executing it. Finally, the assert statement at the end of the test validates that all steps passed.
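The expected_pass/actual_pass bookkeeping in the test above follows a simple counting pattern. Here is a minimal, hypothetical mini version of it (ResultCounter is an illustrative stand-in, not the framework's actual helper):

```python
# Every check is logged and counted; the final assert compares the totals.
class ResultCounter:
    def __init__(self):
        self.total = 0
        self.passed = 0

    def log_result(self, flag, positive, negative):
        "Count one check and print the matching message"
        self.total += 1
        if flag:
            self.passed += 1
        print(positive if flag else negative)

checks = ResultCounter()
checks.log_result(True, "car added", "car not added")
checks.log_result(2 == 1 + 1, "count matches", "count mismatch")
assert checks.total == checks.passed, "Test failed"
```

Because every validation is funneled through log_result, the final assertion catches any failed step even though individual checks never raise.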

Asynchronous test

An asynchronous test is a suitable choice when you want to run a few independent validations to test the reliability and performance of the RESTful application. Here is a sample asynchronous test:

"""
API Async EXAMPLE TEST
This test collects tasks using asyncio.TaskGroup object \
and runs these scenarios asynchronously:
1. Get the list of cars
2. Add a new car
3. Get a specific car from the cars list
4. Get the registered cars
"""

import asyncio
import os
import sys
import pytest
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from conf import api_example_conf

@pytest.mark.asyncio
# Skip running the test if Python version < 3.11
@pytest.mark.skipif(sys.version_info < (3,11),
                    reason="requires Python3.11 or higher")
async def test_api_async_example(test_api_obj):
    "Run api test"
    try:
        expected_pass = 0
        actual_pass = -1

        # set authentication details
        username = api_example_conf.user_name
        password = api_example_conf.password
        auth_details = test_api_obj.set_auth_details(username, password)

        # Get an existing car detail from conf
        existing_car = api_example_conf.car_name_1
        brand = api_example_conf.brand
        # Get a new car detail from conf
        car_details = api_example_conf.car_details

        async with asyncio.TaskGroup() as group:
            get_cars = group.create_task(test_api_obj.async_get_cars(auth_details))
            add_new_car = group.create_task(test_api_obj.async_add_car(car_details=car_details,
                                                                       auth_details=auth_details))
            get_car = group.create_task(test_api_obj.async_get_car(auth_details=auth_details,
                                                                   car_name=existing_car,
                                                                   brand=brand))
            get_reg_cars = group.create_task(test_api_obj.async_get_registered_cars(auth_details=auth_details))

        test_api_obj.log_result(get_cars.result(),
                                positive="Successfully obtained the list of cars",
                                negative="Failed to get the cars")
        test_api_obj.log_result(add_new_car.result(),
                                positive=f"Successfully added new car {car_details}",
                                negative="Failed to add a new car")
        test_api_obj.log_result(get_car.result(),
                                positive=f"Successfully obtained a car - {existing_car}",
                                negative="Failed to get the car")
        test_api_obj.log_result(get_reg_cars.result(),
                                positive="Successfully obtained registered cars",
                                negative="Failed to get registered cars")
        # write out test summary
        expected_pass = test_api_obj.total
        actual_pass = test_api_obj.passed
        test_api_obj.write_test_summary()
        # Assertion
        assert expected_pass == actual_pass,f"Test failed: {__file__}"

    except Exception as e:
        test_api_obj.write(f"Exception when trying to run test: {__file__}")
        test_api_obj.write(f"Python says: {str(e)}")

The test requires the pytest-asyncio plugin and must be marked with the asyncio marker. It collects validation tasks using asyncio.TaskGroup and executes them concurrently, so the order in which tasks complete is not guaranteed. For the Cars-API application, a scenario where a car is added and the addition is then verified is not a valid async test, because the two tasks depend on each other. For more information on asynchronous API automation tests, refer to: Asynchronous API Automation testing using Qxf2’s Framework